Something New (Again) – Top Project Management Trends 2017

Atif Qureshi at Tasque, whose effort I learned of via Dave Gordon’s blog, went out to LinkedIn’s Project Management Community to ask for the latest trends in project management.  You can find the raw responses to his inquiry at his blog here.  What is interesting is that some of these latest trends are much like the old trends, which, given continuity, makes sense.  But it is instructive to summarize the ones that came up most often.  Note that while Mr. Qureshi was looking for ten trends, and taken together he lists more than ten, there is a lot of overlap.  In total the major issues seem to fall into the five areas listed below.

a.  Agile, its hybrids, and its practical application.

It should not surprise anyone that the latest buzzword is Agile.  But what exactly is it in its present incarnation?  There is a great deal of rising criticism, much of it valid, that it is a way for developers and software PMs to avoid accountability. Anyone reading Glen Alleman’s Herding Cats blog is aware of the issues regarding #NoEstimates advocates.  As a result, there are a number of hybrid implementations of Agile that have Agile purists howling and non-purists adapting as they always do.  From my observations, however, there is an Ur-Agile out there common to all good implementations, and I wrote about it previously in this blog back in 2015.  Given the time that has passed, I think it useful to repeat it here.

The best articulation of Agile that I have read recently comes from Neil Killick, with whom I have expressed some disagreement on the #NoEstimates debate and the more cultish aspects of Agile in past posts, but who published an excellent post back in July (2015) entitled “12 questions to find out: Are you doing Agile Software Development?”

Here are Neil’s questions (a small code sketch of the flow follows the list):

  1. Do you want to do Agile Software Development? Yes – go to 2. No – GOODBYE.
  2. Is your team regularly reflecting on how to improve? Yes – go to 3. No – regularly meet with your team to reflect on how to improve, go to 2.
  3. Can you deliver shippable software frequently, at least every 2 weeks? Yes – go to 4. No – remove impediments to delivering a shippable increment every 2 weeks, go to 3.
  4. Do you work daily with your customer? Yes – go to 5. No – start working daily with your customer, go to 4.
  5. Do you consistently satisfy your customer? Yes – go to 6. No – find out why your customer isn’t happy, fix it, go to 5.
  6. Do you feel motivated? Yes – go to 7. No – work for someone who trusts and supports you, go to 2.
  7. Do you talk with your team and stakeholders every day? Yes – go to 8. No – start talking with your team and stakeholders every day, go to 7.
  8. Do you primarily measure progress with working software? Yes – go to 9. No – start measuring progress with working software, go to 8.
  9. Can you maintain pace of development indefinitely? Yes – go to 10. No – take on fewer things in next iteration, go to 9.
  10. Are you paying continuous attention to technical excellence and good design? Yes – go to 11. No – start paying continuous attention to technical excellence and good design, go to 10.
  11. Are you keeping things simple and maximising the amount of work not done? Yes – go to 12. No – start keeping things simple and writing as little code as possible to satisfy the customer, go to 11.
  12. Is your team self-organising? Yes – YOU’RE DOING AGILE SOFTWARE DEVELOPMENT!! No – don’t assign tasks to people and let the team figure out together how best to satisfy the customer, go to 12.
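
For readers who want the flow in executable form, here is a minimal sketch in Python (my own encoding for illustration, not anything Neil published as code) that treats the twelve questions as a decision table: a “yes” advances to the next step, while a “no” returns the corrective action and the step to resume from.

```python
# Neil Killick's 12-question flow encoded as a decision table.
# Each entry: (question, advice_if_no, step_to_resume_from_if_no).
# A resume step of None means a "no" ends the flow (step 1's GOODBYE).
STEPS = {
    1:  ("Do you want to do Agile Software Development?", "GOODBYE", None),
    2:  ("Is your team regularly reflecting on how to improve?",
         "regularly meet with your team to reflect on how to improve", 2),
    3:  ("Can you deliver shippable software at least every 2 weeks?",
         "remove impediments to delivering a shippable increment every 2 weeks", 3),
    4:  ("Do you work daily with your customer?",
         "start working daily with your customer", 4),
    5:  ("Do you consistently satisfy your customer?",
         "find out why your customer isn't happy, and fix it", 5),
    6:  ("Do you feel motivated?",
         "work for someone who trusts and supports you", 2),
    7:  ("Do you talk with your team and stakeholders every day?",
         "start talking with your team and stakeholders every day", 7),
    8:  ("Do you primarily measure progress with working software?",
         "start measuring progress with working software", 8),
    9:  ("Can you maintain pace of development indefinitely?",
         "take on fewer things in the next iteration", 9),
    10: ("Are you paying continuous attention to technical excellence and good design?",
         "start paying continuous attention to technical excellence and good design", 10),
    11: ("Are you keeping things simple and maximising the amount of work not done?",
         "keep things simple; write as little code as possible to satisfy the customer", 11),
    12: ("Is your team self-organising?",
         "don't assign tasks; let the team figure out how best to satisfy the customer", 12),
}

def assess(answers):
    """answers maps step number -> True/False.  Returns the first corrective
    action needed, or the success message if every answer is 'yes'."""
    for step in range(1, 13):
        question, advice, resume = STEPS[step]
        if not answers.get(step, False):
            if resume is None:
                return advice
            return f"Step {step}: {advice} (then go back to step {resume})"
    return "YOU'RE DOING AGILE SOFTWARE DEVELOPMENT!"

print(assess({n: True for n in range(1, 13)}))
```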

Note that even in software development based on Agile you are still “provid(ing) value by independently developing IP based on customer requirements.”  Only you are doing it faster and more effectively.

With the possible exception of the “self-organizing” meme, I find that items 1 through 11 are valid ways of identifying Agile.  Note, though, that the list says nothing about establishing closed-loop analysis of progress, nothing about estimates or the need to monitor progress, especially on complex projects.  As a matter of fact, one of the biggest impediments noted elsewhere in industry is the inability of Agile to scale.  This limitation exists, in its most simplistic form, because Agile is fine in the development of well-defined, limited COTS applications and smartphone applications.  It doesn’t work so well when one is pushing technology while developing software, especially for a complex project involving hundreds of stakeholders.  One other note: the unmentioned emphasis in Agile is technical performance measurement, since progress is based on satisfying customer requirements.  TPM, when placed in the context of a world of limited resources, is the best measure of all.
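
To make the TPM point concrete, here is one simple way (a sketch with hypothetical parameters, values, and weights, not a prescribed methodology) of rolling several technical performance measures into a single weighted index:

```python
# Hypothetical technical parameters for illustration only:
# (name, achieved value, threshold = minimum acceptable, objective = goal, weight)
params = [
    ("range_km",      420.0,  400.0,  500.0, 0.5),
    ("weight_kg",    1180.0, 1200.0, 1000.0, 0.3),   # lower is better
    ("throughput",    850.0,  800.0, 1000.0, 0.2),
]

def achievement(achieved, threshold, objective):
    """Fraction of the threshold-to-objective span achieved, clamped to [0, 1].
    Works whether the objective lies above or below the threshold."""
    span = objective - threshold
    return max(0.0, min(1.0, (achieved - threshold) / span))

tpm_index = sum(w * achievement(a, t, o) for _, a, t, o, w in params)
print(f"Weighted TPM index: {tpm_index:.2f}  (1.0 = all objectives met)")
```

Placed against the resources consumed to date, an index like this ties progress to what the customer actually asked for rather than to activity.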

b.  The integration of new technology into PM and how to upload the existing PM corporate knowledge into that technology.

This is two sides of the same coin.  There is always debate about the introduction of new technologies within an organization, and this debate places in stark contrast the differences between risk aversion and risk management.

Project managers, especially in the complex project management environment of aerospace & defense, tend, in general, to be a hardy lot.  Consisting mostly of engineers, they love to push the envelope on technology development.  But there is also a stripe of engineer among them that does not apply this same approach of measured risk to their project management and business analysis systems.  When it comes to tracking progress, resource management, programmatic risk, and accountability, they frequently enter risk-aversion mode, believing that the fewer eyes on what they do, the more leeway they have in achieving the technical milestones.  No doubt this is true in a world of unlimited time and resources, but that is not the world in which we live.

Aside from sub-optimized self-interest, the seeds of risk aversion come from the fact that many of the disciplines developed around performance management originated in the financial management community, and many organizations still come at project management efforts from the perspective of the CFO organization.  Such a rice-bowl mentality, however, works against both the project and the organization.

Much has been made of the wall of honor for those CIA officers that have given their lives for their country, which lies to the right of the Langley headquarters entrance.  What has not gotten as much publicity is the verse inscribed on the wall to the left:

“And ye shall know the truth and the truth shall make you free.”

      John VIII-XXXII

In many ways those of us in the project management community apply this creed, to the best of our ability, to our day-to-day jobs, and it lies at the basis of all management improvement methods, from Deming’s concept of continuous process improvement through the application of Six Sigma and others.  What is not part of this concept is applying improvement only when a customer demands it, after having asked politely for some time.  The more information we have about what is happening in our systems, the better armed the project manager and the project team are in applying the expertise that qualified them for their jobs to begin with.

When it comes to continual process improvement, one does not need to wait to apply those technologies that will improve project management systems.  As a senior manager (and well-respected engineer) told me when I worked in the Navy: “if my program managers are doing their job virtually every element should be in the yellow, for only then do I know that they are managing risk and pushing the technology.”

But there are some practical issues that all managers must consider when managing the risks in introducing new technology and determining how to bring that technology into existing business systems without completely disrupting the organization.  This takes good project management practices that, for information systems, include good initial systems analysis, identification of those small portions of the organization ripe for initial entry in piloting, and a plan of data normalization and rationalization so that corporate knowledge is not lost.  Adopting more open systems that militate against proprietary barriers also helps.
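
As a small illustration of the data normalization piece (the tool names and field mappings below are invented for the example), the essential move is to map each legacy tool’s fields onto one canonical schema while keeping unmapped fields, so no corporate knowledge is dropped in the migration:

```python
# Per-source field mappings; each legacy tool names the same facts differently.
SOURCE_MAPS = {
    "legacy_scheduler": {"UID": "task_id", "Name": "task_name",
                         "Start_Date": "start", "End_Date": "finish"},
    "legacy_cost_tool": {"wbs": "task_id", "title": "task_name",
                         "bcws": "budget"},
}

def normalize(source, record):
    """Rename a source record's fields to the canonical schema, keeping
    unknown fields under an 'extras' key so no corporate data is dropped."""
    mapping = SOURCE_MAPS[source]
    out, extras = {}, {}
    for key, value in record.items():
        target = mapping.get(key)
        (out if target else extras)[target or key] = value
    out["extras"] = extras
    return out

print(normalize("legacy_scheduler",
                {"UID": "1.2.3", "Name": "Integrate radar",
                 "Start_Date": "2017-03-01", "End_Date": "2017-04-15",
                 "Critical": True}))
```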

c.  The intersection of project management and business analysis and its effects.

As data becomes more transparent through methods of normalization and rationalization–and the focus shifts from “tools” to the knowledge that can be derived from data–the clear separation that delineated project management from business analysis in the line-and-staff organization becomes further blurred.  Even within the project management discipline, the separation in categorization of schedule analysts, cost analysts, and financial analysts is becoming an impediment to fully exploiting the advantages of looking at all of the data that is captured and that affects project performance.

d.  The manner of handling Big Data, business intelligence, and analytics that result.

Software technologies are rapidly developing that break the barriers of self-contained applications: applications that perform one or two focused operations, or a highly restricted group of operations, providing functionality for a single or limited set of business processes through hard-coded high level languages.  These new technologies, as stated in the previous section, allow users to focus on access to data, making the interface between the user and the application highly adaptable and customizable.  As these technologies are deployed against larger datasets that allow for the integration of data across traditional line-and-staff organizations, they will provide insight that garners businesses competitive advantages and productivity gains over their contemporaries.  Because of these technologies, highly labor-intensive data mining and data engineering projects that were thought to be necessary to access Big Data will find themselves displaced as their cost and lack of agility are exposed.  Internal or contracted-out custom software development along these same lines will also be displaced, just as COTS displaced the high overhead associated with such efforts in other areas.  This is because hardware and process developments are constantly shifting the definition of “Big Data” toward larger and larger datasets, to the point where the term will soon have no practical meaning.

e.  The role of the SME given all of the above.

The result of the trends regarding technology will be to put the subject matter expert back into the driver’s seat.  Given adaptive technology and data–and a redefinition of the analyst’s role to a more expansive one–we will find that the ability to meet the needs of functionality and the user experience is almost immediate.  Thus, when it comes to business and project management systems, while these developments make real the characteristics of Agile that I outlined above, they also reveal the weakness of its applicability to more complex and technical projects.  It is technology that will reduce the risk associated with contract negotiation, processes, documentation, and planning.  Walking away from these necessary components of project management obfuscates and avoids the hard facts that oftentimes must be addressed.

One final item that Mr. Qureshi mentions in a follow-up post–and which I have seen elsewhere in similar forums–concerns operational security.  In deploying new technologies, a gatekeeper must determine whether the technology will open the organization’s corporate knowledge to compromise.  Given the greater and more integrated information and knowledge garnered by new technology, it is incumbent on good managers to ensure these improvements do not translate into undermining the organization.

The Future — Data Focus vs. “Tools” Focus

The title in this case is from the Leonard Cohen song.

Over the last few months I’ve come across this issue quite a bit, and it goes to the heart of where software technology is leading us.  The basic question can be boiled down to whether software should be thought of as a set of “tools” or as an overarching solution that can handle data in the way the organization requires.  It is a fundamental question because what we call Big Data–despite all of the hoopla–is really a relative term that changes with hardware, storage, and software scalability.  What was Big Data in 1997 is not Big Data in 2016.
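
The arithmetic behind that claim is simple.  If capability doubles roughly every 18 to 24 months (the common reading of Moore’s Law), then between 1997 and 2016 the same budget buys several orders of magnitude more capacity:

```python
# Back-of-the-envelope arithmetic: "Big Data" is relative to the hardware era.
years = 2016 - 1997
for doubling_months in (18, 24):
    doublings = years * 12 / doubling_months
    print(f"Doubling every {doubling_months} months: "
          f"~2^{doublings:.1f} = {2**doublings:,.0f}x capability growth")
```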

As Moore’s Law expands scalability at lower cost, organizations and SMEs are finding that the dedicated software tools at hand are insufficient to leverage the additional information that can be derived from their data.  The reason for this is simple.  A COTS tool publisher will determine the functionality required based on a structured set of data and will code to that requirement.  The timeframe is usually extended and the approach highly structured.  There are very good reasons for this approach in particular industries where structure is necessary and the environment is fairly stable.  The list of industries that fall into this category is rapidly becoming smaller.  Thus, there is a large gap that must be filled by workarounds, custom code, and suboptimized use of Excel.  Organizations and people cannot wait until the self-styled software SMEs get around to providing that upgrade two years from now so that people can do their jobs.

Thus, the focus must shift to data and to the software technologies that maximize its immediate exploitation for business purposes to meet organizational needs.  The key here is the rise of Fourth Generation applications that leverage object-oriented programming languages to most closely replicate the flexibility of open source.  What this means is that, in lieu of buying a set of “tools”–each focused on solving a specific problem and stitched together by a common platform or through data transfer–software that deals with both data and UI in an agnostic fashion is now available.
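
A toy sketch of what “agnostic” means in practice (the configuration format here is invented for illustration): the rendering code knows nothing about the schema, so the same function serves a schedule view today and a cost view tomorrow, and adding a column is a configuration change rather than a software release:

```python
def render(dataset, view_config):
    """Render any list-of-dicts dataset using a declarative view config;
    no field names are fixed at development time."""
    columns = view_config["columns"]
    print(" | ".join(col["label"] for col in columns))
    for row in dataset:
        print(" | ".join(str(row.get(col["field"], "")) for col in columns))

schedule_view = {"columns": [{"field": "task", "label": "Task"},
                             {"field": "finish", "label": "Finish"}]}
render([{"task": "Integrate radar", "finish": "2016-04-15"},
        {"task": "Flight test", "finish": "2016-06-01"}], schedule_view)
```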

The availability of flexible Fourth Generation software is of great concern, as one would imagine, to incumbents who have built their business model on defending territory based on a set of artifacts provided in the software.  Oftentimes these artifacts are nothing more than automatically filled-in forms that previously were filled in manually.  That model was fine during the first and second waves of automation in the 1980s and 1990s, but such capabilities are trivial in 2016 given software focused on data that can be quickly adapted to provide functionality as needed.  This development also eliminates and makes trivial those old checklists that IT shops used to send out as a lazy way of assessing the relative capabilities of software to simplify the competitive range.

Tools, by definition, restrict themselves to a subset of data in order to provide a specific set of capabilities.  Software that expands to include any set of data, and allows that data to be displayed and processed as necessary through user configuration, adapts itself more quickly and effectively to organizational needs.  It also tends to eliminate the need for multiple “best-of-breed” toolset approaches that are not the best of any breed and, more importantly, goes beyond the limited functionality and ways of deriving importance from data found in structured tools.  The reason for this is that the data drives what is possible and important, rather than tools imposing a well-trod interpretation of importance based on a limited set of data stored in a proprietary format.

An important effect of Fourth Generation software that provides flexibility in UI and functionality driven by the user is that it puts the domain SME back in the driver’s seat.  This is an important development.  For too long SMEs have had to content themselves with recommending and advocating for functionality in software while waiting for the market (software publishers) to respond.  Essential business functionality with limited market commonality often required that organizations either wait until the remainder of the market drove software publishers to meet their needs, finance expensive custom development (either organic or contracted), or fill gaps with suboptimized and ad hoc internal solutions.  With software that adapts its UI and functionality based on any data that can be accessed, using simple configuration capabilities, SMEs can fill these gaps with a consistent solution that maintains data fidelity and aids in the capture and sustainability of corporate knowledge.

Furthermore, for all of the talk about Agile software techniques, one cannot implement Agile using software languages and approaches that were designed in an earlier age and that resist optimization of the method.  Fourth Generation software lends itself most effectively to Agile, since configuration using a simple object-oriented language gets us to the ideal–releasable solutions at the end of a two-week sprint, without a reliance on single points of failure.  No doubt there are developers out there making good money who may challenge this assertion, but they are the exceptions that prove the rule.  An organization should be able to optimize the pool of contributors to solution development and rollout in supporting essential business processes.  Otherwise Agile is just a pretext to overcome suboptimized developmental approaches, software languages, and the self-interest of developers who can’t plan or produce a releasable product in a timely manner within budgetary constraints.

In the end the change in mindset from tools to data goes to the issue of who owns the data: the organization that creates and utilizes the data (the customer), or the proprietary software tool publishers?  Clearly the economics will win out in favor of the customer.  It is time to displace “tools” thinking.

Note:  I’ve revised the title of the blog for clarity.

New Directions — Fourth Generation apps, Agile, and the New Paradigm

The world is moving forward and Moore’s Law is accelerating in interesting ways on the technology side, which opens new opportunities, especially in software.  In the past I have spoken of the flexibility of Fourth Generation software, that is, software that doesn’t rely on structured hardcoding, but instead, is focused on the data to deliver information to the user in more interesting and essential ways.  I work in this area for my day job, and so using such technology has tipped over more than a few rice bowls.

The response from entrenched incumbents, and from those in the industry using similar technological approaches focused on “tools” capabilities, has been to declare vices as virtues.  Hard-coded applications that require long-term development and are built on proprietary file and data structures are, they declare, the right way to do things.  “We provide value by independently developing IP based on customer requirements,” they declare.  It sounds very reasonable, doesn’t it?  Only one problem: you have to wait–oh–a year or two to get that chart or graph you need, to refresh that user interface, or to expand functionality, and you will almost never be able to leverage the latest capabilities afforded by the doubling of computing capability every 12 to 24 months.  The industry is filled with outmoded, poorly supported, and obsolete “tools” already.  Guess it’s time for a new one.

The motivation behind such assertions, of course, is to slow things down.  Not possessing the underlying technology to provide more, better, and more powerful functionality to the customer more quickly and flexibly based on open systems principles–that is, dealing with data in an agnostic manner–they use their position to try to hold back disruptive entrants that threaten to leave them far behind.  This is done, especially in the bureaucratic complexities of A&D and DoD project management, through professional organizations such as the NDIA that are used as thinly disguised lobbying opportunities by software suppliers, or by appeals to contracting rules that they hope will undermine the introduction of new technologies.

All of these efforts, of course, are blowing into the wind.  The economics of the new technologies is too compelling for anyone to last long in their job by partying like it’s still 1997 under the first wave of software solutions targeted at data silos and stove-piped specialization.

The new paradigm is built on Agile and those technologies that facilitate that approach.  In case my regular readers think that I have become one of the Cultists, bowing before the Manifesto That May Not Be Named, let me assure you that is not the case.  The best articulation of Agile that I have read recently comes from Neil Killick, with whom I have expressed some disagreement on the #NoEstimates debate and the more cultish aspects of Agile in past posts, but who published an excellent post back in July entitled “12 questions to find out: Are you doing Agile Software Development?”

Here are Neil’s questions:

  1. Do you want to do Agile Software Development? Yes – go to 2. No – GOODBYE.
  2. Is your team regularly reflecting on how to improve? Yes – go to 3. No – regularly meet with your team to reflect on how to improve, go to 2.
  3. Can you deliver shippable software frequently, at least every 2 weeks? Yes – go to 4. No – remove impediments to delivering a shippable increment every 2 weeks, go to 3.
  4. Do you work daily with your customer? Yes – go to 5. No – start working daily with your customer, go to 4.
  5. Do you consistently satisfy your customer? Yes – go to 6. No – find out why your customer isn’t happy, fix it, go to 5.
  6. Do you feel motivated? Yes – go to 7. No – work for someone who trusts and supports you, go to 2.
  7. Do you talk with your team and stakeholders every day? Yes – go to 8. No – start talking with your team and stakeholders every day, go to 7.
  8. Do you primarily measure progress with working software? Yes – go to 9. No – start measuring progress with working software, go to 8.
  9. Can you maintain pace of development indefinitely? Yes – go to 10. No – take on fewer things in next iteration, go to 9.
  10. Are you paying continuous attention to technical excellence and good design? Yes – go to 11. No – start paying continuous attention to technical excellence and good design, go to 10.
  11. Are you keeping things simple and maximising the amount of work not done? Yes – go to 12. No – start keeping things simple and writing as little code as possible to satisfy the customer, go to 11.
  12. Is your team self-organising? Yes – YOU’RE DOING AGILE SOFTWARE DEVELOPMENT!! No – don’t assign tasks to people and let the team figure out together how best to satisfy the customer, go to 12.

Note that even in software development based on Agile you are still “provid(ing) value by independently developing IP based on customer requirements.”  Only you are doing it faster and more effectively.

Now imagine a software technology that is agnostic to the source of data; that does not require a staff of data scientists, development personnel, and SMEs to care for and feed it; that allows multiple solutions to be released from the same technology; that allows for integration and cross-data convergence to gain new insights based on Knowledge Discovery in Databases (KDD) principles; and that provides shippable, incremental solutions every two weeks, or as often as can be absorbed by the organization, yet responsively enough to meet multiple needs of the organization at any one time.

This is what is known as disruptive value.  There is no stopping this train.  It is the new paradigm and it’s time to take advantage of the powerful improvements in productivity, organizational effectiveness, and predictive capabilities that it provides.  This is the power of technology combined with a new approach to “small” big data, or structured data, that is effectively normalized and rationalized to the point of breaking down proprietary barriers, hewing to the true meaning of making data–and therefore information–both open and accessible.

Furthermore, such solutions using the same data streams produced by the measurement of work can also be used to evaluate organizational and systems compliance (where necessary), and effectiveness.  Combined with an effective feedback mechanism, data and technology drive organizational improvement and change.  There is no need for another tool to layer with the multiplicity of others, with its attendant specialized training, maintenance, and dead-end proprietary idiosyncrasies.  On the contrary, such an approach is an impediment to data maximization and value.

Vices are still vices even in new clothing.  Time to come to the side of the virtues.

Brother Can You (Para)digm? — Four of the Latest Trends in Project Management

At the beginning of the year we are greeted with the annual list of hottest “project management trends” prognostications.  We are now three months into the year and I think it worthwhile to note the latest developments that have come up in project management meetings, conferences, and in the field.  Some of these are in alignment with what you may have seen in some earlier articles, but these are four that I find to be most significant thus far, and there may be a couple of surprises for you here.

a.  Agile and Waterfall continue to duke it out.  As the term Agile is adapted and modified to real world situations, the cult purists become shriller in attempting to enforce the Manifesto that may not be named.  In all seriousness, it is not as if most of these methods had not been used previously–and many of the methods, like scrum, also have their roots in Waterfall and earlier approaches.  A great on-line overview and book on the elements of scrum can be found at Agile Learning Labs.  But there is a wide body of knowledge out there concerning social and organizational behavior that is useful in applying what works and what doesn’t.  For example, the observational science behind span of control, team building, the structure of the team in supporting organizational effectiveness, and the use of sprints in avoiding the perpetual death-spiral of adding requirements without defining “done”, are best practices that identify successful teams (depending on how you define success–keeping in mind that a successful team that produces the product often still fails as a going concern, and thus falls into obscurity).

All that being said, if you want to structure these best practices into a cohesive methodology, call it Agile, Waterfall, or Harry, and can make money at it while helping people succeed in a healthy work environment, more power to you.  In IT, however, it is this last point that makes this particular controversy seem like we’ve been here before.  When woo-woo concepts like #NoEstimates and self-organization are thrown about, the very useful and empirical nature of the enterprise descends into magical thinking and ideology.  The mathematics of unsuccessful IT projects has not changed significantly since the shift to Agile.  From what one can discern from the so-called studies on the market, which are mostly anecdotal or based on unscientific surveys, somewhere north of 50% of IT projects fail, with failure defined as being behind schedule, over cost, or failing to meet functionality requirements.

Given this, Agile seems to be the latest belle of the ball, and virtually any process improvement introducing scrum, teaming, and sprints seems to get the tag.  Still, there is much blood and thunder being expended for a result that amounts to about the same (and probably less than the) mathematical chance of success found in a coin flip.  I think for the remainder of the year the more acceptable and structured portions of Agile will get the nod.

b.  Business technology is now driving process.  This trend, I think, is why process improvements like Agile, which claim to be a panacea, cannot deliver on their promises.  As best practices they can help organizations avoid a net negative, but they rarely provide a net positive.  Applying new processes and procedures while driving blind will still run you off the road.  The big story in 2015, I think, is the ability to handle big data and to integrate that data in a manner that more clearly reveals context to business stakeholders.  For years in A&D, DoD, governance, and other verticals engaged in complex, multi-year project management, we have seen the push and pull of interests regarding the amount of data that is delivered or reported.  With new technologies this is no longer an issue.  Delivering a 20GB file has virtually the same marginal cost as delivering a 10GB file.  Sizes smaller than 1GB aren’t even worth talking about.

Recently I heard someone refer to the storage space required for all this immense data (it’s immense, I tell you!).  Well, storage is cheap, and large amounts of data can be accessed through virtual repositories using APIs and smart methods of normalizing data that require integration at the level defined by the systems’ interrelationships.  There is more than one way to skin this cat, and more methods for handling bigger data are coming on-line every year.  Thus, the issue is not more or less data, but better data, regardless of the size of the underlying file or table structure or the amount of information.  The first go-round of this process will require that all of the data already available in repositories be surveyed to determine how to optimize the information it contains, and then, once transformed into intelligence, to determine the best manner of delivery so that it provides both significance and context to the decision maker.  For many organizations, this is the question that will be answered in 2015 and into 2016.  At that point it is the data that will dictate the systems and procedures needed to take advantage of this powerful advance in business intelligence.

c.  Cross-functional teams will soon morph into cross-functional team members.  As data originating from previously stove-piped competencies is integrated into a cohesive whole, the skillsets necessary to understand the data, convert it into intelligence, and act appropriately on that intelligence will shift to require a broader, multi-disciplinary understanding.  Businesses and organizations will soon find that they can no longer afford the specialist who only understands cost, schedule, risk, or any one of the other specialties dictated by the old line-and-staff and division-of-labor practices of the 20th century.  Businesses and organizations that place short-term, shareholder, and equity-holder interests ahead of the business will soon find themselves out of business in this new world.  The same will apply to organizations that continue to suppress and compartmentalize data.  This is because a cross-functional individual who can maximize the use of this new information paradigm requires education and development.  Achieving this goal dictates the establishment of a learning organization, which requires investment and a long-term view.  A learning organization develops its members to become competent in each aspect of the business, with development including successive assignments of greater responsibility and complexity.  For the project management community, we will increasingly see the introduction of more Business Analysts and, I think, the introduction of the competency of Project Analyst to displace–at first–both cost analyst and schedule analyst.  Other competency consolidation will soon follow.

d.  The new cross-functional competencies–Business Analysts and Project Analysts–will take on an increasing role in the design and deployment of technology solutions in the business.  This takes us full circle in our feedback loop that begins with big data driving process.  We are already seeing organizations that have implemented the new technologies and are taking advantage of new insights not only introducing new multi-disciplinary competencies, but also introducing new technologies that adapt the user environment to the needs of the business.  Once the business and project analyst has determined how to interact with the data and the systems necessary to the decision-making process that follows, adaptable technologies that do not take the hard-coded “one size fits all” approach to user interfaces are finding, and will continue to find, wide acceptance.  Fewer off-line and one-off utilities used to fill the gaps left by inflexible hard-coded business applications will allow innovative approaches to analysis to be mainstreamed into the organization.  Once again, we are already seeing this effect in 2015, and the trend will only accelerate as possessing greater technological knowledge becomes an essential element of being an analyst.

Despite dire predictions regarding innovation, it appears that we are on the cusp of another rapid shift in organizational transformation.  The new world of big data comes with both great promise and great risks.  For project management organizations, the key in taking advantage of its promise and minimizing its risks is to stay ahead of the transformation by embracing it and leading the organization into positioning itself to reap its benefits.

Mo’Better Risk — Tournaments and Games of Failure Part II

My last post discussed economic tournaments and games of failure and how they describe the success and failure of companies, with a comic example for IT start-up companies.  Glen Alleman at his Herding Cats blog has a more serious response, handily rebutting those who believe that #NoEstimates, Lean, Agile, and other cult-like fads can overcome the bottom line, that is, apply a method to reduce inherent risk and drive success.  As Glen writes:

“It’s about the money. It’s always about the money. Many want it to be about them or their colleagues, or the work environment, or the learning opportunities, or the self actualization.” — Glen Alleman, Herding Cats

Perfectly good products and companies fail all the time.  Oftentimes the best products fail to win the market, or do so only fleetingly.  Just think of the rolls of the dead (or walking dead) over the years: Novell, WordPerfect, VisiCalc, Harvard Graphics; the list can go on and on.  Thus, one point on which I would deviate from Glen is that it is not always EBITDA.  If that were true then neither Facebook nor Amazon would be around today.  We see tremendous payouts to companies with promising technologies, acquired for outrageous sums of money though they have yet to make a profit.  But for every one of these there are many others that see the light of day for a moment and then flicker out of existence.

So what is going on, and how does this inform our knowledge of project management?  The measure of our success is time and money in most cases, though obviously not all.  I’ve given two cases of success that appeared to be failure in previous posts to this blog: the M1A1 Tank and the ACA.  The reason these “failures” were misdiagnosed was that the agreed measure(s) of success were incorrect.  Knowing this difference, and where and how it applies, is important.

So how do tournaments and games of failure play a role in project management?  I submit that the lesson learned from these observations is that certain types of behaviors are encouraged that tend to “bake” certain risks into our projects.  In high tech we know that there will be a thousand failures for every success, but it is important to keep the players playing–at least it is in the interest of the acquiring organization to do so, and it is in the public interest in many cases as well.  We also know that most IT projects by most measures–both contracted out and organic–tend to realize a high rate of failure.  But if you win an important contract or secure an important project, the rewards can be significant.

The behavior that is reinforced in this scenario on the part of the competing organization is to underestimate the cost and time involved in the effort; that is, the so-called “bid to win.”  On the acquiring organization’s part, contracting officers lately have been all too happy to award contracts they know to be priced too low (and normally outside the competitive range), even when the offer falls significantly below the independent estimate.  Thus “buying in” introduces a significant risk that is hard to overcome.

Other behaviors that we see given the project ecosystem are the bias toward optimism and requirements instability.

In the first case, bias toward optimism, we often hear project and program managers dismiss bad news because it is “looking in the rear view mirror.”  We are “exploring,” we are told, and so the end state will not be dictated by history.  We often hear a version of this meme in cases where those in power wish to avoid accountability.  “Mistakes were made” and “we are focused on the future” are attempts to change the subject and avoid the reckoning that will come.  In most cases, however, particularly in project management, the motivations are not dishonest but, instead, sociological and psychological.  People who tend to build things–engineers in general, software coders, designers, etc.–tend to be an optimistic lot.  In very few cases will you find one of them who will refuse to take on a challenge.  How many cases have we presented a challenge to someone with these traits and heard the refrain:  “I can do that.”?  This form of self-delusion can be both an asset and a risk.  Who but an optimist would take on any technically challenging project?  But this is also the trait that will keep people working to the bitter end in a failure that places the entire enterprise at risk.

I have already spent some bits in previous posts regarding the instability of requirements, but this is part and parcel of the traits that we see within this framework.  Our end users determine that given how things are going we really need additional functionality, features, or improvements prior to the product roll out.  Our technical personnel will determine that for “just a bit more effort” they can achieve a higher level of performance or add capabilities at marginal or tradeoff cost.  In many cases, given the realization that the acquisition was a buy-in, project and program managers allow great latitude in accepting as a change an item that was assumed to be in the original scope.

There is a point where one or more of these factors is “baked” into the course that the project will take.  We can delude ourselves into believing that we can change the trajectory of the system through the application of methods: Agile, Lean, Six Sigma, PMBOK, etc.  But, in the end, if we exhaust our resources without a road map on how to do this, we will fail.  Our systems must be powerful and discerning enough to note the trend that is baked in due to factors in the structure and architecture of the effort being undertaken.  This is the core risk that must be managed in any undertaking.  A good example that applies to a complex topic like Global Warming was recently illustrated by Neil deGrasse Tyson in the series Cosmos, in a scene in which he walks a dog along a beach.

In this example, Dr. Tyson is the climate and the dog is the weather.  But in our own analogy, Dr. Tyson can be the trajectory of the system, with the dog representing the “noise” of periodic indicators and activity around the effort.  We often spend a lot of time and effort (which I would argue is largely unproductive) influencing these transient conditions in simpler systems, rather than the core inertia of the system itself.  That is where the risk lies.  Thus, not all indicators are the same.  Some measure transient anomalies that have nothing to do with changing the core direction of the system; others are more valuable.  These latter indicators are the ones that we need to cultivate and develop, and they reside in an initial measurement of the inherent risk of the system, largely based on its architecture, that is antecedent to the start of the work.
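
For those who prefer to see the analogy in numbers, here is an illustrative sketch with synthetic data: a slow trend (the walker) buried in noise (the dog), with a simple rolling mean standing in for the kind of indicator that tracks the trajectory rather than the wandering:

```python
import random

random.seed(1)
trend = [0.5 * t for t in range(60)]             # the walker's path
noisy = [x + random.gauss(0, 4) for x in trend]  # the dog's wanderings

def rolling_mean(series, window=10):
    """Smooth a series with a trailing window to expose the trajectory."""
    return [sum(series[max(0, i - window + 1):i + 1]) /
            len(series[max(0, i - window + 1):i + 1])
            for i in range(len(series))]

smoothed = rolling_mean(noisy)
print(f"last raw reading: {noisy[-1]:.1f}, "
      f"smoothed trajectory: {smoothed[-1]:.1f}, true trend: {trend[-1]:.1f}")
```

Reacting to each raw reading means chasing the dog; the smoothed indicator is the one worth cultivating.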

This is not to say that we can do nothing about the trajectory.  A simpler system can be influenced more easily.  We cannot recover the effort already expended–which is why even historical indicators are important.  It is because they inform our future expectations and, if we pay attention to them, they keep us grounded in reality.  Even in the case of Global Warming we can change, though gradually, what will be a disastrous result if we allow things to continue on their present course.  In a deterministic universe we can influence the outcomes based on the contingent probabilities presented to us over time.  Thus, we will know if we have handled the core risk of the system by focusing on these better indicators as the effort progresses.  This will affect its trajectory.

Of course, a more direct way of modifying these risks is to make systemic adjustments.  Do we really need a tournament-based system as it exists and is the waste inherent in accepting so much failure really necessary?  What would that alternative look like?

We Gotta Get Out of This Place — Are Our Contracting Systems Agile Enough?

The question in the title refers to agile in the “traditional” sense and not the big “A” appropriated sense.  But I’ll talk about big “A” Agile also.

It also refers to a number of discussions I have been engaged in recently among some of the leading practitioners in the program and project management community.  Here are a few data points:

a.  GAO and other oversight agencies have been critical of requirements changes over the life cycle of a project, particularly in DoD and other federal agencies, that contribute to cost growth.  The defense of these changes has been that many of them were necessary in order to meet new circumstances.  Okay, sounds fair enough.

But to my way of thinking, if the change(s) were necessary to keep the project from being obsolete upon deployment of the system, or were to correct an emergent threat that would have undermined project success and its rationale, then by all means we need to course correct.  But if the changes were not to address either of those scenarios, but simply to improve the system at more than marginal cost, then it was unnecessary.

How can I make such a broad statement, and what is the alternative, you may ask?  My rationale is that a change, if it represents a new development involving significant funding, should stand on its own merits, since it is essentially a new project.

All of us who have been involved in complex projects have seen cases where, as a result of development (and quite often failure), we discover new methods and technologies within the present scope that garner an advantage not previously anticipated.  This doesn’t happen as often as we’d like, but it does happen.  In my own survey and project in development of a methodology for incorporating technical performance into project cost, schedule, and risk assessments, it was found that failing a test, for example, had value, since it allowed engineers to determine pathways for not only achieving the technical objective but, oftentimes, exceeding the parameter.  We find that for x% more in investment as a result of the development, test, milestone review, etc., we can improve the performance of some aspect of the system.  In that case, if the cost or effort is marginal, then the improvement is part of the core development process within the original scope.  Limited internal replanning may be necessary to incorporate the change, but the remainder of the project can largely go along as planned.

Alternatively, however, inserting new effort in the form of changes to major subsystems involves major restructuring of the project.  This disrupts the business rhythm of the project, forcing a cultural shift within the project team to socialize the change and incorporate the new work.  Change of this type not only causes what is essentially a reboot of the project, but also tends to add risk to the project and program.  This new risk will manifest itself as cost risk initially but, given risk handling, will also manifest itself as technical and schedule risk.

The result of this decision, driven solely by what may seem to be urgent operational considerations, is to undermine project and program timeliness, since there is a financial impact to these decisions.  When you increase risk to a program, the reaction of the budget holder is to provide an incentive to the program manager to manage risk more closely.  This oftentimes will invite what, in D.C. parlance, is called a budget mark, but to the rest of us is called a budget cut.  When socialized within the project, such cuts are usually taken out of management reserve or the non-mandatory activities that were put in place as contingencies to handle overall program risk at inception.  The mark is usually equal to the amount of internal risk caused by the requirements change.  Thus, adding risk is punished, not rewarded, because money is finite and must be applied to projects and programs that demonstrate that they can execute the scope against the plan and expend the funds provided to them.  So the total scope (and thus cost) of the project will increase, but the flexibility within the budget base will decrease, since all of that money is now committed to handling risk.  Unanticipated risk, therefore, may not be effectively handled in the future.

At first the application of a budget mark in this case may seem counterintuitive, and when I first went through the budget hearing process it certainly did to me.  That is, until one realizes that at each level the budget holder must demonstrate that the funds are being used for their intended purpose.  There can be no “banking” of money, since each project and program must compete for the dollars available at any one time–it’s not the PM’s money; he or she has use of that money to provide the intended system.  Unfortunately, piggybacking significant changes (and constructive changes) onto the original scope is common in project management.  Customers want what they want, and business wants that business.  (More on this below.)  As a result, the quid pro quo is: you want this new thing?  Okay, but you will now have to manage risk based on the introduction of new requirements.  Risk handling, then, will most often lead to increased duration.  This can and often does result in a non-virtuous spiral in which requirements changes lead to cost growth and project risk, which lead to budget marks that restrict overall project flexibility, which tend to lead to additional duration.  A project under these circumstances finds itself either pushed to the point of not being deployed, or deployed many years after the system needed to be in place, at much greater overall cost than originally anticipated.
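
A toy model (all numbers hypothetical) makes the mechanics of the spiral visible: each significant requirements change adds scope while drawing a budget mark equal to its added risk out of management reserve, so total cost rises as the flexibility left for unanticipated risk falls:

```python
budget, reserve = 100.0, 10.0            # $M; reserve = contingency for overall risk

for change_cost, added_risk in [(8.0, 3.0), (12.0, 5.0)]:
    budget += change_cost                # scope (and thus cost) grows
    mark = min(added_risk, reserve)      # budget mark taken from reserve
    reserve -= mark
    print(f"change of ${change_cost}M: total ${budget}M, "
          f"reserve left ${reserve}M")
```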

As an alternative, by making improvements stand on their own merits, a proper cost-benefit analysis can be completed to determine whether the improvement is timely and how it measures up against the latest alternative technologies available.  It becomes its own project and not a parasite feeding off of the main effort.  This is known as the iterative approach, and those in software development know it very well: you determine the problem that needs to be solved, figure out the features and approach that provide the 80% solution, and work to get it done.  Improvements can come after version 1.0–coding is not a welfare program for developers, as the Agile Cult would have it.  The ramifications for project and program managers are apparent: they must not only be aware of the operational and technical aspects of their efforts, but also know the financial impacts of their decisions and take those into account.  Failure to do so is a recipe for self-inflicted disaster.

This leads us to the next point.

b.  In the last 20+ years major projects have found that the time from initial development to production has increased several times over.  For example, the poster child for this phenomenon in the military services is the F-35 Lightning II jet fighter, also known as the Joint Strike Fighter (JSF), which will continue to be in development at least through 2019 and perhaps into 2021.  From program inception in 2001 to Initial Operational Capability (IOC), it will be 15 years, at least, before the program is ready to deploy and go to production.  This scenario is being played out across the board in both government and industry for large projects of all types, with few exceptions.  In particular, software projects tend to either fail outright or fall short of their operational goals in the overwhelming majority of cases.  This would suggest that, aside from the typical issues of configuration control, project stability, and rubber baselining (and aside from the self-reinforcing cost growth culture of the Agile Cult), there are larger underlying causes involved than simply contracting systems, though those are probably a contributing factor.

From a hardware perspective, in terms of military strategy, there may be a very good reason why it doesn’t matter that certain systems are not deployed immediately: once deployed, they are expensive to maintain logistically.  Logistics for deployed systems will compete for dollars that could be better spent in developing–but not deploying–new technologies.  The answer, of course, is somewhere in between.  You can’t use that notional jet fighter when you needed it half a world away yesterday.

c.  Where we can see the effects on behavior from an acquisition systems perspective is in the comparison of independent estimates to what is eventually negotiated.  For example, one military service recently gave the example of a program in which the confidential independent estimate was $2.1 billion.  The successful commercial contractor team, let’s call them Team A, whose proposal was deemed technically acceptable, made an offer at $1.2 billion, while the unsuccessful contractor team, Team B, offered near the independent estimate.  Months later, thanks to constructive changes, the eventual cost of the contract will be at or slightly above the independent estimate, based on an apples-to-apples comparison of the scope.  Thus it is apparent that Team A bought into the contract.  Apparently, honesty in proposal pricing isn’t always the best policy.
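
The arithmetic of the example, using the figures above:

```python
independent_estimate = 2.1e9   # confidential government estimate
team_a_offer         = 1.2e9   # winning "technically acceptable" proposal
eventual_cost        = 2.1e9   # at or slightly above the IE after changes

discount = 1 - team_a_offer / independent_estimate
growth = eventual_cost / team_a_offer - 1
print(f"Offer was {discount:.0%} below the independent estimate; "
      f"constructive changes produced at least {growth:.0%} growth "
      f"over the awarded price.")   # 43% below; at least 75% growth
```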

I have often been asked what the rationale could be for a contractor to “buy in,” particularly on such large programs involving so much money.  The answer, of course, is “it depends.”  Team A could have the technological lead in the systems being procured and be defending their territory; buying in, even without constructive changes, was deemed to be worth the tradeoff.  Perhaps Team A was behind in the technologies involved and would use the contract as a means of financing their gap.  Team A could have an excess of personnel with technical skills that are complementary to those needed for the effort but who are otherwise not employed within their core competency, so rather than lose them it was worth bidding at or near cost for the perceived effort.  These are, of course, the most charitable assumed rationales, though the ones that I have most often encountered.

The real question in this case is how, even given the judgment of the technical assessment team, the contracting officer could keep a proposal so far below the independent estimate within the competitive range.  If the government’s requirements are so vague that two experienced contracting teams can fall so far apart, it should be apparent that either the solicitation is defective or the scope is not completely understood.

I think it is this question that leads us to the more interesting aspects of acquisition, program, and project management.  For one, I am certain that a large acquisition like the one described is highly visible and of import to the political system and elected officials.  In the face of such scrutiny it would have to be a procuring contracting officer (PCO) of great experience and internal fortitude, confident in their judgment, who would reset the process after proposals had been received.

There is also pressure in contracting from influencers within the requiring organizations, who are under pressure to deploy systems to meet their needs as expeditiously as possible–especially after the fairly lengthy set of activities that must occur prior to the issuance of a solicitation.  The development of a good set of requirements is a process that involves multiple stakeholders on highly technical issues, and it requires a great deal of coordination and development by a centralized authority.  Absent such guidance, the method of approaching requirements can be defective from the start.  For example, does the requiring organization write a Statement of Work, a Performance Work Statement, or a Statement of Objectives?  Which contract type is most appropriate for the work being performed and the risk involved?  Should there be one overriding approach, or a combination of approaches based on the subsystems that make up the entire system?

But even given all of these internal factors there are others that are unique to our own time.  I think it would be interesting to see how these factors have affected the conditions that everyone in our discipline deems to be problematic.  This includes the reduced diversity of the industrial and information verticals upon which the acquisition and logistics systems rely; the erosion of domestic sources of expertise, manufactured materials, and commodities; the underinvestment in training and personnel development and retention within government that undermines necessary expertise; specialization within the contracting profession that separates the stages of acquisition into stovepipes that undermines continuity and cohesiveness; the issuance of patent monopolies that stifle and restrict competition and innovation; and unproductive rent seeking behavior on the part of economic elites that undermine the effectiveness of R&D and production-centric companies.  Finally, this also includes those government policies instituted since the early 1980s that support these developments.

The importance of any of these cannot be overstated, but let’s take the issue of rent seeking, which has caused the “financialization” of almost all aspects of economic life, as it relates to what a contracting officer must face when acquiring systems.  Private sector R&D–which in the past mostly fell in response to economic dislocations, but which has been in a downward trend since the late 1960s overall, and especially since the mid 1980s–has fallen precipitously since the bursting of the housing bubble and the resultant financial crisis in 2007, with no signs of recovery.  Sequestration and other austerity measures in FY 2015 will at the same time negatively impact public R&D, continuing the overall trend with no offset.  This fall in R&D has a direct impact on productivity and undercuts the effectiveness of using all of the tools at hand to find existing technologies to offset the ones that require full R&D.  This appears to have caused a rise in intrinsic risk in the economy as a whole for efforts of this type, and it is this underlying risk that we see at the micro and project management level.