Go With the Flow — What is a Better Indicator: Earned Value or Cash Flow?

A lot of ink has been devoted to what constitutes “best practices” in PM, but oftentimes these discussions get diverted into overtly commercial activities that promote a set of products or trademarked services which, in actuality, are well-trod project management techniques given a fancy name or acronym.  We see this often with “road shows” and “seminars” that are blatant marketing events.  This tends to undermine the efforts of PM professionals to find out what really gives us good information, both by getting in the way of new synergies and by tying “best practices” to proprietary solutions.  All too often “common practice” and “proprietary limitations” pass for “best practice.”

Recently I have been involved in discussions and the formulation of guides on indicators that tell us something important regarding the condition of the project throughout its life cycle.  All too often the conversation settles on earned value with the proposition that all indicators lead back to it.  But this is an error since it is but one method for determining performance, which looks solely at one dimension of the project.

There are, after all, other obvious processes and plans that measure different dimensions of project performance.  The first such example is schedule performance.  A few years ago there was an attempt to more closely tie schedule and cost as an earned value metric, which was and is called “earned schedule.”  In particular, it had many strengths against what was posited as its alternative: schedule variance as calculated by earned value.  But both are misnomers, even when earned schedule is offered as an alternative to earned value while at the same time adhering to its methods.  Neither measures schedule, that is, time-based performance against a plan consisting of activities.  The two artifacts can never be reconciled and reduced to one metric because they measure different things.  The statistical measure that would result would have no basis in reality, adding an unnecessary statistical layer that obfuscates instead of clarifying the underlying condition.  So what do we look at, you may ask?  Well, the schedule.  The schedule itself contains many opportunities to measure its dimension in order to develop useful metrics and indicators.
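
To make the distinction concrete, here is a minimal sketch, with made-up numbers, of how the two constructs are typically computed.  Both are derived from the same cumulative cost plan rather than from the network of activities, and one is denominated in dollars while the other is denominated in time, which is why they cannot be reconciled into a single metric.

```python
# A minimal, made-up illustration: EVM schedule variance is denominated in
# dollars, while earned schedule converts the same planned-value curve into
# units of time.  Neither is derived from the activity network itself.

def sv_cost(ev, pv):
    """EVM schedule variance: earned value minus planned value, in dollars."""
    return ev - pv

def earned_schedule(pv_cumulative, ev_now):
    """Earned schedule: the point in time at which cumulative planned value
    equaled the value earned to date (linear interpolation between periods)."""
    for t in range(1, len(pv_cumulative)):
        if pv_cumulative[t] >= ev_now:
            prior = pv_cumulative[t - 1]
            return (t - 1) + (ev_now - prior) / (pv_cumulative[t] - prior)
    return float(len(pv_cumulative) - 1)

pv_cumulative = [0, 100, 250, 450, 700, 1000]   # cumulative planned value by month
ev_now, pv_now, actual_time = 380, 700, 4       # status at the end of month 4

print("SV ($):    ", sv_cost(ev_now, pv_now))                                        # -320 dollars
print("SV(t) (mo):", round(earned_schedule(pv_cumulative, ev_now) - actual_time, 2)) # about -1.35 months
```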

For example, a number of these indicators have been in place for quite some time: Baseline Execution Index (BEI), Critical Path Length Index (CPLI), early start/late start, early finish/late finish, bow-wave analysis, hit-miss indices, etc.  These can all be found in the scheduling literature.
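
As an illustration, the first two indices are commonly defined along the following lines in that literature (for example, in the DCMA 14-point schedule assessment); the task counts, durations, and float values below are made up.

```python
# Illustrative definitions of two of the schedule indicators named above.
# The task counts, critical path length, and float are invented.

def bei(tasks_completed, tasks_baselined_to_complete):
    """Baseline Execution Index: tasks actually finished divided by the tasks
    the baseline said should have finished by the status date."""
    return tasks_completed / tasks_baselined_to_complete

def cpli(critical_path_length, total_float):
    """Critical Path Length Index: (CPL + total float) / CPL.  A value below
    1.0 suggests the remaining critical path is not realistic against the
    target finish date."""
    return (critical_path_length + total_float) / critical_path_length

print(f"BEI:  {bei(tasks_completed=45, tasks_baselined_to_complete=50):.2f}")   # 0.90
print(f"CPLI: {cpli(critical_path_length=200, total_float=-10):.2f}")           # 0.95
```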

Typically, then, the first step toward integration is tying these different metrics and indicators of the schedule and EVM dimensions at an appropriate level through the WBS or other structures.  The juxtaposition of these differing dimensions, particularly in a grid or Gantt view, gives us the ability to determine whether there is a correlation between the various indicators.  We can then determine, over time, the strength and consistency of those correlations, and take the analysis one step further to conclude which of them point to causation.  Only then do we get to “best practice.”  This hard work to get to best practice is still in its infancy.
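
A minimal sketch of that juxtaposition step might look like the following, assuming monthly observations of one schedule indicator and one EVM indicator for the same (hypothetical) WBS element.  Correlation is only the first rung; the harder work of establishing consistency and causation still follows.

```python
# Sketch: line up monthly observations of a schedule indicator and an EVM
# indicator for the same WBS element and test how strongly they move together.
# The element number and values are made up; correlation is not causation.
from statistics import correlation   # requires Python 3.10+

wbs_element = "1.2.3"
bei_by_month = [0.95, 0.92, 0.90, 0.86, 0.83, 0.80]   # schedule dimension
cpi_by_month = [1.00, 0.98, 0.96, 0.93, 0.91, 0.89]   # EVM (cost) dimension

r = correlation(bei_by_month, cpi_by_month)
print(f"WBS {wbs_element}: Pearson r between BEI and CPI = {r:.2f}")
```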

But this is only the first step toward “integrated” performance measurement.  There are other areas of integration that are needed to give us a multidimensional view of what is happening in terms of project performance.  Risk is certainly one additional area–and a commonly heard one–but I want to take this a step further.

Among my various jobs in the past was business management within a project management organization.  This usually translated into financial management, but not traditional financial management that focuses on the needs of the enterprise.  Instead, I am referring to project financial management, which is a horse of a different color, since it is focused at the micro-programmatic level on both schedule and resource management, given that planned activities and the resources assigned to them must be funded.

Thus, having the funding in place to execute the work is the antecedent and, I would argue, the overriding factor in project success.  Outside of construction project management, where the focus on cash flow is a truism, we see this play out in publicly funded project management through the budget hearing process.  Even when we are dealing with multiyear R&D funding, the project goes through this same process.  During each review, financial risk is assessed to ensure that work is being performed and budget (program) is being executed.  Earned value will determine the variance between the financial plan and the value of the execution, but the level of funding–or cash flow–will determine what gets done during any particular period of time.  The burn rate (expenditure) is the proof that things are getting done, even if the value may be less than what is actually expended.
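
A simple, entirely hypothetical period-by-period comparison illustrates the point: the funds released set the ceiling on execution, the burn rate shows whether work is actually being performed, and earned value tells us what that expenditure was worth.

```python
# Hypothetical period-by-period view of why cash flow is the antecedent.
# All figures are illustrative.

periods = [
    # (funds released, burn rate / actual cost, earned value)
    (500, 480, 450),
    (400, 410, 360),   # burn exceeds the funds released: work must slow or stop
    (600, 390, 370),   # funds arrive, but prior slippage limits execution
]

for i, (funds, burn, ev) in enumerate(periods, start=1):
    print(f"Period {i}: funded {funds}, burned {burn}, earned {ev}, "
          f"funding headroom {funds - burn}, cost variance {ev - burn}")
```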

In public funding of projects, especially in A&D, the proper “color” of money (R&D, Operations & Maintenance, etc.) available at the right time is oftentimes a better predictor of project success than the metrics and indicators which assume that the planned budget, schedule, and resources will be provided to support the baseline.  But things change, including the appropriation and release of funds.  As a result, any “best practice” that confines itself to only one or two of the dimensions of project assessment fails to meet the definition.

In the words of Gus Grissom in The Right Stuff, “No bucks, no Buck Rogers.”

Mo’Better Risk — Tournaments and Games of Failure Part II

My last post discussed economic tournaments and games of failure and how they describe the success and failure of companies, with a comic example for IT start-up companies.  Glen Alleman at his Herding Cats blog has a more serious response, handily rebutting those who believe that #NoEstimates, Lean, Agile, and other cult-like fads can overcome the bottom line; that is, that applying a method can by itself reduce inherent risk and drive success.  As Glen writes:

“It’s about the money. It’s always about the money. Many want it to be about them or their colleagues, or the work environment, or the learning opportunities, or the self actualization.” — Glen Alleman, Herding Cats

Perfectly good products and companies fail all the time.  Oftentimes the best products fail to win the market, or do so only fleetingly.  Just think of the rolls of the dead (or walking dead) over the years: Novell, WordPerfect, VisiCalc, Harvard Graphics; the list can go on and on.  Thus, one point on which I would deviate from Glen is that it is not always about EBITDA.  If that were true then both Facebook and Amazon would not be around today.  We see tremendous payouts to companies with promising technologies acquired for outrageous sums of money, though they have yet to make a profit.  But for every one of these there are many others that see the light of day for a moment and then flicker out of existence.

So what is going on and how does this inform our knowledge of project management?  For the measure of our success is time and money, in most cases.  Obviously not all cases.  I’ve given two cases of success that appeared to be failure in previous posts to this blog: the M1A1 Tank and the ACA.  The reason why these “failures” were misdiagnosed was that the agreed measure(s) of success were incorrect.  Knowing this difference, where, and how it applies is important.

So how do tournaments and games of failure play a role in project management?  I submit that the lesson learned from these observations is that certain types of behavior are encouraged that tend to “bake” certain risks into our projects.  In high tech we know that there will be a thousand failures for every success, but it is important to keep the players playing–at least it is in the interest of the acquiring organization to do so, and it is in the public interest in many cases as well.  We also know that most IT projects by most measures–both contracted out and organic–tend to realize a high rate of failure.  But if you win an important contract or secure an important project, the rewards can be significant.

The behavior that is reinforced in this scenario on the part of the competing organization is to underestimate the cost and time involved in the effort; that is, the so-called “bid to win.”  On the acquiring organization’s part, contracting officers lately have been all too happy to award contracts they know to be priced too low (and normally outside the competitive range), even when the bid falls significantly below the independent estimate.  Thus “buying in” introduces a significant risk that is hard to overcome.

Other behaviors that we see given the project ecosystem are the bias toward optimism and requirements instability.

In the first case, bias toward optimism, we often hear project and program managers dismiss bad news because it is “looking in the rear view mirror.”  We are “exploring,” we are told, and so the end state will not be dictated by history.  We often hear a version of this meme in cases where those in power wish to avoid accountability.  “Mistakes were made” and “we are focused on the future” are attempts to change the subject and avoid the reckoning that will come.  In most cases, however, particularly in project management, the motivations are not dishonest but, instead, sociological and psychological.  People who tend to build things–engineers in general, software coders, designers, etc.–tend to be an optimistic lot.  In very few cases will you find one of them who will refuse to take on a challenge.  How many times have we presented a challenge to someone with these traits and heard the refrain, “I can do that”?  This form of self-delusion can be both an asset and a risk.  Who but an optimist would take on any technically challenging project?  But this is also the trait that will keep people working to the bitter end in a failure that places the entire enterprise at risk.

I have already spent some bits in previous posts regarding the instability of requirements, but this is part and parcel of the traits that we see within this framework.  Our end users determine that, given how things are going, we really need additional functionality, features, or improvements prior to the product rollout.  Our technical personnel will determine that for “just a bit more effort” they can achieve a higher level of performance or add capabilities at marginal or tradeoff cost.  In many cases, given the realization that the acquisition was a buy-in, project and program managers allow great latitude in accepting as a change an item that was assumed to be in the original scope.

There is a point where one or more of these factors is “baked into” the course that the project will take.  We can delude ourselves into believing that we can change the trajectory of the system through the application of methods such as Agile, Lean, Six Sigma, or PMBOK but, in the end, if we exhaust our resources without a road map for how to do this, we will fail.  Our systems must be powerful and discrete enough to note the trend that is “baked in” due to factors in the structure and architecture of the effort being undertaken.  This is the core risk that must be managed in any undertaking.  A good example that applies to a complex topic like Global Warming was recently illustrated by Neil deGrasse Tyson in the series Cosmos.

In this example Dr. Tyson is the climate and the dog is the weather.  But in our own analogy Dr. Tyson can be the trajectory of the system, with the dog representing the “noise” of periodic indicators and activity around the effort.  We often spend a lot of time and effort (which I would argue is largely unproductive) influencing these transient conditions in simpler systems rather than the core inertia of the system itself.  That is where the risk lies.  Thus, not all indicators are the same.  Some measure transient anomalies that have nothing to do with changing the core direction of the system; others are more valuable.  These latter indicators are the ones that we need to cultivate and develop, and they reside in an initial measurement of the inherent risk of the system, largely based on its architecture, that is antecedent to the start of the work.
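
A small synthetic sketch of the same idea: fit the underlying trajectory of an indicator and separate it from its period-to-period noise.  The data below is invented purely to show the separation between the two.

```python
# Synthetic illustration of trajectory versus noise: a slowly degrading
# indicator (the walker) with random period-to-period swings (the dog).
# Chasing the residuals manages noise; the slope is the baked-in direction.
import random

random.seed(1)
observations = [1.00 - 0.02 * t + random.uniform(-0.05, 0.05) for t in range(12)]

# Ordinary least-squares trend fit by hand, so no external libraries are needed.
n = len(observations)
xs = list(range(n))
x_bar, y_bar = sum(xs) / n, sum(observations) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, observations)) \
        / sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

residuals = [y - (intercept + slope * x) for x, y in zip(xs, observations)]
print(f"Underlying trend per period: {slope:+.3f}")
print(f"Largest transient swing:     {max(abs(r) for r in residuals):.3f}")
```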

This is not to say that we can do nothing about the trajectory.  A simpler system can be influenced more easily.  We cannot recover the effort already expended, which is why even historical indicators are important: they inform our future expectations and, if we pay attention to them, keep us grounded in reality.  Even in the case of Global Warming we can change, though gradually, what will be a disastrous result if we allow things to continue on their present course.  In a deterministic universe we can influence the outcomes based on the contingent probabilities presented to us over time.  Thus, we will know whether we have handled the core risk of the system by focusing on these better indicators as the effort progresses.  This will affect its trajectory.

Of course, a more direct way of modifying these risks is to make systemic adjustments.  Do we really need a tournament-based system as it exists and is the waste inherent in accepting so much failure really necessary?  What would that alternative look like?

Don’t Be Cruel — Contract Withholds and the Failure of Digital Systems

Recent news over at Breaking Defense headlined a $25.7M withhold levied against Pratt & Whitney for the F135 engine.  This is the engine for the F-35 Lightning II aircraft, also known as the Joint Strike Fighter (JSF).  The reason for the withhold in this particular case was an insufficient cost and schedule business system that the company has in place to support project management.

The enforcement of withholds for deficiencies in business systems was instituted in August 2011.  These business systems include six areas:

  • Accounting
  • Estimating
  • Purchasing
  • EVMS (Earned Value Management System)
  • MMAS (Material Management and Accounting System), and
  • Government Property

As of November 30, 2013, $19 million had been held back from BAE Systems Plc, $5.2 million from Boeing Co., and $1.4 million from Northrop Corporation.  These were on the heels of a massive $221 million held back from Lockheed Martin’s aeronautics unit for its deficient earned value management system.  In total, fourteen companies were impacted by withholds last year.

For those unfamiliar with the issue, these withholds may seem to be a reasonable mechanism for enforcing that sufficient business systems are in place to ensure traceability in the expenditure of government funds by contractors.  After all, given the disastrous state of affairs where there was massive loss of accountability by contractors in Iraq and Afghanistan, many senior personnel in DoD felt that contracting officers needed to be given teeth, and what better way to do this than through financial withholds?  The rationale is that if the systems are not adequate, then the information originating from those systems is not credible.

This is probably a good approach for the acquisition of wartime goods and services, but doesn’t seem to fit the reality of the project management environment in which government contracting operates.  The strongest objections to the rule, I think, came from the legal community, most notably from the Bar Association’s Section of Public Contract Law.  Among these was that the amount of the withhold is based on an arbitrary percentage within the DFARS rule.  Another point made is that the defects in the systems in most cases are disconnected from actual performance and so redirect attention and resources away from the contractual obligation at hand.

These objections were made prior to the rule’s acceptance.  But now that the rule is being enforced, the more important question is the effect of the withholds on project management.  My own anecdotal experience from having been a business manager on a program management staff is that project success is oftentimes determined by cash flow.  While factors internal to the project, such as the effective construction of the integrated master schedule (IMS), the performance measurement baseline (PMB), risk identification and handling, and performance tracking against these plans, are the primary focus of project integrity, all too often the underlying financial constraints in which the project must operate are treated as a contingent factor.  If the financial constraints on our capabilities are severe, then the best plan in the world will not achieve the desired results, because it fails to be realistic.
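
A minimal, hypothetical check of that point is to lay the time-phased plan against the funds actually released and carry any shortfall forward; the figures below are illustrative only.

```python
# Hypothetical sanity check: a plan that the funding profile cannot support
# is not realistic, no matter how well constructed the IMS and PMB are.
planned_by_quarter  = [300, 350, 400, 450]   # time-phased baseline cost
released_by_quarter = [300, 300, 350, 400]   # funds actually made available

shortfall = 0
for q, (plan, funds) in enumerate(zip(planned_by_quarter, released_by_quarter), start=1):
    available = funds - shortfall           # this quarter's funds less carried shortfall
    shortfall = max(0, plan - available)    # unfunded work rolls into the next quarter
    status = "executable" if shortfall == 0 else f"short by {shortfall}"
    print(f"Q{q}: plan {plan}, available {available}, {status}")
```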

The principles that apply to any entrepreneurial enterprise also apply to complex projects.  It is true that large companies do have significant cash reserves and that these reserves have grown significantly since the 2007-2010 depression.  But a major program requires a large infusion of resources that constitutes a significant barrier to entry, and so such reserves contribute to the financial stability necessary to undertake such efforts.  Profit is not realized on every project.  This may sound surprising to those unfamiliar with public administration, but it is the case because it sometimes is worth breaking even or taking a slight loss so as not to lose essential organizational knowledge.  It takes years to develop an engineer who understands the interrelationships of the key factors in a jet fighter: the tradeoffs between speed, maneuverability, weight, and stress from the operational environment, like taking off from and landing on a large metal aircraft carrier that travels on salt water.  This cannot be taught in college, nor can it be replaced if the knowledge is lost due to budget cuts, pay freezes, and sequestration.  Oftentimes, because of their size and complexity, project start-up costs must be financed using short-term loans, adding risk when payments are delayed and work is interrupted.  The withhold rule adds an additional, if not unnecessary, dimension of risk to project success.

Given that most of the artifacts deemed necessary to handle and reduce risk are produced in a collaborative environment by the contractor-government project team through the Integrated Baseline Review (IBR) process and system validation, as well as pre-award certifications, it seems that there is no clear line of demarcation for placing the onus of inadequate business systems on the contractor.  The reality of the situation, given the use of cost-plus contracts for development work, is that industry is, in fact, a private extension of the defense infrastructure.

It is true that a line must be drawn in the contractual relationship to provide those necessary checks and balances to guard against fraud, waste, or a race to the lowest common denominator that undermines accountability and execution of the contractual obligation.  But this does not mean that the work of oversight requires another post-hoc layer of surveillance.  If we are not getting quality results from pre-award and post-award processes, then we must identify which ones are failing us and reform them.  Interrupting cash flow for inadequately assessed business systems may simply be counter-productive.

As Deming would argue, quality must be built into the process.  What defines quality must also be consistent.  That our systems are failing in that regard is indicative, I believe, in a failure of imagination on the part of our digital systems, on which most business systems rely.  It was fine in the first wave of microcomputer digitization in the 1980s and 1990s to simply design systems that mimicked the structure of pre-digital information specialization.  Cost performance systems were built to serve the needs of cost analysts, scheduling systems were designed for schedulers, risk systems for a sub-culture of risk specialists, and so on.

To break these stovepipes, the response of the IT industry was twofold; this constitutes the second wave of digitization of project management business processes.

One was in many ways a step back.  The mainframe culture in IT had been on the defensive since the introduction of the PC and “distributed” processing.  Here was an opening to reclaim the high ground, and so expensive, hard-coded ERP, PPM, and BI systems were introduced.  The lack of security in the quickly deployed systems of the first wave also provided the community with a convenient stalking horse, though even the “new” systems, as we have seen, lack adequate security in the digital arms race.  The ERP and BI systems are expensive and require specialized knowledge and expertise.  Solutions are hard-coded, require detailed modeling, and take a significant amount of time to deploy, supporting a new generation of coders.  The significant financial and personnel resources required to acquire and implement these systems–and the management reputations on the line for the decision to acquire them in the first place–have become a rationale for their continued use, even when they fail at the same high rate as all IT development projects.  Thus, tradeoff analysis between sunk costs and prospective costs is rarely made in determining their sustainability.

Another response was to knit together the first-wave, specialized systems in “best-of-breed” configurations.  In this case data is imported and reconciled between specialized systems to achieve the integration needed to service the cross-functional nature of project management.  Oftentimes the estimating, IMS, PMB, and qualitative and quantitative risk artifacts are constructed by separate specialists with little or no coordination or fidelity.  These environments are characterized by workarounds, resource-heavy reconciliation teams dedicated to verifying data between systems, the expenditure of resources on fixing errors after the fact, and the development of Access- and MS Excel-heavy one-off solutions designed to address deficiencies in the underlying systems.
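
The sketch below, with made-up system names, WBS numbers, and values, shows the kind of reconciliation chore these environments impose just to confirm that the cost and schedule tools agree at the WBS level.

```python
# Sketch of best-of-breed reconciliation: key two exports by WBS element and
# flag anything that does not tie out.  All names and values are invented.
cost_tool     = {"1.1": 1200, "1.2": 800, "1.3": 450}   # budget from the EVM engine
schedule_tool = {"1.1": 1200, "1.2": 760, "1.4": 300}   # resource-loaded IMS export

for wbs in sorted(set(cost_tool) | set(schedule_tool)):
    cost, sched = cost_tool.get(wbs), schedule_tool.get(wbs)
    if cost is None or sched is None:
        print(f"WBS {wbs}: present in only one system")
    elif cost != sched:
        print(f"WBS {wbs}: mismatch (cost tool {cost}, schedule tool {sched})")
    else:
        print(f"WBS {wbs}: ties out")
```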

That the deficiencies found in the business-system reviews mirror the deficiencies in the solutions described above points to the underlying information systems as the primary culprits.  The solution, I think, is going to come from those portions of the digital community where the barriers to entry are low.  The current crop of software in place from the first and second waves is reaching the end of its productive life.  Hoping to protect market share and stave off the inevitable, entrenched software companies are deploying new delivery and business models, though they have little incentive to drive the industry to the next phase.  Instead, they have been marketing SaaS and cloud computing as the panacea, though the nature of the work tends to militate against acceptance of external hosting.  In the end, I believe the answer is to leverage new technologies that eliminate the specialized and hard-coded nature of the first example, but achieve integration, while exploiting the historical data that exists in great abundance in the second.

Note: The title and some portions of this post were modified from the original for clarity.