Go With the Flow — What is a Better Indicator: Earned Value or Cash Flow?

A lot of ink has been devoted to what constitutes “best practices” in project management, but these discussions often get diverted into overtly commercial activities that promote a set of products or trademarked services which are, in actuality, well-trod project management techniques given a fancy name or acronym.  We see this often with “road shows” and “seminars” that are blatant marketing events.  This undermines the desire of PM professionals to find out what really gives us good information, both by getting in the way of new synergies and by tying “best practices” to proprietary solutions.  All too often “common practice” and “proprietary limitations” pass for “best practice.”

Recently I have been involved in discussions, and in the formulation of guides, on indicators that tell us something important about the condition of a project throughout its life cycle.  All too often the conversation settles on earned value, with the proposition that all indicators lead back to it.  But this is an error: earned value is only one method for determining performance, and it looks at only one dimension of the project.

There are, after all, other obvious processes and plans that measure different dimensions of project performance.  The first such example is schedule performance.  A few years ago there was an attempt to tie schedule and cost more closely together as an earned value metric, which was and is called “earned schedule.”  In particular, it had many strengths compared to what was posited as its alternative: schedule variance as calculated by earned value.  But both are misnomers, even when earned schedule is offered as an alternative to earned value while adhering to its methods.  Neither measures schedule, that is, time-based performance against a plan consisting of activities.  The two artifacts can never be reconciled and reduced to one metric because they measure different things.  The statistical measure that would result would have no basis in reality, adding an unnecessary statistical layer that obfuscates instead of clarifying the underlying condition.  So what do we look at, you may ask?  Well, the schedule.  The schedule itself contains many opportunities to measure its dimension in order to develop useful metrics and indicators.
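
To make the distinction concrete, here is a minimal sketch using the standard EVM and earned schedule formulas; the dollar figures are hypothetical, invented for illustration:

```python
# Minimal sketch contrasting EVM schedule variance with earned schedule.
# Figures are hypothetical; formulas are the generally published EVM/ES ones.

pv_by_month = [100, 220, 360, 520]  # cumulative planned value ($K) per month
ev_current = 300                     # cumulative earned value at month 3
pv_current = pv_by_month[2]          # planned value at month 3 (360)

# Schedule variance per EVM: a dollar figure, not a measure of time.
sv = ev_current - pv_current         # -60 ($K)

# Earned schedule: find the point in time at which the PV curve equals
# today's EV.  The result is expressed in time units, but it is still
# derived from the value curves, not from the activities in the schedule.
def earned_schedule(ev, pv_curve):
    for month, pv in enumerate(pv_curve, start=1):
        if pv >= ev:
            prev = pv_curve[month - 2] if month > 1 else 0
            # linear interpolation within the month
            return (month - 1) + (ev - prev) / (pv - prev)
    return float(len(pv_curve))

es = earned_schedule(ev_current, pv_by_month)  # ~2.57 months "earned"
sv_time = es - 3                                # ~ -0.43 months behind

print(f"SV = {sv} $K, ES = {es:.2f} months, SV(t) = {sv_time:.2f} months")
```

Both results derive entirely from the value curves, which is precisely the point: neither consults the network of activities that constitutes the schedule itself.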

For example, a number of these indicators have been in place for quite some time: Baseline Execution Index (BEI), Critical Path Length Index (CPLI), early start/late start, early finish/late finish, bow-wave analysis, hit-miss indices, etc.  All of these are documented in the project scheduling literature.
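
For two of these, here is a short sketch using the generally published definitions (as in the DCMA 14-point schedule assessment); the input numbers are hypothetical:

```python
# Illustrative calculations for two common schedule indicators.

def baseline_execution_index(tasks_completed, tasks_baselined_to_complete):
    """BEI: tasks actually finished vs. tasks the baseline said should have
    finished by the status date.  1.0 = executing to plan."""
    return tasks_completed / tasks_baselined_to_complete

def critical_path_length_index(cp_length_days, total_float_days):
    """CPLI: (critical path length + total float) / critical path length.
    1.0 = just enough time; below 1.0 the baseline finish is at risk."""
    return (cp_length_days + total_float_days) / cp_length_days

print(baseline_execution_index(45, 50))        # 0.90: slipping vs. baseline
print(critical_path_length_index(200, -10))    # 0.95: negative float, at risk
```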

Typically, then, the first step toward integration is tying these different metrics and indicators of the schedule and EVM dimensions together at an appropriate level through the WBS or other structures.  The juxtaposition of these differing dimensions, particularly in a grid or Gantt view, gives us the ability to determine whether there is a correlation between the various indicators.  We can then determine, over time, the strength and consistency of the various correlations, and go one step further to conclude which of them point to causation.  Only then do we get to “best practice.”  This hard work to get to best practice is still in its infancy.
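
A sketch of that correlation step might look like the following: schedule and EVM indicators collected per period for one WBS element, then tested for how strongly they move together over time.  The element and its monthly values are invented for illustration:

```python
# Correlating schedule-dimension and EVM-dimension indicators over time.
from statistics import correlation  # available in Python 3.10+

# Six monthly observations for one hypothetical WBS element
bei = [1.00, 0.97, 0.93, 0.90, 0.86, 0.84]   # schedule dimension
spi = [1.00, 0.98, 0.95, 0.93, 0.90, 0.88]   # EVM dimension
cpi = [1.02, 1.01, 0.99, 0.96, 0.94, 0.91]

print(f"BEI vs SPI: r = {correlation(bei, spi):.2f}")
print(f"BEI vs CPI: r = {correlation(bei, cpi):.2f}")

# A consistently strong r across many periods and WBS elements is evidence
# of correlation only; establishing causation requires further analysis.
```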

But this is only the first step toward “integrated” performance measurement.  There are other areas of integration that are needed to give us a multidimensional view of what is happening in terms of project performance.  Risk is certainly one additional area–and a commonly heard one–but I want to take this a step further.

Among my various jobs in the past was business management within a project management organization.  This usually translated into financial management, but not traditional financial management, which focuses on the needs of the enterprise.  Instead, I am referring to project financial management, which is a horse of a different color: it is focused at the micro-programmatic level on both schedule and resource management, given that planned activities and the resources assigned to them must be funded.

Thus, having the funding in place to execute the work is the antecedent and, I would argue, the overriding factor in project success.  Outside of construction project management, where the focus on cash flow is a truism, we see this play out in publicly funded project management through the budget hearing process.  Even when we are dealing with multiyear R&D funding, the project goes through this same process.  During each review, financial risk is assessed to ensure that work is being performed and budget (program) is being executed.  Earned value will determine the variance between the financial plan and the value of the execution, but the level of funding, or cash flow, will determine what gets done during any particular period of time.  The burn rate (expenditure) is the proof that things are getting done, even if the value earned is less than what is actually expended.
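
A minimal sketch of this funding view, with invented quarterly figures, compares the burn rate against both the funds released and the value earned in each period:

```python
# Burn rate vs. released funding vs. earned value.  All figures hypothetical.

periods = ["Q1", "Q2", "Q3"]
funded  = [500, 500, 400]   # funds released per quarter ($K)
actuals = [480, 510, 400]   # expenditure (burn rate) per quarter ($K)
earned  = [450, 470, 350]   # earned value per quarter ($K)

for q, f, ac, ev in zip(periods, funded, actuals, earned):
    print(f"{q}: burn {ac}/{f} funded ({ac/f:.0%}), "
          f"value earned per $ spent = {ev/ac:.2f}")

# Q2 shows spending above the released funding: whatever the EVM baseline
# says, the work that gets done next period is capped by cash flow.
```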

In public funding of projects, especially in aerospace and defense (A&D), having the proper “color” of money (R&D, Operations & Maintenance, etc.) available at the right time is oftentimes a better predictor of project success than metrics and indicators that assume the planned budget, schedule, and resources will be provided to support the baseline.  But things change, including the appropriation and release of funds.  As a result, any “best practice” that confines itself to only one or two of the dimensions of project assessment fails to meet the definition.

In the words of Gus Grissom in The Right Stuff, “No bucks, no Buck Rogers.”


I’ve Got Your Number — Types of Project Measurement and Services Contracts

Glen Alleman reminds us at his blog that we measure things for a reason and that they include three general types: measures of effectiveness, measures of performance, and key performance parameters.

Understanding the difference between these types of measurement is key, I think, to defining what we mean by such terms as integrated project management and in understanding the significance of differing project and contract management approaches based on industry and contract type.

For example, project management focused on commodities, with their price volatility, emphasizes schedule and resource management. Cost performance (earned value), where it exists, is measured by time in lieu of volume- or value-based performance. I have often been engaged in testy conversations in which those involved in commodity-based PM insist that they have been using Earned Value Management (EVM) for as long as the U.S.-based aerospace and defense industry has (though the methodology was born in the latter). But when one scratches the surface, the approaches differ markedly in the details of how value and performance are determined, and so they should, given the different business environments in which enterprises in each of these industries operate.

So what is the difference in these measures? In borrowing from Glen’s categories, I would like to posit a simple definitional model as follows:

Measures of Effectiveness – are qualitative measures of achievement against the goals of the project;

Measures of Performance – are quantitative measures against a plan or baseline in execution of the project plan;

Key Performance Parameters – are the minimally acceptable thresholds of achievement in the project or effort.

As you may guess, there is sometimes overlap and confusion regarding the category into which a particular measurement falls. This confusion has been exacerbated by efforts to define key performance indicators (KPIs) by industry, giving the impression that measures are exclusive to a particular activity. This is sometimes, but not always, the case.

So when we talk of integrated project management, we are not accepting that any particular method of measurement has primacy over the others, nor that it subsumes them. Earned Value Management (EVM) and schedule performance are clearly performance measures. Qualitative measures oftentimes gauge achievement of technical aspects of the end-item application being produced. This is not the same as technical performance measurement (TPM), which measures technical achievement against a plan and is therefore a performance measure. Technical achievement may inform our performance measurement systems, and it is best if it does. It may also inform our key performance parameters, since exceeding a minimally acceptable threshold obviously helps us to determine success or failure in the end. The difference is the method of measurement. In a truly integrated system the measurement of one element informs the others. At present, however, these systems tend to be stove-piped.

It becomes clear, then, that approaches vary by industry, as in the EVM example above, and, in an example that I have seen most recently, by contract type. This insight is particularly important because all too often EVM is viewed as synonymous with performance measurement, which it is not. Services contracts require structure in measurement as much as R&D-focused production contracts do, particularly because they increasingly take up a large part of an enterprise’s resources. But EVM may not be appropriate to them.

So for our notional example, let us say that we are responsible for managing an entity’s IT support organization. There are types of equipment (PCs, tablet computers, smartphones, etc.) that must be kept operational based on the importance of the end user. These items of hardware use firmware and software that must be updated and managed. Our contract establishes minimal operational parameters that allow us to determine if we are at least meeting the basic requirements and will not be terminated for cause. The contract also provides incentives to encourage us to exceed the minimums.

The sites we support are geographically dispersed. We have to maintain a help desk, but we must also have people who can come onsite and provide direct labor to set up new systems or fix existing ones, and the sites and personnel must be supported within particular time frames: one hour, two hours, twenty-four hours, etc.

In setting up our measurement systems, the standard practice is to start with the key performance parameters. Typically we will also measure response times by site and personnel level, record our help desk calls, and track qualitative aspects of the work: How helpful is the help desk? Do calls get answered at the first contact? Are our personnel friendly and courteous? What kinds of hardware and software problems do we encounter? We collect our data from a variety of one-off and specialized sources and then generate reports from these systems. Many times we will focus on those reports that allow us to determine whether the incentive will be paid.
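
A sketch of the KPP check for this notional contract might look as follows; the support tiers, thresholds, and response data are all invented, not drawn from any actual contract:

```python
# Response times measured against hypothetical contractual KPP thresholds.

kpp_hours = {"tier_1": 1, "tier_2": 2, "tier_3": 24}   # max allowed response

responses = [                      # (support tier, actual response in hours)
    ("tier_1", 0.8), ("tier_1", 1.3),    # one miss in tier 1
    ("tier_2", 1.5), ("tier_3", 20.0),
]

hits = {tier: [0, 0] for tier in kpp_hours}             # [met, total]
for tier, hours in responses:
    hits[tier][1] += 1
    hits[tier][0] += hours <= kpp_hours[tier]

for tier, (met, total) in hits.items():
    if total:
        print(f"{tier}: {met}/{total} within KPP ({met/total:.0%})")
```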

Among all of this data we may be able to discern certain things: if the contract is costing more or less than we anticipated, if we are fulfilling our contractual obligations, if our personnel pools are growing or shrinking, if we are good at what we do on a day-to-day basis, and if it looks as if our margin will be met. But what these systems do not do is allow us to operate the organization as a project, nor do they allow us to make adjustments in a timely manner.

Only through integration and aggregation can we know, for example: how the demand for certain services is affecting our resource needs by geographical location and level of service; where, on a real-time basis, we need to make adjustments in personnel and training; whether we are losing or achieving our margin by location, labor type, equipment type, and hardware versus software; what our balance sheets look like by location, equipment type, and software type; whether there is a learning curve; and whether we can make intermediate adjustments to achieve the incentive thresholds before the result is written in stone. Having this information also allows us to manage expectations, factually inform perceptions, and improve customer relations.
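
As a sketch of that aggregation, the fragment below rolls hypothetical service tickets up by location and labor type so that margin and demand can be watched as the work proceeds; the sites and figures are invented:

```python
# Rolling up service tickets by (site, labor type) to watch margin in flight.
from collections import defaultdict

tickets = [
    # (site, labor_type, hours, billed, cost) -- all hypothetical
    ("Norfolk",   "help_desk", 1.0,  90,  60),
    ("Norfolk",   "onsite",    3.0, 360, 290),
    ("San Diego", "onsite",    2.0, 240, 250),  # losing money here
    ("San Diego", "help_desk", 0.5,  45,  30),
]

rollup = defaultdict(lambda: {"hours": 0.0, "billed": 0, "cost": 0})
for site, labor, hours, billed, cost in tickets:
    r = rollup[(site, labor)]
    r["hours"] += hours
    r["billed"] += billed
    r["cost"] += cost

for (site, labor), r in sorted(rollup.items()):
    margin = (r["billed"] - r["cost"]) / r["billed"]
    print(f"{site:10s} {labor:10s} {r['hours']:4.1f} h  margin {margin:6.1%}")
```

The same rollup, run by equipment type or software type, would surface the other views described above; the point is that the one-off sources must feed a common structure before any of these questions can be answered in time to act.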

What is clear from this example is that “not doing EVM” does not make measurement easy, nor does it imply simplification or the absence of measurement. Instead, understanding the nature of the work allows us to identify the measures, within their proper categories, that need to be applied by contract type and industry. So while EVM may not apply to services contracts, we know that certain new aggregations do apply.

For many years we have intuitively known that construction and maintenance efforts are more schedule-focused, that oil and gas exploration is more resource- and risk-focused, and that aircraft, satellites, and ships are more performance-focused. I would posit that now is the time to quantify and formalize these commonalities and differences. This also makes an integrated approach not simply a “nice to have” capability, but an essential one in managing our enterprises and the projects within them.
