Rear View Mirror — Correcting a Project Management Fallacy

“The past is never dead. It’s not even past.” —  William Faulkner, Requiem for a Nun

Over the years I and others have briefed project managers on project performance using key performance parameters (KPPs), earned value management, schedule analysis, business analytics, and what we now call predictive analytics. Oftentimes some set of figures will be critiqued as ineffective or unhelpful: the analytics “only look in the rear view mirror” and “tell me what I already know.”

In approaching this critique, it is useful to understand Faulkner’s oft-cited quote above. When we walk down a street, let us say a busy city street in any community of good size, we are walking in the past. The moment we experience something, it is in the past. If we note the present condition of our city street, we will see that every building, park, sidewalk, and individual we pass has a history. These structures and people are as much driven by their pasts as by their expectations for the future.

Now let us take a snapshot of our street. In doing so we can determine population density, ethnic demographics, property values, crime rate, and numerous other indices and parameters regarding what is there. No doubt, if we stop here we are just “looking in the rear view mirror” and noting what we may or may not already know, however certain our anecdotal filter may make us.

Now, let us say that we have an affinity for this street and may want to live there. We will take the present indices and parameters noted above, which describe our geographical environment, and trend them. We may find that housing prices are rising or falling, that crime is rising or falling, etc. If we delve into the street’s ownership history we may find that one individual or family possesses more than one structure, or that there is a great deal of diversity. We may find that a Superfund site is not too far away. We may find that economic demographics point to stagnation of the local economy, or that the neighborhood is becoming gentrified. Simply time-phasing and delving into history–mapping out the trends and noting the significant historical background–provides us with enough information to determine whether our affinity is grounded in reality or practicality.

But let us say that, despite the negatives, we feel that this is the next up-and-coming neighborhood. We would need signs to make that determination. For example, what kinds of businesses have moved into the neighborhood, and how many? What demographic do they target? There are many other questions that can be asked to test whether our economic analysis is valid–and that analysis would need to be informed by risk.

The fact of the matter is that we are always living with the past: the cumulative effect of the past actions of numerous individuals, including our own, and of organizations, groups, and institutions, not to mention larger economic forces well beyond our control. Any desired change in the trajectory of the system being evaluated must identify the elements that can be impacted or influenced, along with an analysis of the effort that must be expended to bring about the change.

This is a scientific fact, demonstrated countless times in physics, biology, and other disciplines. A deterministic universe, which allows for some uncertainty at any given point at our level of existence, constrains the possible within very small limits of possibility and even smaller limits of probability. In plain language, the future is usually a function of the past.

Any one number or index, no doubt, does not necessarily tell us something important. But it can, if it is relevant and material, and if it prompts the further inquiry essential to project performance.

For example, let us look at an integrated master schedule that underlies a typical medium-sized project.

[Chart: notional integrated master schedule for a medium-sized project]

We will select a couple of metrics that indicate project schedule performance. In the case below we are looking at task hits and misses and the Baseline Execution Index (BEI), a popular index that measures efficiency in meeting the baseline schedule plan.
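Mechanically, both metrics reduce to simple counts. Here is a minimal sketch in Python, assuming a task list with baseline and actual finish dates; the Task type, its field names, and the handling of unfinished tasks are illustrative assumptions rather than a standard implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Task:
    baseline_finish: date          # when the baseline said the task would finish
    actual_finish: Optional[date]  # None if the task has not yet finished

def baseline_execution_index(tasks: list[Task], status_date: date) -> float:
    """BEI: tasks actually completed divided by tasks baselined to finish
    on or before the status date."""
    baselined = [t for t in tasks if t.baseline_finish <= status_date]
    completed = [t for t in tasks if t.actual_finish and t.actual_finish <= status_date]
    return len(completed) / len(baselined) if baselined else 1.0

def hit_ratio(tasks: list[Task], status_date: date) -> float:
    """Share of due tasks that finished on or before their baseline date (a 'hit')."""
    due = [t for t in tasks if t.baseline_finish <= status_date]
    hits = [t for t in due if t.actual_finish and t.actual_finish <= t.baseline_finish]
    return len(hits) / len(due) if due else 1.0
```

A BEI at or near 1.0 suggests execution is tracking the baseline; values well below 1.0 flag the slippage the charts make visible.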

Note that the chart above plots this performance over time. What will it take to improve our efficiency? As a quick logic check on realism, let’s take a look at the work to date, with all of its late starts and finishes.

Our bow waves track the cumulative effort to date.  As we work to clear missed starts or missed finishes in a project we also must devote resources to the accomplishment of current work that is still in line with the baseline.  What this means is that additional resources may need to be devoted to particular areas of work accomplishment or risk handling.
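Reusing the Task type from the sketch above, one way to quantify the bow wave is to count, at each status date, the tasks that should have finished but remain open: the backlog that must be cleared on top of current in-line work. This is a hedged illustration, not a standard formula; missed starts could be tracked analogously with baseline start dates.

```python
from datetime import date

def bow_wave(tasks: list[Task], status_dates: list[date]) -> list[int]:
    """For each status date, count tasks baselined to finish by that date
    that are still open -- the accumulating backlog of missed work."""
    wave = []
    for sd in status_dates:
        open_late = sum(
            1 for t in tasks
            if t.baseline_finish <= sd
            and (t.actual_finish is None or t.actual_finish > sd)
        )
        wave.append(open_late)
    return wave
```

A wave that grows from period to period signals that recovery is consuming resources faster than the project can replenish them.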

This is not, of course, the limit of the analysis that should be undertaken. The point here is that at every point in history, in every system, we stand on the cumulative efforts, risks, failures, successes, and actions of everyone who came before us. At the microeconomic level this is also true within our project management systems. There are also external constraints and influences that will define the framing assumptions and the range of possibilities and probabilities involved in project outcomes.

The sheer magnitude of the bow waves that we face in all endeavors will often be too great to fully overcome. As an analogy, a bow wave in complex systems is more akin to a tsunami than to the ordinary waves that crash along our shores. All of the force of all of the collective actions that have preceded the present time will drive our trajectory.

This is known as inertia.

Identifying and understanding the contributors to the inertia that is driving our performance is important to knowing what to do. Thus, looking in the rear view mirror matters, and the phrase is not a valid argument for ignoring an inconvenient metric that may only require additional context. Furthermore, knowing where we sit is significant in its own right. Knowing the factors that put us where we are–and the effort it will take to influence our destiny–will guide what is and is not possible in our future actions.

Note:  All charted data is notional and is not from an actual project.

Ace of Base(line) — A New Paper on Building a Credible PMB

Glen Alleman, a leading consultant in program management (who also has a blog that I follow), Tom Coonce of the Institute for Defense Analyses, and Rick Price of Lockheed Martin have jointly published a new paper in the College of Performance Management’s The Measurable News entitled “Building A Credible Performance Measurement Baseline.”

The elements of their proposal for constructing a credible PMB, from my initial reading, are as follows:

1.  Rather than a statement of requirements, decision-makers should first conduct a capabilities gap analysis to determine the units of effectiveness and performance. This ensures that program management decision-makers have a good idea of what “done” looks like, and that performance measurements aren’t disconnected from these essential elements of programmatic success.

2.  Following from item 1 above, the technical plan and the programmatic plan should always be in sync.

3.  Earned value management is but one of many methods for assessing programmatic performance in its present state. At least that is how I interpret what they are saying, because later in their paper they propose a way to ensure that EVM does not stray from the elements that define technical achievement. But EVM in itself is not the be-all and end-all of performance management–and it fails in many ways to anticipate where the technical and programmatic plans diverge.

4.  All work toward achieving the elements of effectiveness and performance is first constructed and given structure in the WBS. Thus, the WBS ties together all elements of the project plan. In addition, technical and programmatic risk must be assessed at this stage, rather than further down the line after the IMS has been constructed.

5.  The Integrated Master Plan (IMP) is constructed to incorporate the high-level work plans that are manifested through major programmatic events and milestones. It is through the IMP that EVM is then connected to the technical performance measures that affect the assessment of work package completion, which will be reflected in the detailed Integrated Master Schedule (IMS). This establishes not only the importance of the IMP in ensuring the linkage of technical and programmatic plans, but also makes the IMP an essential artifact that has all too often been seen as optional. That omission probably explains why so many project managers are “surprised” when they construct aircraft that can’t land on the deck of a carrier, or satellites that can’t communicate in orbit, even though the programs are well within the tolerance bands of cost and schedule variances.

6.  Construct the IMS taking into account the technical, qualitative, and quantitative risks associated with the events and milestones identified in the IMP. Construct risk mitigation/handling where possible, and set aside both cost and schedule margins for irreducible uncertainties and management reserve (MR) for reducible risks, keeping in mind that margin is within the PMB while MR is above the PMB but within the CBB. Furthermore, schedule margin should be transitioned from a deterministic basis to a probabilistic one–constructing sufficient margin to protect essential activities (see the sketch following this list). Cost margin in work packages should be constructed in the same manner, based on probabilistic models that determine the chances of making a risk reducible until reaching the point of irreducibility. Once again, all of these elements tie back to the WBS.

7.  Cost and schedule margin are not the same as slack or float.  Margin is reserve.  Slack or float is equivalent to overruns and underruns.  The issue here in practice is going to be to get the oversight agencies to leave margin alone.  All too often this is viewed as “free” money to be harvested.

8.  Cost, schedule, and technical performance measurement, tied together at the elemental level of work–informing each other as a cohesive set of indicators that are interrelated–and tied back to the WBS, is the only valid method of ensuring accurate project performance measurement and the basis for programmatic success.
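For item 6, the shift from deterministic to probabilistic schedule margin can be illustrated with a small Monte Carlo sketch. This is a minimal example, assuming triangular duration distributions for a simple serial chain of activities; the three-point estimates and the 80th-percentile confidence target are illustrative assumptions, not the authors’ prescription.

```python
import random

# Three-point (minimum, most likely, maximum) duration estimates in days
# for a serial chain of activities -- notional values only.
activities = [(10, 12, 20), (5, 8, 15), (20, 25, 40)]

def simulate_totals(n_trials: int = 10_000) -> list[float]:
    """Sample a total chain duration per trial from triangular distributions."""
    return [
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)
        for _ in range(n_trials)
    ]

totals = sorted(simulate_totals())
deterministic = sum(mode for _, mode, _ in activities)  # the single-point plan
p80 = totals[int(0.8 * len(totals))]                    # duration at 80% confidence
margin = p80 - deterministic                            # schedule margin to set aside
print(f"plan: {deterministic} days, P80: {p80:.1f} days, margin: {margin:.1f} days")
```

The point of the exercise is that margin is sized by the risk model rather than by rule of thumb, which is what makes it defensible when oversight agencies come looking for “free” money.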

Most interestingly, in conclusion the authors present, as a simplified case, a historical example of how their method proves out as both a common-sense and completely reasonable approach: the Wright brothers’ proof of concept for the U.S. Army in 1908. The historical documents in that case show that the Army had constructed elements of effectiveness and performance for determining whether it would purchase an airplane from the brothers. All measures of project success and failure would be assessed against those elements–which combined cost, schedule, and technical achievement. I was particularly intrigued that the weight of the aircraft was part of the assessment–a common point of argument from critics of the use of technical performance–and the paper demonstrates how the Wright brothers actually assessed and mitigated the risk associated with that measure of performance over time.

My initial impression of the paper is that it is a significant step forward in bringing together the practical lessons learned from both the successes and failures of project performance. Their recommendations are a welcome remedy for many of the deficiencies implicit in our project management systems and procedures.

I also believe that, as an integral part of the process of constructing the project artifacts, it is a superior approach to the one I initially proposed in 1997, which assumed that TPM would always be applied as an additional process informing cost and schedule at the end of each assessment period. I look forward to hearing the presentation at the next Integrated Program Management Conference, at which I will attempt some live blogging.

I’ve Got Your Number — Types of Project Measurement and Services Contracts

Glen Alleman reminds us at his blog that we measure things for a reason and that they include three general types: measures of effectiveness, measures of performance, and key performance parameters.

Understanding the difference between these types of measurement is key, I think, to defining what we mean by such terms as integrated project management, and to understanding the significance of differing project and contract management approaches based on industry and contract type.

For example, project management focused on commodities, with their price volatility, emphasizes schedule and resource management. Cost performance (earned value), where it exists, is measured by time in lieu of volume- or value-based performance. I have often been engaged in testy conversations in which those involved in commodity-based project management insist that they have been using earned value management (EVM) for as long as the U.S.-based aerospace and defense industry (though the methodology was born in the latter). But when one scratches the surface, the details of how value and performance are determined are markedly different–and so they should be, given the different business environments in which enterprises in each of these industries operate.

So what is the difference in these measures? In borrowing from Glen’s categories, I would like to posit a simple definitional model as follows:

Measures of Effectiveness – are qualitative measures of achievement against the goals of the project;

Measures of Performance – are quantitative measures against a plan or baseline in execution of the project plan; and

Key Performance Parameters – are the minimally acceptable thresholds of achievement in the project or effort.

As you may guess, there is sometimes overlap and confusion regarding which category a particular measurement falls into. This confusion has been exacerbated by efforts to define key performance indicators (KPIs) by industry, giving the impression that measures are exclusive to a particular activity. While this is sometimes the case, it is not always so.

So when we talk of integrated project management we are not accepting that any particular method of measurement has primacy over the others, nor subsumes them. Earned value management (EVM) and schedule performance are clearly performance measures. Qualitative measures oftentimes gauge achievement of technical aspects of the end item being produced. This is not the same as technical performance measurement (TPM), which measures technical achievement against a plan–a performance measure. Technical achievement may inform our performance measurement systems–and it is best if it does. It may also inform our key performance parameters, since exceeding a minimally acceptable threshold obviously helps us to determine success or failure in the end. The difference is the method of measurement. In a truly integrated system the measurement of one element informs the others. For the moment, however, these systems tend to be stove-piped.
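As a toy illustration of technical achievement informing a performance measure, consider scaling a work package’s earned value by the ratio of demonstrated to planned technical performance. This is one simplified convention sketched on my own assumptions (the function name, the capping rule, and the example figures are all hypothetical), not a prescribed TPM standard.

```python
def tpm_adjusted_earned_value(bcwp: float, achieved: float, planned: float) -> float:
    """Scale earned value (BCWP) by achieved vs. planned technical performance,
    capped at 100% so exceeding the plan does not inflate earned value."""
    ratio = min(achieved / planned, 1.0) if planned else 1.0
    return bcwp * ratio

# Example: a work package claims $120,000 of earned value, but demonstrated
# throughput is 85 units/hr against a planned 100 units/hr.
print(tpm_adjusted_earned_value(bcwp=120_000.0, achieved=85.0, planned=100.0))  # 102000.0
```

The design point is simply that a technical shortfall shows up in the performance measure immediately, instead of surfacing later as a “surprise” divergence between the technical and programmatic plans.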

It becomes clear, then, that the variation in approaches differs by industry, as in the example on EVM above, and–in an example that I have seen most recently–by contract type. This insight is particularly important because all too often EVM is viewed as being synonymous with performance measurement, which it is not. Services contracts require structure in measurement as much as R&D-focused production contracts do, particularly because they increasingly take up a large part of an enterprise’s resources. But EVM may not be appropriate.

So for our notional example, let us say that we are responsible for managing an entity’s IT support organization. There are types of equipment (PCs, tablet computers, smartphones, etc.) that must be kept operational based on the importance of the end user. These items of hardware use firmware and software that must be updated and managed. Our contract establishes minimal operational parameters that allow us to determine if we are at least meeting the basic requirements and will not be terminated for cause. The contract also provides incentives to encourage us to exceed the minimums.

The sites we support are geographically dispersed. We have to maintain a help desk, but we must also have people who can come onsite and provide direct labor to set up new systems or fix existing ones–and the sites and personnel must be supported within particular time frames: one hour, two hours, twenty-four hours, etc.
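Those response-time windows translate directly into key performance parameters that can be checked mechanically. Here is a minimal sketch; the tier names and ticket fields are my own invention, and only the one-, two-, and twenty-four-hour windows come from the notional contract above.

```python
from datetime import datetime, timedelta

# Required response windows by tier -- the windows come from the notional
# contract; the tier names are assumed for illustration.
SLA = {
    "critical": timedelta(hours=1),
    "standard": timedelta(hours=2),
    "routine": timedelta(hours=24),
}

def tier_compliance(tickets: list[tuple[str, datetime, datetime]]) -> dict[str, float]:
    """tickets: (tier, opened, responded). Returns the fraction of tickets
    answered within the SLA window, per tier."""
    met: dict[str, list[bool]] = {}
    for tier, opened, responded in tickets:
        met.setdefault(tier, []).append(responded - opened <= SLA[tier])
    return {tier: sum(oks) / len(oks) for tier, oks in met.items()}
```

Comparing each tier’s compliance rate against the contract minimums tells us whether we are merely avoiding termination for cause or actually earning the incentive.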

In setting up our measurement systems the standard practice is to start with the key performance parameters. Typically we will also measure response times by site and personnel level, record our help desk calls, and track qualitative aspects of the work: How helpful is the help desk? Do calls get answered at the first contact? Are our personnel friendly and courteous? What kinds of hardware and software problems do we encounter? We collect our data from a variety of one-off and specialized sources, and then we generate reports from these systems. Many times we will focus on those that will allow us to determine if the incentive will be paid.

Among all of this data we may be able to discern certain things: if the contract is costing more or less than we anticipated, if we are fulfilling our contractual obligations, if our personnel pools are growing or shrinking, if we are good at what we do on a day-to-day basis, and if it looks as if our margin will be met. But what these systems do not do is allow us to operate the organization as a project, nor do they allow us to make adjustments in a timely manner.

Only through integration and aggregation can we know, for example, how the demand for certain services is affecting our resource requirements by geographical location and level of service; where, on a real-time basis, we need to make adjustments in personnel and training; whether we are losing or achieving our margin by location, labor type, equipment type, and hardware versus software; what our balance sheets look like (by location, by equipment type, by software type, etc.); whether there is a learning curve; and whether we can make intermediate adjustments to achieve the incentive thresholds before the result is written in stone. Having this information also allows us to manage expectations, factually inform perceptions, and improve customer relations.
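The mechanics of that aggregation are not exotic: once the one-off sources feed a common record structure, margin can be rolled up across any combination of dimensions. A minimal sketch follows; the record fields and figures are invented for illustration.

```python
from collections import defaultdict

# Each record: (site, labor_type, equipment, revenue, cost) -- notional fields.
records = [
    ("north", "field_tech", "laptop", 400.0, 310.0),
    ("north", "help_desk", "phone", 150.0, 160.0),
    ("south", "field_tech", "laptop", 420.0, 300.0),
]

FIELDS = ("site", "labor_type", "equipment")

def margin_by(rows, key_fields):
    """Aggregate margin (revenue - cost) over any combination of dimensions,
    so one dataset answers 'by site', 'by labor type', 'by site and equipment'."""
    idx = [FIELDS.index(k) for k in key_fields]
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[i] for i in idx)
        totals[key] += row[3] - row[4]
    return dict(totals)

print(margin_by(records, ["site"]))        # {('north',): 80.0, ('south',): 120.0}
print(margin_by(records, ["labor_type"]))  # {('field_tech',): 210.0, ('help_desk',): -10.0}
```

The same roll-up run period over period is what exposes the learning curve and lets us adjust before the incentive result is written in stone.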

What is clear from this example is that “not doing EVM” does not make measurement easy, nor does it imply simplification or the absence of measurement. Instead, understanding the nature of the work allows us to identify the measures, within their proper categories, that need to be applied by contract type and/or industry. So while EVM may not apply to services contracts, we know that certain new aggregations do apply.

For many years we have intuitively known that construction and maintenance efforts are more schedule-focused, that oil and gas exploration is more resource- and risk-focused, and that aircraft, satellites, and ships are more performance-focused. I would posit that now is the time to quantify and formalize the commonalities and differences. This also makes an integrated approach not simply a “nice to have” capability, but an essential one in managing our enterprises and the projects within them.

Note: This post was updated to correct grammatical errors.