As many of my colleagues in project management know, I wrote a series of articles on the application of technical performance risk in project management back in 1997, one of which made me an award recipient from the institution now known as Defense Acquisition University. Over the years various researchers and project organizations have asked me if I have any additional thoughts on the subject and the response up until now has been: no. From a practical standpoint, other responsibilities took me away from the domain of determining the best way of recording technical achievement in complex projects. Furthermore, I felt that the field was not ripe for further development until there were mathematics and statistical methods that could better approach the behavior of complex adaptive systems.
But now, after almost 20 years, there is an issue that has been nagging at me since the publication of the results of the project studies that I led from 1995 through 1997. It is this: project managers resisted the application of any measure of technical achievement, and its integration with cost performance, on the grounds that the best anyone can do is 100%. “All TPM can do is make my performance look worse,” was the complaint. One would think this observation would face no opposition, especially in such an engineering-dependent industry, since, at least in this universe, the best you can do is 100%.* But, of course, we weren’t talking about the same thing, and I have heard this refrain again at recent conferences and meetings.
To be honest, in our recommended solution in 1997, we did not take things as far as we could have. It was always intended to be the first but not the last word regarding this issue. And there have been some interesting things published about this issue recently, which I noted in this post.
In the discipline of project management in general, and among earned value practitioners in particular, the performance being measured oftentimes exceeds 100%. But therein lies the difference. What is being measured as exceeding 100% is progress against both a time-based and fiscally-based linear plan. Most of the physical world doesn’t behave this way, nor can it be measured this way. When measuring the attributes of a system or component against a set of physical or performance thresholds, linearity against a human-imposed plan oftentimes goes out the window.
But a linear progression can be imposed on the development toward the technical specification. So the next question is: how do we measure progress along the development curve and over its duration?
The short answer, without repeating a summary of the research (which is linked above), is through risk assessment. The method that we used back in 1997 was a distribution curve that determined the probability of reaching the next step in the technical development. This was based on well-proven systems engineering techniques that had been used in industry for many years, particularly at Martin Marietta prior to the Lockheed Martin merger. Technical risk assessment, even using simplistic 0-50-80-100 curves, provides a good approximation of probability and risk between each increment of development, though now there are more robust models. For example, the use of Bayesian methodology, which introduces mathematical rigor into statistics, as outlined in this post by Eliezer Yudkowsky. (As an aside, I strongly recommend his blogs for anyone interested in the cutting edge of rational inquiry and AI.)
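The increment-by-increment idea behind such a curve can be sketched in a few lines. This is a minimal illustration, not the method from the original research: the milestone names, achievement levels, and probabilities are all hypothetical.

```python
# Hypothetical sketch of a simple 0-50-80-100 milestone curve: each step
# carries a planned achievement level and an assessed probability of
# reaching the next increment of technical development.
milestones = [
    {"name": "design review",  "achievement": 0.50, "p_success": 0.80},
    {"name": "prototype test", "achievement": 0.80, "p_success": 0.60},
    {"name": "qualification",  "achievement": 1.00, "p_success": 0.90},
]

def expected_progress(current_achievement: float, next_milestone: dict) -> float:
    """Risk-adjusted expected progress between the current and next increment."""
    gain = next_milestone["achievement"] - current_achievement
    return current_achievement + gain * next_milestone["p_success"]

# At 50% achievement, the risk-adjusted expectation for the prototype step:
print(round(expected_progress(0.50, milestones[1]), 2))  # 0.68
```

The point of the weighting is that claimed progress between increments is discounted by the assessed risk of actually reaching the next step, rather than taken at face value against a linear plan.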
So technical measurement is pretty well proven. But the issue that then presents itself (and presented itself in 1997) is how to derive value from technical performance. Value is a horse of a different color. The two bugaboos that were presented as impassable roadblocks were weight and test failure.
Let’s take weight first. On one of my recent trips I found myself seated in an Embraer E-jet. These are fairly small aircraft, especially compared to conventional commercial aircraft, and are lightweight. As such, they rely on a proper distribution and balance of weight, especially if one finds oneself at 5,000 feet above sea level with the long runway shut down, a 10-20 mph crosswind, and a mountain range rising above the valley floor in the direction of takeoff. So the flight crew, when the cockpit instruments indicated a weight disparity, shifted baggage from belly stowage to the overhead compartments in the main cabin. What was apparent is that weight is not an ad hoc measurement. The aircraft’s weight distribution and tolerances are documented, and can be monitored as part of operations.
When engineering an aircraft, each component is assigned its weight. Needless to say, weight is then allocated and measured as part of the development of the subsystems of the aircraft. One would not measure the overall weight of the aircraft or end item without ensuring that the components and subsystems conform to their weight limitations. The overall weight limitation of an aircraft will vary depending on mission and use. If the aircraft is a commercial passenger airplane built to take off and land on modern runways, weight limitations are not as rigorous. If the aircraft in question is going to take off and land on a carrier deck at sea, then weight limitations become more critical. (Side note: I also learned these principles in detail while serving on active duty at NAS Norfolk and working with the Navy Air Depot there.) Aside from aircraft, weight is important in a host of other items, from laptops to ships. In the latter case, with which I am also intimately familiar, weight is important in balancing the ship and its ability to make way in the water (and perform its other missions).
So given that weight is an allocated element of performance within subsystem or component development, we gain several useful bits of information. First, we can aggregate and measure the weight of the entire end item to track whether we are meeting the limitations of the item. Second, we can perform trade-offs. If a subsystem or component can be made with a lighter material or more efficiently weight-wise, then we have more leeway (maybe) somewhere else. Conversely, if we need weight for balance and the component or subsystem is too light, we need to figure out how to add weight or ballast. So measuring and recording weight is not a problem. Finally, by allocating a key technical specification to the work and tying it to performance, we avoid subjectivity.
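The allocation-and-aggregation logic above can be sketched simply. All subsystem names and figures here are hypothetical, chosen only to show how per-element budgets roll up to a total margin that drives trade-offs.

```python
# Illustrative sketch: weight is allocated to subsystems, aggregated to the
# end item, and the remaining margin shows the leeway available for trade-offs.
allocations = {      # allocated weight budget per subsystem (kg), hypothetical
    "airframe": 12000,
    "avionics": 900,
    "interior": 2500,
}
actuals = {          # current measured/estimated weight (kg), hypothetical
    "airframe": 11800,
    "avionics": 950,
    "interior": 2400,
}

total_budget = sum(allocations.values())
total_actual = sum(actuals.values())
margin = total_budget - total_actual  # positive: leeway; negative: overweight

for name in allocations:
    delta = allocations[name] - actuals[name]
    status = "under" if delta >= 0 else "OVER"
    print(f"{name:10s} {status} budget by {abs(delta)} kg")
print(f"total margin: {margin} kg")
```

Note that the avionics element can be over its allocation while the end item as a whole remains under the limit, which is exactly the kind of trade-off visibility the allocation provides.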
So how do we show value? We do so by applying the same principles as any other method of earned value. Each item of work is covered by a Work Breakdown Structure (WBS), which is tied (hopefully) to an Integrated Master Schedule (IMS). A Performance Management Baseline (PMB) is applied to the WBS (or sometimes through a resource-loaded IMS). If we have properly constructed our Integrated Management Plan (IMP) prior to the IMS, we will have clearly tied the technical measures to the structure. I acknowledge that not every program performs an IMP, but stating so is really an acknowledgement of a clear deficiency in our systems, especially involving complex R&D programs. Since our work is measured in short increments against a PMB, we can claim 100% of a technical specification but be ahead of plan for the WBS elements involved.
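The last point can be made concrete with the standard earned value quantities. This is a hedged sketch, not a prescribed method: all dollar figures and percentages are hypothetical, and the only assumption is that the work package's earned value is informed by its technical measure rather than by subjective percent complete.

```python
# Hedged sketch: earned value for one WBS element, with percent complete
# tied to technical achievement against the allocated specification.
budget_at_completion = 200_000  # BAC for one WBS element ($), hypothetical
planned_pct = 0.40              # per the PMB at this status date
technical_pct = 0.50            # achievement against the allocated spec
actual_cost = 85_000            # ACWP ($), hypothetical

bcws = budget_at_completion * planned_pct    # planned value (BCWS)
bcwp = budget_at_completion * technical_pct  # earned value (BCWP), tied to the spec
sv = bcwp - bcws                             # schedule variance ($)
cv = bcwp - actual_cost                      # cost variance ($)

print(f"SV = {sv:+,.0f}  CV = {cv:+,.0f}")
```

Here the element has earned more than planned (positive SV) at less than the earned cost (positive CV): the technical measure caps achievement at 100% of the spec, yet the fiscal assessment against the PMB can still show the work ahead of schedule and under budget.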
It’s not as if the engineers in our industrial activities and aerospace companies have never designed a jet aircraft or some other item before. Quite a bit of expertise and engineering know-how transfers from one program to the next. There is a learning curve. The more information we collect in that regard, the more effective that curve. Hence my emphasis in recent posts on data.
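The learning curve mentioned above has a classic quantitative form, often attributed to Wright: each doubling of cumulative units reduces unit cost or effort by a fixed percentage. The 80% slope below is illustrative only, not a figure from the original post.

```python
import math

# Hedged illustration of a Wright-style learning curve: unit cost falls by a
# fixed percentage (the "slope") with each doubling of cumulative units.
def unit_cost(first_unit_cost: float, unit_number: int, slope: float = 0.80) -> float:
    """Cost of the nth unit under a log-linear learning curve."""
    b = math.log(slope) / math.log(2)  # learning exponent
    return first_unit_cost * unit_number ** b

print(round(unit_cost(100.0, 2), 1))  # 80.0 (one doubling at an 80% slope)
print(round(unit_cost(100.0, 4), 1))  # 64.0 (two doublings)
```

The more performance data a program collects, the better it can estimate its actual slope rather than assuming one, which is the practical reason for the emphasis on data.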
For testing, the approach is the same. A test can fail, that is, a rocket can explode on the pad or suffer some other mishap, but the components involved will succeed or fail based on the after-action report. At that point we will know, through allocation of the test results, where we are in terms of technical performance. While rocket science is involved in the item’s development, recording technical achievement is not rocket science.
Thus, while our measures of effectiveness, measures of performance, measures of progress, and technical performance will determine our actual achievement against a standard, our fiscal assessment of value against the PMB can still reflect whether we are ahead of schedule and below budget. What it takes is an understanding of how to allocate more rigorous measures to the WBS that are directly tied to the technical specifications. To do otherwise is to build a camel when a horse was expected or–as has been recorded in real life in previous programs–to build a satellite that cannot communicate, a Navy aircraft that cannot land on a carrier deck, a ship that cannot fight, and a vaccine that cannot be delivered and administered in the method required. We learn from our failures, and that is the value of failure.
*There are colloquial expressions that allow for 100% to be exceeded, such as exceeding 100% of the tolerance of a manufactured item or system, which essentially means to exceed its limits and, therefore, to break it.