Glen Alleman, a leading consultant in program management (who also has a blog that I follow), Tom Coonce of the Institute for Defense Analyses, and Rick Price of Lockheed Martin have jointly published a new paper in the College of Performance Management’s Measurable News entitled “Building A Credible Performance Measurement Baseline.”
The elements of their proposal for constructing a credible PMB, from my initial reading, are as follows:
1. Rather than a statement of requirements, decision-makers should first conduct a capabilities gap analysis to determine the units of effectiveness and performance. This ensures that program management decision-makers have a good idea of what “done” looks like, and ensures that performance measurements aren’t disconnected from these essential elements of programmatic success.
2. Following from item 1 above, the technical plan and the programmatic plan should always be in sync.
3. Earned value management is but one of many methods for assessing programmatic performance in its present state. At least that is how I interpret what they are saying, because later in their paper they propose a way to ensure that EVM does not stray from the elements that define technical achievement. But EVM in itself is not the be-all and end-all of performance management–and fails in many ways to anticipate where the technical and programmatic plans diverge.
4. All work in achieving the elements of effectiveness and performance are first constructed and given structure in the WBS. Thus, the WBS ties together all elements of the project plan. In addition, technical and programmatic risk must be assessed at this stage, rather than further down the line after the IMS has been constructed.
5. The Integrated Master Plan (IMP) is constructed to incorporate the high-level work plans that are manifested through major programmatic events and milestones. It is through the IMP that EVM is then connected to technical performance measures that affect the assessment of work package completion, which will be reflected in the detailed Integrated Master Schedule (IMS). This establishes not only the importance of the IMP in ensuring the linkage of technical and programmatic plans, but also makes the IMP an essential artifact that has all too often been seen as optional–which probably explains why so many project managers are “surprised” when they construct aircraft that can’t land on the deck of a carrier, or satellites that can’t communicate in orbit, though they are well within the tolerance bands of cost and schedule variances.
6. Construct the IMS taking into account the technical, qualitative, and quantitative risks associated with the events and milestones identified in the IMP. Construct risk mitigation/handling plans where possible, and set aside both cost and schedule margin for irreducible uncertainties and management reserve (MR) for reducible risks, keeping in mind that margin is within the PMB while MR is above the PMB but within the CBB. Furthermore, schedule margin should be transitioned from a deterministic one to a probabilistic one–constructing sufficient margin to protect essential activities. Cost margin in work packages should be constructed in the same manner, based on probabilistic models that determine the chances of making a risk reducible until reaching the point of irreducibility. Once again, all of these elements tie back to the WBS.
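To make the probabilistic-margin idea in item 6 concrete, here is a minimal sketch (not from the paper) of how such a margin might be sized with a simple Monte Carlo simulation. All task names, distributions, and numbers are invented for illustration: each critical-path task duration is modeled as a triangular distribution standing in for irreducible uncertainty, and margin is taken as the gap between the deterministic point estimate and a chosen confidence level.

```python
import random

random.seed(42)

# (optimistic, most_likely, pessimistic) durations in days for three
# sequential critical-path tasks -- all numbers are hypothetical.
tasks = [(10, 12, 18), (20, 24, 36), (5, 6, 10)]

def simulate_total(tasks):
    """One Monte Carlo trial: sample each task duration and sum the path."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

# Run many trials and sort the simulated total durations.
trials = sorted(simulate_total(tasks) for _ in range(10_000))

deterministic = sum(mode for _, mode, _ in tasks)  # the "point estimate" plan
p80 = trials[int(0.80 * len(trials))]              # 80th-percentile finish

# Probabilistic schedule margin: what must be set aside (inside the PMB)
# to protect the committed date at the 80% confidence level.
margin = p80 - deterministic
print(f"Deterministic duration: {deterministic:.1f} days")
print(f"P80 duration:           {p80:.1f} days")
print(f"Schedule margin needed: {margin:.1f} days")
```

Because the pessimistic tails are longer than the optimistic ones, the simulated distribution sits to the right of the deterministic plan, which is precisely why a deterministic schedule with no margin is systematically optimistic.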
7. Cost and schedule margin are not the same as slack or float. Margin is reserve. Slack or float is equivalent to overruns and underruns. The practical issue here is going to be getting the oversight agencies to leave margin alone. All too often it is viewed as “free” money to be harvested.
8. Cost, schedule, and technical performance measurement, tied together at the elemental level of work–informing each other as a cohesive set of indicators that are interrelated–and tied back to the WBS, is the only valid method of ensuring accurate project performance measurement and the basis for programmatic success.
Most interestingly, in conclusion the authors present as a simplified case a historical example of how their method proves itself out as both a common-sense and completely reasonable approach: the Wright brothers’ proof of concept for the U.S. Army in 1908. The historical documents in that case show that the Army had constructed elements of effectiveness and performance in determining whether it would purchase an airplane from the brothers. All measures of project success and failure would be assessed against those elements–which combined cost, schedule, and technical achievement. I was particularly intrigued that the weight of the aircraft was part of the assessment–a common point of argument from critics of the use of technical performance–where the paper demonstrates how the Wright brothers actually assessed and mitigated the risk associated with that measure of performance over time.
My initial impression of the paper is that it is a significant step forward in bringing together all of the practical lessons learned from both the successes and failures of project performance. Their recommendations are a welcome remedy for many of the deficiencies implicit in our project management systems and procedures.
I also believe that, as an integral part of the process of constructing the project artifacts, it is a superior approach to the one that I initially proposed in 1997, which assumed that TPM would always be applied as an additional process that would inform cost and schedule at the end of each assessment period. I look forward to hearing the presentation at the next Integrated Program Management Conference, at which I will attempt some live blogging.