A discussion at the LinkedIn site for the NDIA IPMD regarding schedule margin has raised some good insight and recommendations for this aspect of project planning and execution. Current guidance from the U.S. Department of Defense for those engaged in the level of intense project management that characterizes the industry has been somewhat vague and open to interpretation. Some of this, I think, is due to the competing proprietary lexicon from software manufacturers that have been dominant in the industry.
But mostly the change in defining this term is due to positive developments. That is, the change is due to the convergence garnered from long experience among the various PM disciplines, which allows us to more clearly define and distinguish between schedule margin, schedule buffer, schedule contingency, and schedule reserve. It is also due to the ability of more powerful generations of software to actually apply the concept in real planning, rather than as a thumb-in-the-air exercise.
Concerning this topic, Yancy Qualls of Bell Helicopter gave an excellent presentation at the January NDIA IPMD meeting in Tucson. His proposal makes a great deal of sense and, I think, is a good first step toward true integration and a more elegant conceptual solution. In his proposal, Mr. Qualls clearly defines the scheduling terminology by drawing analogies to similar concepts on the cost side. This construction certainly overcomes a lot of misconceptions about the purpose and meaning of these terms. But, I think, his analogies also imply something more significant and it is this: that there is a common linkage between establishing management reserve and schedule reserve, and there are cost/schedule equivalencies that also apply to margin, buffer, and contingency.
After all, resources must be time-phased, and these resources are dollarized. But usually the relationship stops there and is distinguished by the characteristic being measured: measures of value or measures of timing. That is, the value of the work accomplished against the Performance Measurement Baseline (PMB) is different from the various measures of progress recorded against the Integrated Master Schedule (IMS). This is why we look at both cost and schedule variances on the value of work performed from a cost perspective, and at physical accomplishment against time. These are fundamental concepts.
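The cost-side variances referenced above can be sketched in a few lines. This is a minimal illustration of the standard EVM arithmetic; the dollar figures are hypothetical, not drawn from any program discussed here:

```python
# Hypothetical status-date figures (dollars), for illustration only.
BCWS = 100_000  # planned value (PV): budgeted cost of work scheduled
BCWP = 90_000   # earned value (EV): budgeted cost of work performed
ACWP = 110_000  # actual cost (AC): actual cost of work performed

cost_variance = BCWP - ACWP       # CV: negative means over cost
schedule_variance = BCWP - BCWS   # SV: negative means behind plan, in dollars

print(f"CV = {cost_variance:+,}")        # -20,000: over cost
print(f"SV = {schedule_variance:+,}")    # -10,000: behind plan
```

Note that SV here is expressed in dollars, not time, which is precisely why it differs from the measures of progress recorded against the IMS.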
To date, the most significant proposal advanced to reconcile the two different measures is the method known as earned schedule, put forth by Walt Lipke of the Oklahoma City Air Logistics Center. But the method hasn't been entirely embraced. Studies have shown that it has its own limitations, and that it is a good supplement to the measures currently in use, not a substitute for them.
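The core idea of earned schedule is to convert earned value into time units by asking when the baseline planned for the current EV to have been earned. A minimal sketch of that idea follows; the cumulative planned-value curve and the earned value are hypothetical, and this is an illustration of the concept rather than a full implementation of Lipke's method:

```python
# Sketch of the earned-schedule concept: find the time at which the
# baseline plan called for the currently earned value to be achieved.

def earned_schedule(pv_cum, ev):
    """ES in periods, by linear interpolation on the cumulative PV curve.

    pv_cum[i] is cumulative planned value at the end of period i.
    """
    n = 0
    while n + 1 < len(pv_cum) and pv_cum[n + 1] <= ev:
        n += 1
    if n + 1 >= len(pv_cum):
        return float(n)  # EV meets or exceeds the full plan
    # Interpolate within the period containing EV.
    return n + (ev - pv_cum[n]) / (pv_cum[n + 1] - pv_cum[n])

pv_cum = [0, 100, 250, 450, 700]  # hypothetical cumulative PV by month
ev = 300                          # hypothetical EV at actual time AT = 3
es = earned_schedule(pv_cum, ev)
sv_t = es - 3                     # SV(t): negative means behind, in months
print(f"ES = {es:.2f} months, SV(t) = {sv_t:+.2f} months")
```

Because SV(t) is denominated in time rather than dollars, it speaks the schedule's language, which is the method's appeal; its limitations noted in the studies above are a separate matter.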
Thus, we are still left with the need to make a strong, logical, and cohesive connection between cost and schedule in our planning. The baseline plans constructed for the IMS and the PMB do not stand apart, or at least should not. They are instead the end result of a continuum in the construction of our project systems. As such, there should be a tie between cost and schedule that allows us to determine the proper amount of margin, buffer, and contingency in a manner that is consistent across both sub-system artifacts.
This is where risk comes in: the correct assessment of risk at the appropriate level of measurement, given that our measures of performance rest on different denominators. For schedule margin, in Mr. Qualls' presentation, that assessment is the Schedule Risk Analysis (SRA). But this then leads us to ask how an SRA should be done.
Fortuitously, during this same meeting, Andrew Uhlig of Raytheon Missile Systems gave an interesting presentation on historical SRA results, building models from those results, and using them to inform current projects. What most impressed me in this presentation was his finding that actual schedule performance does not conform to any of the usual distribution curves found in the standard models. Instead of normal, triangular, or PERT distributions, what he found was a spike, in which a large percentage of completions fell exactly on the planned duration. The distribution was thus skewed around the spike, with the late durations (the right tail) much longer than the left.
What is essential about Mr. Uhlig's work is that, rather than using small samples with their attendant biases, he uses empirical data to inform his analysis. Reliance on small, biased samples is a pervasive problem in project management. Mr. Qualls makes the same point in his own presentation using the Jordan-era Chicago Bulls: each subsequent win, combined with probabilities showing that the team could win all 82 games, does not mean that they will actually perform the feat. In reality, the probability of this occurring is quite small. Glen Alleman at his Herding Cats blog covers this same issue, emphasizing the need for empirical data.
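The Bulls example rests on simple compounding, which is easy to demonstrate. The per-game win probability below is an assumption for illustration; the presentation's actual figures are not reproduced here:

```python
# Compounding illustration: even a dominant per-game win probability
# makes a perfect 82-game season extremely unlikely.
# The 0.90 per-game figure is an assumed value, not from the presentation.
p_win = 0.90
p_perfect_season = p_win ** 82
print(f"P(82-0 season) = {p_perfect_season:.6f}")  # roughly 0.000177
```

The same logic applies to a chain of project activities: a high probability of each activity finishing on time still compounds into a low probability that the whole chain does.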
The results of the Uhlig presentation are interesting not only because they call into question results obtained from the three common distributions used in Monte Carlo schedule risk analysis, but also because they may suggest, in my opinion, an observation or reporting bias. Discrete distribution methods, as Mr. Uhlig proposes, will properly model the distribution for such cases in our parametric analysis. But they will not reflect the quality of the data collected.
Short-duration activities are designed to overcome subjectivity through their structure: the shorter the duration and the more discrete the work being measured, the less likely the "gaming" of the system. But if we find, as Mr. Uhlig does, that 29% of 20-day activities report a duration of exactly 20 days, then there is a need to test the validity of the spike itself. The spike is not necessarily wrong. Perhaps the structure of the short duration, combined with the discrete nature of its linkage to work, has done its job; one would then expect a short tail to the left of the spike and a long tail to the right. But there is also a possibility that variation around the target duration is being judged as "close enough" to warrant a report of completion at day 20.
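Sampling from an empirical discrete distribution of this shape, rather than a triangular or PERT curve, is straightforward in a Monte Carlo simulation. The sketch below assumes an illustrative distribution, loosely shaped like the finding described (a 29% spike at the 20-day plan with a long right tail), not Mr. Uhlig's actual data:

```python
import random

# Illustrative empirical distribution for a 20-day planned activity:
# a spike at the plan and a long right tail. Probabilities are assumed
# for this sketch, not taken from the presentation.
durations = [18, 19, 20, 21, 22, 24, 27, 30]
weights   = [0.04, 0.07, 0.29, 0.18, 0.14, 0.13, 0.09, 0.06]

random.seed(42)  # repeatable runs for the sketch

def sample_duration():
    """Draw one activity duration from the discrete distribution."""
    return random.choices(durations, weights=weights, k=1)[0]

# Monte Carlo over a serial chain of 10 such activities (plan: 200 days).
trials = [sum(sample_duration() for _ in range(10)) for _ in range(5000)]
mean_total = sum(trials) / len(trials)
print(f"Mean simulated chain duration: {mean_total:.1f} days vs 200-day plan")
```

Even with the spike at the planned duration, the right skew drags the expected chain duration past the plan, which is the practical reason the shape of the distribution matters for sizing schedule margin.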
So does this pass the "So what?" test? Yes, if only because we know that the combined inertia of all the work performed at any one time on the project will eventually be realized as a larger amount of risk in proportion to the remaining work. If reported results are pushing risk to the right because reported performance is optimistic against actual performance, then we will get false positives. If reported performance is pessimistic against actual performance (a less likely scenario, in my opinion), then we will get false negatives.
But regardless of these further inquiries that I think need to be made, regarding both the linkage between cost and schedule and the validity of SRA results, we now have two positive steps toward clarifying areas that have perplexed project managers in the past. Properly distinguishing schedule reserve, margin, buffer, and contingency, combined with properly conducted SRAs using discrete distributions based on actual historical results, will go quite far toward introducing better predictive measures into project management.