When we wake up in the morning we enter the day with a set of assumptions about ourselves, our environment, and the world around us. So too when we undertake projects. I’ve just returned from the latest NDIA IPMD meeting in Washington, D.C., and the most intriguing presentation at the meeting was given by Irv Blickstein regarding a RAND root cause analysis of major program breaches. In short, a major cost breach is defined by the Nunn-McCurdy amendment, first passed in 1982, as occurring when a major defense program exceeds its projected baseline cost by more than 15%.
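As a back-of-the-envelope illustration of that threshold (the program figures below are hypothetical, not drawn from the RAND report), the Nunn-McCurdy test reduces to a simple percentage comparison against the baseline:

```python
def nunn_mccurdy_breach(baseline_cost: float, current_estimate: float,
                        threshold: float = 0.15) -> bool:
    """Return True if the current estimate exceeds the baseline cost
    by more than the breach threshold (15% per the text above)."""
    growth = (current_estimate - baseline_cost) / baseline_cost
    return growth > threshold

# Hypothetical program: $10.0B baseline, $11.8B current estimate -> 18% growth
print(nunn_mccurdy_breach(10.0, 11.8))  # True: 18% exceeds the 15% threshold
```

The real statute distinguishes degrees of breach and baselines, so this sketch captures only the basic arithmetic of the 15% test.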
The issue of what constitutes programmatic success and failure has generated a fair amount of discussion among the readers of this blog. The report, which is linked above, is full of useful information regarding Major Defense Acquisition Program (MDAP) breaches under Nunn-McCurdy, but for the purposes of this post readers should turn to page 83. In setting up a project (or program), project/program managers must make a set of assumptions regarding the “uncertain elements of program execution” centered around cost, technical performance, and schedule. These are what are referred to as “framing assumptions.”
A framing assumption is one for which there are signposts along the way to determine whether an assumption regarding the project/program has changed over time. Thus, according to the authors, the precise definition of a framing assumption is “any explicit or implicit assumption that is central in shaping cost, schedule, or performance expectations.” An interesting aspect of their perspective and study is that the three-legged stool of program performance relegates risk to a method that informs the three key elements of program execution, not one of the three elements itself. I have engaged in several conversations over the last two weeks regarding this issue. Oftentimes the question goes: can’t we incorporate technical performance as an element of risk? Short answer: no, you can’t (or shouldn’t). Long answer: risk is a set of methods for overcoming the implicit invalidity of single-point estimates found in too many of the systems in use (such as estimates-at-complete, estimates-to-complete, and the various indices found in earned value management), as well as a means of incorporating qualitative environmental factors not otherwise categorizable. It is not an element essential to defining the end item being developed and produced. Looked at another way: if you are writing a performance specification, then performance is a key determinant of program success.
Additional criteria for a framing assumption are also provided in the RAND study. The assumptions must be determinative: the consequences of the assumption being wrong significantly affect the program in an essential way. They must be unmitigable: the consequences of the assumption being wrong are unavoidable. They must be uncertain: whether the assumption will prove right or wrong cannot be determined in advance. They must be independent, not contingent on another event or series of events. Finally, they must be distinctive, setting the program apart from other efforts.
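The five criteria above lend themselves to a simple screening checklist. The sketch below is my own illustration, not a RAND artifact, and the example assumption is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class FramingAssumption:
    """A candidate framing assumption scored against the study's five criteria."""
    statement: str
    determinative: bool  # being wrong significantly affects the program
    unmitigable: bool    # consequences of being wrong are unavoidable
    uncertain: bool      # validity cannot be determined in advance
    independent: bool    # not contingent on another event or series of events
    distinctive: bool    # sets the program apart from other efforts

    def qualifies(self) -> bool:
        """A statement is a framing assumption only if it meets all five criteria."""
        return all([self.determinative, self.unmitigable, self.uncertain,
                    self.independent, self.distinctive])

# Hypothetical example: an engine-commonality assumption
fa = FramingAssumption(
    statement="The new airframe can reuse the legacy engine without modification",
    determinative=True, unmitigable=True, uncertain=True,
    independent=True, distinctive=True)
print(fa.qualifies())  # True
```

A statement failing any one criterion would be an ordinary programmatic assumption or risk item rather than a framing assumption under this definition.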
RAND then applied the framing assumption methodology to a number of programs. The latest NDIA meeting was an opportunity to provide an update of conclusions based on the work first done in 2013. What the researchers found was that framing assumptions should be kept at a high level, developed early in a program’s life cycle, and reviewed on a regular basis to determine their continued validity. They also found that programs breached the threshold when a framing assumption became invalid. Project and program managers and requirements personnel have at least intuitively known this for quite some time. Over the years, this is the reason given for the requirements changes and contract modifications over the course of development that result in cost, performance, and schedule impacts.
What is different about the RAND study is that it outlines a practical process for making these determinations early enough for a project/program to be adjusted to changing circumstances. For example, the number of framing assumptions for each MDAP in the study could be boiled down to four or five, which are easily tested against reality during the milestone and other reviews held over the course of a program. This is particularly important given the lengthened time-frames of major acquisitions from development to production.
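The review step described above can be sketched as a simple filter run at each milestone. This is my own illustration of the idea, not the study's process; the assumption names and the validity judgment are hypothetical:

```python
def review_assumptions(assumptions, still_valid):
    """At a milestone review, test every framing assumption against reality.

    `assumptions` is a list of assumption statements; `still_valid` is a
    caller-supplied predicate (hypothetical here) encoding the review team's
    judgment. Returns the assumptions flagged as invalid -- the early-warning
    signal the study associates with later breaches.
    """
    return [a for a in assumptions if not still_valid(a)]

# Hypothetical: four high-level assumptions, one of which has gone stale
assumptions = ["engine reuse", "stable requirements",
               "mature software baseline", "single production line"]
flagged = review_assumptions(assumptions, lambda a: a != "stable requirements")
print(flagged)  # ['stable requirements']
```

Because the study found only four or five framing assumptions per MDAP, a check of this shape stays tractable at every review rather than becoming another compliance exercise.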
Looking at these results, my own observation is that this is a useful tool for identifying needed course corrections before they manifest as cost and schedule impacts, particularly given that leadership at PARCA has been stressing agile acquisition strategies. The goal here, it seems, is to allow for course corrections before the inertia of the effort leads to failure or–more likely–the development and deployment of an end item that does not entirely meet the needs of the Defense Department. (That such “disappointments” often far outstrip the capabilities of our adversaries is a topic for a different post.)
I think the jury is still out on whether course corrections, given the inertia of work and effort already expended at the point a framing assumption is tested as invalid, can ever truly be offsetting to the point of avoiding a breach–unless we rebrand the existing effort as a new program once it has modified its structure to account for new framing assumptions. Study after study has shown that project performance is pretty well baked in at the 20% mark. For MDAPs, much of the front-loaded effort in technology selection and application has been made by that point. After all, systems require inputs, and to change a system requires more inputs, not fewer, to overcome the inertia of all the previous effort, not to mention work in progress. This is basic physics whether we are dealing with physical systems or complex adaptive (economic) systems.
Certainly, more efficient technology that affects the units of measurement within program performance can result in cost savings or avoidance, but that is usually not the case. There is a bit of magical thinking here: the belief that commercial technologies will provide a breakthrough allowing for such a positive effect. This is an ideological position not borne out by reality. The fact is that most of the significant technological breakthroughs we have seen over the last 70 years–from the microchip to the internet and now to drones–have resulted from public investments, sometimes in public-private ventures, sometimes in seeded technologies later released into the public domain. The purpose of most developmental programs is to invest in R&D to organically develop technologies (utilizing the talents of the quasi-private A&D industry) or to provide economic incentives to incorporate technologies that do not currently exist.
Regardless, the RAND study has identified an important concept in determining the root causes of overruns. It seems to me that a formalized process of identifying framing assumptions should be applied at the inception of the program. The majority of the assessments testing those framing assumptions should then be made prior to the 20% mark as measured by program schedule and effort. It is easier and more realistic to overcome the bow-wave of effort at that point than further down the line.
Note: I have modified the post to clarify my analysis of the “three-legged stool” of program performance in regard to where risk resides.