A lot of ink has been devoted to what constitutes “best practices” in PM, but these discussions are often diverted into overtly commercial activities that promote a set of products or trademarked services which are, in actuality, well-trod project management techniques given a fancy name or acronym. We see this often with “road shows” and “seminars” that are blatant marketing events. This undermines the desire of PM professionals to find out what really gives us good information, both by getting in the way of new synergies and by tying “best practices” to proprietary solutions. All too often “common practice” and “proprietary limitations” pass for “best practice.”
Recently I have been involved in discussions, and in the formulation of guides, on indicators that tell us something important regarding the condition of the project throughout its life cycle. All too often the conversation settles on earned value, with the proposition that all indicators lead back to it. But this is an error: earned value is but one method for determining performance, and it looks at only one dimension of the project.
There are, after all, other obvious processes and plans that measure different dimensions of project performance. The first such example is schedule performance. A few years ago there was an attempt to tie schedule and cost more closely together as an earned value metric, which was and is called “earned schedule.” It had many strengths against what was posited as its alternative: schedule variance as calculated by earned value. But both labels are misnomers, even when earned schedule is offered as an alternative to earned value while at the same time adhering to its methods. Neither measures schedule, that is, time-based performance against a plan consisting of activities. The two artifacts can never be reconciled and reduced to one metric because they measure different things. The statistical measure that would result would have no basis in reality, adding an unnecessary statistical layer that obfuscates rather than clarifies the underlying condition. So what do we look at, you may ask? Well, the schedule. The schedule itself contains many opportunities to measure its dimension in order to develop useful metrics and indicators.
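To make the distinction concrete, here is a minimal sketch using hypothetical cumulative planned-value and earned-value figures. It shows why EVM schedule variance is denominated in cost, and why earned schedule, though it converts the same data into time units via the standard interpolation, remains an earned value construct rather than a measure of the schedule itself:

```python
# Hypothetical illustrative data: cumulative planned value (PV) by month,
# and cumulative earned value (EV) observed at month 5.
pv = [100, 250, 450, 700, 1000]  # cumulative PV at end of months 1..5
ev = 520                          # cumulative EV at end of month 5

# Schedule variance per earned value: a cost-denominated number, not time.
sv = ev - pv[-1]   # 520 - 1000 = -480 currency units

# Earned schedule: find the point on the PV curve where the plan called
# for the value actually earned, interpolating within the month.
c = max(i for i, v in enumerate(pv) if v <= ev)     # last month with PV <= EV
es = (c + 1) + (ev - pv[c]) / (pv[c + 1] - pv[c])   # in months
sv_t = es - len(pv)                                 # time-based variance

print(f"SV = {sv:+} (currency units)")
print(f"ES = {es:.2f} months; SV(t) = {sv_t:+.2f} months")
```

Both numbers describe the earned value dimension; neither examines the network of activities, durations, and logic that make up the schedule itself, which is the author's point above.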
For example, a number of these indicators have been in place for quite some time: Baseline Execution Index (BEI), Critical Path Length Index (CPLI), early start/late start, early finish/late finish, bow-wave analysis, hit-miss indices, and others. All of these are documented in the project management literature.
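As an illustration, two of these indicators have simple closed forms. The formulas below follow the commonly cited definitions (e.g., in the DCMA 14-point schedule assessment); the task counts and durations are hypothetical:

```python
def baseline_execution_index(completed: int, baselined_to_complete: int) -> float:
    """BEI: tasks actually completed vs. tasks the baseline said should
    be complete by now. Below 1.0 means execution lags the baseline."""
    return completed / baselined_to_complete

def critical_path_length_index(cp_length_days: int, total_float_days: int) -> float:
    """CPLI: (critical path length + total float) / critical path length.
    1.0 means on plan; below 1.0 signals pressure on the finish date."""
    return (cp_length_days + total_float_days) / cp_length_days

# Hypothetical status-period figures:
bei = baseline_execution_index(completed=45, baselined_to_complete=50)
cpli = critical_path_length_index(cp_length_days=200, total_float_days=-10)
print(f"BEI = {bei:.2f}, CPLI = {cpli:.2f}")
```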
Typically, then, the first step toward integration is tying these different metrics and indicators of the schedule and EVM dimensions together at an appropriate level through the WBS or other structures. Juxtaposing these differing dimensions, particularly in a grid or Gantt chart, gives us the ability to determine whether there is a correlation between the various indicators. We can then determine, over time, the strength and consistency of those correlations, and from there work out which relationships actually reflect causation. Only then do we get to “best practice.” This hard work to get to best practice is still in its infancy.
But this is only the first step toward “integrated” performance measurement. There are other areas of integration that are needed to give us a multidimensional view of what is happening in terms of project performance. Risk is certainly one additional area–and a commonly heard one–but I want to take this a step further.
Among my various jobs in the past was business management within a project management organization. This usually translated into financial management, but not traditional financial management that focuses on the needs of the enterprise. Instead, I am referring to project financial management, which is a horse of a different color: it is focused at the micro-programmatic level on both schedule and resource management, given that planned activities and the resources assigned to them must be funded.
Thus, having the funding in place to execute the work is the antecedent and, I would argue, the overriding factor in project success. Outside of construction project management, where the focus on cash flow is a truism, we see this play out in publicly funded project management through the budget hearing process. Even when we are dealing with multiyear R&D funding, the project goes through this same process. During each review, financial risk is assessed to ensure that work is being performed and budget (program) is being executed. Earned value will determine the variance between the financial plan and the value of the execution, but the level of funding, or cash flow, will determine what gets done during any particular period of time. The burn rate (expenditure) is the proof that things are getting done, even if the value may be less than what is actually expended.
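A toy illustration of that distinction, with hypothetical period figures: the burn rate shows money moving, while the comparisons against plan and earned value show whether that spending matches the funding released and the work accomplished:

```python
# Hypothetical per-period figures, all in the same currency units:
budgeted = [100, 100, 100]   # funding planned for release each period
actuals  = [ 95, 110, 120]   # actual expenditures (the burn rate)
earned   = [ 90, 100, 105]   # value of work performed (EV)

for t, (b, a, e) in enumerate(zip(budgeted, actuals, earned), start=1):
    # Spending above plan with EV below spend: work is happening,
    # but the value earned lags what is being expended.
    print(f"period {t}: burn {a}, funding variance {b - a:+}, "
          f"cost variance (EV - AC) {e - a:+}")
```

Here the project burns faster than the funding plan in later periods while earning less value than it spends, exactly the situation where the availability of funds, not the baseline metrics, decides what gets done next period.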
In public funding of projects, especially in A&D, the proper “color” of money (R&D, Operations & Maintenance, etc.) available at the right time is oftentimes a better predictor of project success than the metrics and indicators that assume the planned budget, schedule, and resources will be provided to support the baseline. But things change, including the appropriation and release of funds. As a result, any “best practice” that confines itself to only one or two of the dimensions of project assessment fails to meet the definition.
In the words of Gus Grissom in The Right Stuff, “No bucks, no Buck Rogers.”