Close to the (Leading) Edge

Much has been made in project management circles of the claim that most of the indicators we have to work with are lagging indicators: they record what happened but don't inform the future. But is that true in most cases? If not, which indicators are leading, and what new leading indicators do we need in constructing a project management toolkit, those at the leading edge?

To determine this we first need to weed out those measures that are neither lagging nor leading. These are diagnostic measures that simply indicate the health of a system: inventory accuracy in the latest warehouse samples, the number of orders successfully fulfilled on the initial contact, and the like. These are often most effective in tracking iterative, industrial-like activities.

Even leading indicators differ. A leading indicator for a project is not like a leading indicator for a larger system. Even complex projects possess limited data points for assessing performance. For some of these projects, where the span of control does not exceed the capabilities of a leader, this usually does not matter. In those cases very simple indicators are quite good at determining trends, as long as the appropriate measures are used. It is also at this level that large changes can occur very rapidly, since minor adjustments to the project, the underlying technology, or personnel dynamics will determine the difference between success and failure. Thus, our models of social systems at this level often assume rational human behavior or, to use the colloquialism, “all things being equal,” in order to predict trends. This is also the level at which our projections are valid only over shorter spans, since overwhelming external or internal events or systems can present risks that the simple system will not be able to fully overcome, when “all things are not equal.”

More complex systems, particularly in project management, are able to employ risk-handling methodologies, either overtly or unconsciously. Needless to say, the former usually garners more reliable and positive results. In this environment the complaint leveled against EVM is that it is “merely looking in the rear view mirror,” with the implication that it is of little value to project or program managers. But is this entirely true? I suspect that some of this is excuse peddling rooted in what I refer to as the Cult of Optimism.

There is no doubt that there is a great deal of emphasis on what has occurred in the past. Calculations of work performed, resources expended, and time passed all record what has happened. Many of these are lagging indicators, but they also contain essential information for informing the future. The key to identifying a leading indicator is understanding the dynamics of causation. Some lagging indicators overlap. Some are better than others at measuring a particular dimension of the project. It is important to know the difference so that the basis for leading indicators can be selected.
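To make the point concrete, the basic earned value calculations show how “rear view mirror” data immediately generate a forward-looking estimate. Here is a minimal sketch using the standard EVM formulas; the dollar figures are invented for illustration:

```python
# Standard EVM measures computed from lagging data (illustrative numbers).
BAC = 1_000_000  # budget at completion
PV = 400_000     # planned value of work scheduled to date
EV = 350_000     # earned value of work actually performed
AC = 420_000     # actual cost of work performed

CPI = EV / AC  # cost efficiency to date (~0.83)
SPI = EV / PV  # schedule efficiency to date (~0.88)

# The same backward-looking data projected forward:
EAC = BAC / CPI  # estimate at completion, assuming current efficiency holds
ETC = EAC - AC   # estimate to complete

print(f"CPI={CPI:.2f}, SPI={SPI:.2f}, EAC={EAC:,.0f}, ETC={ETC:,.0f}")
```

The EAC here is only as good as the assumption embedded in it, namely that the cost efficiency observed so far will persist, which is exactly where the question of causation comes in.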

For example, regression is a popular method for generating EAC/ETC predictions. The problem is that these results are often based on very weak correlation and causality. Regression is a linear method being used to measure outcomes in a non-linear system. This then causes us to introduce other methods to strengthen the credibility of the estimate through engineering and expert opinion. Where possible, parametrics are introduced. But does this get us there? I would posit that it doesn't, because it is simply a way of throwing everything at hand at the issue in hopes that something hits the target. Instead, our methods must be more precise and our selection methods proven. Starting with something like the Granger causality test, which compares two time series and determines whether one is predictive of the other, would be useful in separating the wheat from the chaff.
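As a hedged sketch of what such screening might look like, here is the grangercausalitytests routine from statsmodels applied to synthetic series. The data, the two-period lead, and the noise level are all invented for illustration; a real toolkit would run the test against actual project time series after checking them for stationarity:

```python
# Screening a candidate leading indicator with a Granger causality test.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)

raw = rng.normal(size=62)                             # stationary driver series
indicator = raw[2:]                                   # candidate leading series
outcome = raw[:-2] + rng.normal(scale=0.25, size=60)  # trails it by two periods

# grangercausalitytests expects a two-column array and tests whether the
# SECOND column helps predict the FIRST beyond the first's own history.
data = np.column_stack([outcome, indicator])
results = grangercausalitytests(data, maxlag=4, verbose=False)

# Report the F-test p-value at each lag; small p-values suggest the
# indicator carries predictive information about the outcome.
for lag, (tests, _) in results.items():
    f_stat, p_value = tests["ssr_ftest"][:2]
    print(f"lag={lag}: F={f_stat:.2f}, p={p_value:.4f}")
```

With these synthetic series the test should flag the two-period lag strongly, which is the behavior we want from a measure before promoting it to leading-indicator status.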

Years ago, for example, I headed a project to identify and integrate technical performance into project management performance measures. Part of the study was a retrospective analysis to determine correlation and causality between key technical performance measures (TPMs) and project performance over the life of a project. What we found is that carefully chosen TPMs are strong indicators of future performance when informed by probabilities over the time series.
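The mechanics of such a retrospective are easy to illustrate. What follows is an invented illustration, not the study's data or method: for a set of completed projects, correlate the TPM readings at each reporting period with final cost performance and observe how early the relationship becomes statistically meaningful.

```python
# Toy retrospective: when does a TPM start predicting final performance?
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)

n_projects, n_periods = 30, 12
# Hypothetical final cost performance index for each completed project.
final_cpi = rng.normal(loc=0.95, scale=0.10, size=n_projects)
# Hypothetical TPM achievement ratios (achieved/planned) by period,
# drifting toward each project's eventual cost performance.
tpm = np.array([
    np.linspace(1.0, cpi, n_periods) + rng.normal(scale=0.05, size=n_periods)
    for cpi in final_cpi
])

# Correlate the TPM at each period with the final outcome across projects.
for period in range(n_periods):
    r, p = pearsonr(tpm[:, period], final_cpi)
    print(f"period {period + 1:2d}: r={r:+.2f}, p={p:.4f}")
```

In this toy setup the correlation strengthens period by period; in a real retrospective, the period at which a TPM becomes significant tells you how much early warning it actually provides.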

So is it true that we are “merely” looking in the rear view mirror? No. We only have the past to inform the future, so the criticism misses the point. Besides, “the past is never dead. It's not even past…” The aggregation of our efforts to achieve the goals of the project will determine the likely outcomes in the future. So the challenge isn't looking in the rear view mirror (after all, in real life we do this all the time to see what's gaining on us) but picking those elements from past performance that, given correlation and causality, will inform the probabilities of future outcomes.
