What’s Your Number (More on Metrics)

Comments on my last post mainly centered on one question: am I saying that we shouldn't do assessments or analysis?  No.  But it is important to define our terms a bit better and to recognize that the things we monitor are not all equal and are not measuring the same kind of thing.

As I have written here, our metrics fall into categories, but each has a different role or nature, and they are generally rooted in two concepts.  These concepts are quality assurance (QA) and quality control (QC), and they are not one and the same.  As our specialties have fallen away over time, the distinction between QA and QC has been lost.  The evidence of this confusion can be found not only in Wikipedia here and here, but also in project management discussion groups such as here and here.

QA measures the quality of the processes involved in the development and production of an end item.  It tends to be a proactive process and, therefore, looks for early warning indicators.

QC measures the quality of the products themselves.  It is a reactive process focused on detecting and correcting defects.

A large part of the confusion, as it relates to project management, is that QA and QC have their roots in iterative, production-focused activities.  So knowing which subsystem within the overall project management system we are measuring is important in understanding whether the measure serves a QA or a QC purpose, that is, whether it has a QA or QC effect.

Generally, in management, we categorize our metrics into groupings based on their purpose.  There are Key Performance Indicators (KPIs), which can be categorized as diagnostic indicators, lagging indicators, and leading indicators.  There are Key Risk Indicators (KRIs), which measure the potential for future adverse impacts.  KRIs are qualitative and quantitative measures of risks that must be handled or mitigated in our plans.

KPIs and KRIs can serve both QA and QC purposes, and it is important to know the difference so that we understand what a given metric is telling us.  Nor is the dichotomy between these effects a closed one: QC is meant to drive improvements in our processes so that we can (ideally) shift to QA measures, ensuring that our processes will produce a high-quality product.

When it comes to the measurement of project management artifacts, our metrics regarding artifact quality, such as those applied to the Integrated Master Schedule (IMS), are actually measures rooted in QC.  The defect has already occurred in a product (the IMS), and now we must go back and fix it.  This is not to say that QC is not an essential function.

It just seems to me that we are now sophisticated enough to establish systems in the construction of the IMS and other artifacts that are proactive (avoiding errors) rather than reactive (fixing errors).  And, yes, at the next meetings and conferences I will present some ideas on how to do that.

 

Close to the (Leading) Edge

Much has been made in project management circles of the claim that most of the indicators we have to work with are lagging indicators: indicators that record what happened but do not inform the future.  But is that true in most cases?  If not, which indicators are leading, and what new leading indicators do we need to be looking at in constructing a project management toolkit, those at the leading edge?

In order to determine this, we also need to weed out those measures that are neither lagging nor leading.  These are diagnostic measures, which simply indicate the current health of a system: for example, inventory accuracy in the latest warehouse sample, or the number of orders successfully fulfilled on the initial contact.  These are oftentimes most effectively used in tracking iterative, industrial-like activities.

Even leading indicators differ.  A leading indicator for a project is not like a leading indicator for larger systems.  Even complex projects possess a limited number of data points for assessing performance.  For some of these projects, where the span of control does not exceed the capabilities of a leader, this usually does not matter.  In those cases, very simple indicators are quite good at determining trends, as long as the appropriate measures are used.  It is also at this level that large changes can occur very rapidly, since minor adjustments to the project, the underlying technology, or personnel dynamics can determine the difference between success and failure.  Thus, our models of social systems at this level often assume rational human behavior or, to use the colloquialism, “all things being equal,” in order to predict trends.  This is also the level at which our projections are valid only over shorter spans, since overwhelming external or internal events or systems can present risks that the simple system will not be able to fully overcome, when “all things are not equal.”

More complex systems, particularly in project management, are able to employ risk handling methodologies, either overtly or unconsciously.  Needless to say, the former usually garners more reliable and positive results.  In this environment the complaint against earned value management (EVM) is that it is “merely looking in the rear view mirror,” with the implication that it is not of great value to project or program managers.  But is this entirely true?  I suspect that some of this is excuse peddling rooted in what I refer to as the Cult of Optimism.

There is no doubt that there is a great deal of emphasis on what has occurred in the past.  Calculations of work performed, resources expended, and time elapsed: all of these record what has happened.  Many of these are lagging indicators, but they also contain essential information for informing the future.  The key to understanding what makes a leading indicator is understanding the dynamics of causation.  Some lagging indicators overlap; some are better at measuring a particular dimension of the project than others.  It is important to know the difference so that the basis for leading indicators can be selected.
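To make the distinction concrete, here is a minimal sketch in Python, using illustrative numbers of my own rather than any real program data, of the standard earned value quantities.  Each index is computed entirely from what has already happened, yet the cost performance index immediately feeds a simple, forward-looking estimate at completion.

```python
# Minimal sketch of standard earned value calculations (illustrative numbers only).
# EV, AC, and PV all record the past; CPI and SPI summarize it; the CPI-based EAC
# is one common way that record is projected forward.

def evm_summary(ev: float, ac: float, pv: float, bac: float) -> dict:
    """Compute basic EVM indices from cumulative-to-date values."""
    cpi = ev / ac    # cost efficiency to date (lagging)
    spi = ev / pv    # schedule efficiency to date (lagging)
    eac = bac / cpi  # CPI-based estimate at completion (a simple projection)
    etc = eac - ac   # estimate to complete
    return {"CPI": cpi, "SPI": spi, "EAC": eac, "ETC": etc}

# Hypothetical status: $400k of work performed, $500k spent, $450k planned, $1.2M budget.
print(evm_summary(ev=400_000, ac=500_000, pv=450_000, bac=1_200_000))
# -> CPI = 0.8, SPI ≈ 0.89, EAC = $1.5M, ETC = $1.0M
```

The CPI-based EAC is only one of several common formulas, but it illustrates the point: even a lagging record can be projected forward once we have reason to believe the underlying efficiency will persist.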

For example, regression is a popular method for producing estimate at completion (EAC) and estimate to complete (ETC) predictions.  The problem is that these results often rest on very weak correlation and causality.  Regression is a linear method being used to measure outcomes in a non-linear system.  This then leads us to introduce other methods, engineering judgment and expert opinion, to strengthen the credibility of the estimate, and, when possible, parametrics.  But does this get us there?  I would posit that it doesn't, because it is simply a way of throwing everything at hand at the issue in the hope that something hits the target.  Instead, our methods must be more precise and our selection criteria proven.  Starting with something like the Granger causality test, which compares two time series and determines whether one is predictive of the other, would be useful in separating the wheat from the chaff.
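As an illustration of that kind of screening, here is a minimal sketch using the Granger causality test from the statsmodels Python library.  The series names (`tpm_series`, `cpi_series`) are hypothetical placeholders; the test assumes roughly stationary series and linear relationships, and a small p-value is a screening signal, not proof of causation.

```python
# Sketch: screen a candidate leading indicator with a Granger causality test.
# Assumes two aligned, roughly stationary time series (difference them first if not).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_pvalues(candidate, target, max_lag=4):
    """Return {lag: p-value} testing whether `candidate` helps predict `target`."""
    # grangercausalitytests expects a two-column array and tests whether the
    # second column Granger-causes the first (it also prints a summary by default).
    data = np.column_stack([target, candidate])
    results = grangercausalitytests(data, maxlag=max_lag)
    return {lag: res[0]["ssr_ftest"][1] for lag, res in results.items()}

# Hypothetical usage with placeholder series:
# pvals = granger_pvalues(tpm_series, cpi_series, max_lag=6)
# Small p-values at some lag suggest the candidate is worth keeping as a leading indicator.
```

The point of such a screen is to narrow the field of candidate indicators before the more expensive engineering and parametric work, not to replace it.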

For example, years ago I headed a project to identify technical performance measures (TPMs) and integrate them into project management performance measurement.  Part of the study we conducted included a retrospective analysis to determine correlation and causality between key TPMs over the life of a project and its performance over time.  What we found is that, carefully chosen, TPMs are a strong indicator of future performance when informed by probabilities over the time series.
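I cannot reproduce that study here, but the flavor of such a retrospective check is easy to sketch.  Assuming a hypothetical pandas DataFrame of monthly project history with placeholder columns `tpm` and `cpi` (not the study's data), the snippet below asks how strongly earlier TPM values correlate with later cost performance.

```python
# Sketch of a retrospective lag analysis: does an earlier TPM value
# correlate with later cost performance? (Placeholder column names and data.)
import pandas as pd

def lagged_correlations(df: pd.DataFrame, leading: str, lagging: str, max_lag: int = 6):
    """Correlate `leading` shifted forward by 1..max_lag periods against `lagging`."""
    return {
        lag: df[leading].shift(lag).corr(df[lagging])
        for lag in range(1, max_lag + 1)
    }

# Hypothetical usage:
# corrs = lagged_correlations(project_history, leading="tpm", lagging="cpi")
# Consistently strong correlation at positive lags is a hint (not proof) that the
# TPM is acting as a leading indicator and merits the causality screening above.
```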

So is it true that we are “merely” looking in the rear view mirror?  No.  We only have the past to inform the future, so the complaint misses the point.  Besides, “the past is never dead.  It’s not even past…”  The aggregation of our efforts to achieve the goals of the project will determine the likely outcomes in the future.  So the challenge isn’t looking in the rear view mirror–after all, in real life we do this all the time to see what’s gaining on us–but picking those elements from past performance that, given correlation and causality, will inform the probabilities of future outcomes.

Note:  Grammatical changes made to the post.