What’s Your Number (More on Metrics)

Comments on my last post mainly centered on one question: am I saying that we shouldn't do assessments or analysis?  No.  But it is important to define our terms a bit better and to recognize that the things we monitor are not equivalent and do not measure the same kind of thing.

As I have written here, our metrics fall into categories, but each category has a different role or nature, and all of them are generally rooted in two concepts: quality assurance (QA) and quality control (QC).  These are not one and the same.  As our specialties have fallen away over time, the distinction between QA and QC has been lost.  The evidence of this confusion can be found not only in Wikipedia here and here, but also in project management discussion groups such as here and here.

QA measures the quality of the processes involved in the development and production of an end item.  It tends to be a proactive process and, therefore, looks for early warning indicators.

QC measures the quality of the products themselves.  It is a reactive process and is focused on defect correction.

A large part of the confusion as it relates to project management is that QA and QC have their roots in iterative, production-focused activities.  So knowing which subsystem within the overall project management system we are measuring is important to understanding whether it serves a QA or a QC purpose, that is, whether it has a QA or a QC effect.

Generally, in management, we categorize our metrics into groupings based on their purpose.  There are Key Performance Indicators (KPIs), which are categorized as diagnostic indicators, lagging indicators, and leading indicators.  There are Key Risk Indicators (KRIs), which measure potential future adverse impacts.  KRIs are qualitative and quantitative measures of risks that must be handled or mitigated in our plans.

KPIs and KRIs can serve either QA or QC purposes, and it is important to know the difference so that we understand what a given metric is telling us.  The dichotomy between these effects is not a hard one: QC is meant to drive improvements in our processes so that we can shift (ideally) to QA measures that ensure our processes will produce a high-quality product.
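To make that taxonomy concrete, here is a minimal Python sketch of how a metrics catalog might tag each indicator with both its KPI/KRI type and the QA or QC role it serves, so a report always tells the reader which kind of signal it is looking at.  All of the names and catalog entries below are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class QualityRole(Enum):
    QA = "quality assurance"   # proactive: measures the process, early warning
    QC = "quality control"     # reactive: measures the product, defect correction

class IndicatorType(Enum):
    LEADING = "leading"        # KPI: predictive of future performance
    LAGGING = "lagging"        # KPI: reports what has already happened
    DIAGNOSTIC = "diagnostic"  # KPI: explains why performance moved
    RISK = "risk"              # KRI: measures potential future adverse impact

@dataclass
class Metric:
    name: str
    indicator_type: IndicatorType
    quality_role: QualityRole

# Hypothetical catalog entries for illustration only.
catalog = [
    Metric("Schedule logic completeness at IMS build", IndicatorType.LEADING, QualityRole.QA),
    Metric("Count of IMS defects found in assessment", IndicatorType.LAGGING, QualityRole.QC),
    Metric("Probability-weighted slip from risk register", IndicatorType.RISK, QualityRole.QA),
]

for m in catalog:
    print(f"{m.name}: {m.indicator_type.value} indicator, {m.quality_role.value}")
```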

When it comes to the measurement of project management artifacts, our metrics regarding artifact quality, such as those applied to the Integrated Master Schedule (IMS), are actually rooted in QC.  The defect has already occurred in a product (the IMS), and now we must go back and fix it.  This is not to say that QC is not an essential function.

It just seems to me that we are sophisticated enough now to establish systems that build quality into the construction of the IMS and other artifacts, that is, to be proactive (avoiding errors) rather than reactive (fixing errors).  And, yes, at the next meetings and conferences I will present some ideas on how to do that.
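As a purely illustrative example of what "proactive" could mean in practice (not a preview of those ideas), here is a minimal sketch of a schedule builder that refuses to accept a defective entry at construction time instead of cataloging the defect for later correction.  The class, field names, and checks are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: str
    duration_days: int
    predecessors: list[str] = field(default_factory=list)

class ScheduleBuilder:
    """Toy schedule builder that rejects defective entries at construction time."""

    def __init__(self) -> None:
        self.tasks: dict[str, Task] = {}

    def add_task(self, task: Task) -> None:
        # Proactive (QA-style) checks: refuse to accept a defect rather than
        # recording it on a correction list for later.
        if task.duration_days <= 0:
            raise ValueError(f"{task.task_id}: duration must be positive")
        for pred in task.predecessors:
            if pred not in self.tasks:
                raise ValueError(f"{task.task_id}: unknown predecessor {pred!r}")
        if self.tasks and not task.predecessors:
            raise ValueError(f"{task.task_id}: missing logic (no predecessor)")
        self.tasks[task.task_id] = task

builder = ScheduleBuilder()
builder.add_task(Task("A", 10))                     # first task, no predecessor required
builder.add_task(Task("B", 5, predecessors=["A"]))  # accepted: logic is present
# builder.add_task(Task("C", 5))                    # rejected: missing logic
```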


Doin’ It Right (On Scheduling)

Been reading quite a bit about assessments and analysis lately: the 14-Point Assessment, project health assessments, and so on.  Assessment and analysis are a necessary part of oversight, particularly for agencies tasked with overseeing contracting efforts involving public funds.  From a project management perspective, however, the only beneficiaries seem to be one-off, best-of-breed software tool manufacturers and some consultants who specialize in this sort of thing.

Assessment of the quality of our antecedent project artifacts is a necessary evil only because the defects in those plans undermine our ability to determine what is really happening in the project.  It is but a band-aid–a temporary patch for a more systemic problem, for we must ask ourselves:  how was a schedule that breached several elements of the 14-Point Assessment constructed in the first place?  This is, of course, a rhetorical question and one well known by most, if not all, of my colleagues.

That many of our systems are designed to catch relatively basic defects after the fact and to construct lists to correct them (time and resources that are rarely planned for by project teams) is, in fact, a quantitative indicator of a systemic problem.  There is no doubt, as Glen Alleman said in his most recent post on Herding Cats, that we need to establish intermediate closed systems to assess performance in each of the discrete segments of our plan.  This is Systems Analysis 101.  But these feedback loops are rarely budgeted.  When they are budgeted, as in EVM, they are usually viewed as a regulatory cost that demands documentation demonstrating an overriding benefit.  Those benefits would be more generally accepted if the indicators were more clearly tied to cause and effect and were provided in a timely enough manner for course correction.  But a great deal of effort is still expended in fixing the underlying artifacts on which our analysis depends, well after the fact.  This makes project performance analysis that much harder.
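For illustration, here is a minimal sketch of what two such after-the-fact checks might look like.  The dictionary-based schedule representation, field names, and any implied thresholds are assumptions for the sketch, not the actual 14-Point Assessment rules.  The point is that this is QC: the schedule already exists, and we are cataloging its defects.

```python
def missing_logic_pct(schedule: dict[str, dict]) -> float:
    """Percent of tasks lacking either a predecessor or a successor.
    Real assessments typically exempt start and finish milestones."""
    if not schedule:
        return 0.0
    flagged = [
        t for t in schedule.values()
        if not t.get("predecessors") or not t.get("successors")
    ]
    return 100.0 * len(flagged) / len(schedule)

def hard_constraint_pct(schedule: dict[str, dict]) -> float:
    """Percent of tasks pinned by hard date constraints."""
    if not schedule:
        return 0.0
    flagged = [
        t for t in schedule.values()
        if t.get("constraint") in {"must_start_on", "must_finish_on"}
    ]
    return 100.0 * len(flagged) / len(schedule)

# Hypothetical schedule extract.
schedule = {
    "A": {"predecessors": [], "successors": ["B"], "constraint": None},
    "B": {"predecessors": ["A"], "successors": ["C"], "constraint": None},
    "C": {"predecessors": ["B"], "successors": [], "constraint": "must_finish_on"},
}
findings = {
    "missing_logic_pct": missing_logic_pct(schedule),
    "hard_constraint_pct": hard_constraint_pct(schedule),
}
print(findings)  # the output is a defect list to be fixed after the fact (QC)
```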

Making corrections to your course based on the navigational fixes you take when entering port is an acceptable practice.  Using a wrong or erroneous chart is incompetence.  Thus, there is an alternative way to view this problem: accept no defects in the construction of the governing project management planning documents.

In discussing this issue, I have been reminded by colleagues that doing this very thing was the stated purpose of the Integrated Baseline Review (IBR) when it was first deployed by the Navy in the 1990s.  I served as a member of that first IBR Steering Group when the process was being tested and deployed.  In fact, the initial recommendation of the group was that the IBR not be treated or imposed as an external inspection (which is much how it is being applied today) but rather as an opportunity for the program team to identify risks and opportunities and to ensure that the essential project artifacts (the integrated master plan (IMP), the integrated master schedule (IMS), the performance measurement baseline (PMB), risk analysis systems, the technical performance plan and milestones, etc.), which would eventually inform both the Systems Description (SD) and the CAM notebooks, were properly constructed.  In addition, the IBR was intended to be reconvened as necessary over the life of a project or program when changes necessitated adjustments to the processes that affected program performance and expectations.

So what is the solution?  I would posit that it involves several changes.

First, the artificial dichotomy between the cost analyst and schedule analyst disciplines needs to end, both across the industry and within the professional organizations that support them.  That there is both a College of Performance Management and a multiplicity of schedule-focused organizations, separate and, in many cases, in competition with one another, illustrates the problem.  It made a great deal of sense to create specialties when these disciplines were still evolving and involved specialized knowledge that imposed very high barriers to entry.  But the advancement of information systems has not only broken down those barriers to understanding and utilizing the methods of these specialties; the cross-fertilization of disciplines has also given us insights into the systems we are tasked to manage in ways that seemed impossible just five or six years ago, a span of two to three full software generations.

Second, we have allowed well-entrenched technology to constrain our view of the possible for too long.  We obviously know that we have come a long way from physically posting time-phased plans on a magnetic board.  But we have also come a long way from being constrained by software technology that limits us to hard-coded applications that do only one thing, whether that one thing is EVM, schedule analysis, technical performance, or (to finish the analogy) fixing errors well after we have decided to sail.  All too often that last condition puts us in the shoals.