Been reading quite a bit about assessments and analysis lately–the 14-Point Assessment, project health assessments, etc. Assessment and analysis are a necessary part of oversight, particularly for agencies tasked with overseeing contracting efforts involving public funds. From a project management perspective, however, the only beneficiaries seem to be one-off best-of-breed software tool manufacturers and some consultants who specialize in this sort of thing.
Assessment of the quality of our antecedent project artifacts is a necessary evil only because the defects in those plans undermine our ability to determine what is really happening in the project. It is but a band-aid–a temporary patch for a more systemic problem–for we must ask ourselves: how was a schedule that breached several elements of the 14-Point Assessment constructed in the first place? This is, of course, a rhetorical question, and one whose answer is well known to most, if not all, of my colleagues.
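To make the point concrete, consider what a couple of those 14-Point checks look like when automated. The sketch below is purely illustrative (the Task structure, thresholds, and sample data are my own assumptions, not any particular tool's implementation), but it shows how mechanically detectable these defects are: missing logic and leads, the first two points of the assessment.

```python
# A minimal sketch of two 14-Point-style schedule quality checks.
# The Task structure is a simplification (lags normally belong to
# relationships, not tasks) and the sample data is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: str
    predecessors: list = field(default_factory=list)
    successors: list = field(default_factory=list)
    lag_days: int = 0          # negative values represent leads
    is_milestone: bool = False

def missing_logic_pct(tasks):
    """Point 1 (logic): share of detail tasks lacking a predecessor or successor."""
    detail = [t for t in tasks if not t.is_milestone]
    if not detail:
        return 0.0
    incomplete = [t for t in detail if not t.predecessors or not t.successors]
    return 100.0 * len(incomplete) / len(detail)

def lead_count(tasks):
    """Point 2 (leads): count of relationships carrying a negative lag."""
    return sum(1 for t in tasks if t.lag_days < 0)

schedule = [
    Task("A", successors=["B"]),
    Task("B", predecessors=["A"]),   # no successor: flagged as missing logic
    Task("C", lag_days=-5),          # a lead: flagged
]
print(f"Missing logic: {missing_logic_pct(schedule):.1f}% (threshold 5%)")
print(f"Leads: {lead_count(schedule)} (threshold 0)")
```

If defects this basic can be flagged by a few lines of code after the fact, they can just as easily be prevented when the schedule is constructed.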
That many of our systems are designed to catch relatively basic defects after the fact and to construct lists to correct them–time and resources that are rarely planned for by project teams–is, in fact, a telling indicator of a systemic problem. There is no doubt, as Glen Alleman said in his most recent post on Herding Cats, that we need to establish intermediate closed systems to assess performance in each of the discrete segments of our plan. This is Systems Analysis 101. But these feedback loops are rarely budgeted. When they are budgeted, as in EVM, they are usually viewed as a regulatory cost whose benefits must be documented and justified at every turn. Those benefits would be more generally accepted if the indicators were more clearly tied to cause and effect and provided in a timely enough manner for course correction. But a great deal of effort is still expended in fixing the underlying artifacts on which our analysis depends, well after the fact. This makes project performance analysis that much harder.
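For those outside the EVM world, the indicators in question are not exotic. The sketch below (field names and sample figures are purely hypothetical) computes the standard cost and schedule variances and indices per control account; the analytical machinery is trivial. The hard part, and the part a defective baseline destroys, is tracing each index back to a cause in the plan.

```python
# A minimal sketch of the closed-loop indicators EVM is meant to provide,
# computed per control account so variances can be traced to a cause.
# The control account names and monthly figures are illustrative only.

def evm_indices(bcws: float, bcwp: float, acwp: float) -> dict:
    """Return cost/schedule variances and performance indices for one control account."""
    return {
        "CV": bcwp - acwp,                      # cost variance: earned minus actual
        "SV": bcwp - bcws,                      # schedule variance: earned minus planned
        "CPI": bcwp / acwp if acwp else None,   # cost efficiency
        "SPI": bcwp / bcws if bcws else None,   # schedule efficiency
    }

# Hypothetical monthly status for two control accounts
control_accounts = {
    "CA-100 Structures": {"bcws": 120.0, "bcwp": 110.0, "acwp": 130.0},
    "CA-200 Avionics":   {"bcws":  80.0, "bcwp":  85.0, "acwp":  82.0},
}

for name, data in control_accounts.items():
    idx = evm_indices(**data)
    print(f"{name}: CPI={idx['CPI']:.2f} SPI={idx['SPI']:.2f} "
          f"CV={idx['CV']:+.1f} SV={idx['SV']:+.1f}")
```

When the PMB and IMS are sound, a CPI of 0.85 points to a specific control account and a specific cause; when they are not, it points nowhere in particular.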
Making course corrections based on the fixes you take when entering port is acceptable practice. Using a wrong or erroneous chart is incompetence. Thus, there is an alternative way to view this problem, and that is to accept no defects in the construction of the governing project management planning documents.
In discussing this issue, I have been reminded by colleagues that doing this very thing was the stated purpose of the Integrated Baseline Review when it was first deployed by the Navy in the 1990s. I served as a member of that first IBR Steering Group when the process was being tested and deployed. In fact, the initial recommendation of the group was that the IBR was not to be treated or imposed as an external inspection–which is much as it is being applied today–but rather as an opportunity for the program team to identify risks and opportunities and to ensure that the essential project artifacts were properly constructed: the integrated master plan (IMP), integrated master schedule (IMS), performance management baseline (PMB), risk analysis systems, technical performance plan and milestones, etc., which would eventually inform both the Systems Description (SD) and the CAM notebooks. In addition, the IBR was intended to be reconvened as necessary over the life of a project or program when changes necessitated adjustments to the processes that affected program performance and expectations.
So what is the solution? I would posit that it involves several changes.
First, the artificial dichotomy between the cost and schedule analyst disciplines needs to end, both across the industry and within the professional organizations that support them. That there is both a College of Performance Management and a multiplicity of schedule-focused organizations–separate and, in many cases, in competition with one another–is evidence of the divide. It made a great deal of sense to create specialties when these disciplines were still evolving and involved specialized knowledge that created very high barriers to entry. But the advancement of information systems has not only broken down the barriers to understanding and applying the methods of these specialties, the cross-fertilization of the disciplines has also given us insights into the systems we are tasked to manage in ways that seemed impossible just five or six years ago: two to three full software generations over that time.
Second, we have allowed well-entrenched technology to constrain our view of what is possible for too long. We obviously know that we have come a long way from physically posting time-phased plans on a magnetic board. But we have also come a long way from being constrained by software technology that limits us to hard-coded applications that do only one thing–whether that one thing is EVM, schedule analysis, technical performance tracking, or (to finish the analogy) fixing errors well after we have decided to sail. All too often that last condition puts us in the shoals.