Take Me To The River, Part 3, Technical Performance and Risk Management–Digital Elements of Integrated Program Management

Part three of this series of articles on the elements of Integrated Program and Project Management will focus on two additional areas of IPM: technical performance and risk management. Prior to jumping in, however–and given the timeframe over which I’ve written this series–a summary to date is in order.

The first part of our exploration into the IPM digital inventory concerned cost elements. Cost in this sense was broadly defined as any cost element of interest to project or program managers and their teams. I first clarified our terms by defining the differences between project and program management–and how those differences influence our focus. Then I outlined the term cost as falling into the following categories:

  1. Contract costs and the cost categories within the organizational hierarchy;
  2. Cost estimates, “colors” of money where such distinctions exist, and cashflow;
  3. Additional costs that relate to the program or project effort that are not always directly attributed to the effort, such as PMA, furnished materials or labor, corollary and supporting efforts on the part of the customer, and other overhead and G&A type costs;
  4. Contract cost performance under earned value management (EVM); and
  5. Portfolio management considerations and total cost of ownership.

The second part of this exposition concerned schedule elements, that is, the time-phased planning and performance essential to any project or program effort. The article first discussed the primacy of the schedule in project and program planning and execution, given its role in defining the basis for the cost elements addressed in the first part of the series. I then discussed the need for integrated planning as the basis for a valid executable schedule and PMB; the detailed elements and the sources of that information in the literature and formal guidance; the role of framing assumptions in the construction of schedule and cost plans, with their holistic approach to go/no-go decision-making; and, finally, the role of the schedule in establishing the project and program battle rhythm.

Now, in this final section, we will survey the remaining practical elements of IPM, beyond even my expansive view of cost and schedule integration.

Technical Performance Management

Given the paper that resulted from a programmatic effort in the Navy regarding Technical Performance Management (TPM), it is probably not surprising that I will start here. My core paper in the link above represents what I viewed as an initial effort at integrating TPM to determine the impacts of that performance within program cost performance (EVM) projections. This approach was based on the following foundations:

a. That the solution needed to tie technical achievement to EVM so that it represented greater fidelity to performance than what I viewed as indirect and imprecise methods, such as WBS elements that bear only partial or tangential relationships to technical performance measures, or more subjective and arbitrary methods, such as percent complete.

b. That the approach needed to be tied to established systems engineering methods of technical risk management.

c. That the solution should be simple to implement and statistically valid in its results, tested by retrospective analyses that performed forensic what-if analysis against the ultimate results. (A simple sketch of the first foundation follows below.)
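To make the first foundation concrete, here is a minimal sketch of earning value from measured technical achievement rather than from a subjective percent complete. The work package, the weight figures, and the budget are all hypothetical; the point is only the mechanism of scaling earned value to progress against a planned technical threshold.

```python
# Minimal sketch: earning value from technical achievement rather than a
# subjective percent complete. All names and numbers are hypothetical.

def technical_percent_complete(baseline, threshold, achieved):
    """Scale achievement between the baseline (0%) and the planned
    threshold (100%) for a technical performance measure. Works whether
    the measure is meant to increase or decrease."""
    span = threshold - baseline
    return max(0.0, min(1.0, (achieved - baseline) / span))

# A work package whose scope is reducing airframe weight from 1,200 kg
# toward a 1,000 kg threshold; budget at completion is $500k (invented).
bac = 500_000
pct = technical_percent_complete(baseline=1200, threshold=1000, achieved=1150)
bcwp = bac * pct  # earned value driven by measured technical progress

print(f"technical % complete: {pct:.0%}, BCWP: ${bcwp:,.0f}")
```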

One need only look at the extensive bibliography that accompanied my paper to understand that there were clear foundations for TPM, but it remained–and in some quarters remains–a controversial concept that provoked resistance, even though programs clearly must track achievement of technical requirements. For example, the foundations of technical risk management and tracking that the paper cited were in use at what was then Martin Marietta for many years. Why, then, the resistance to change?

First, I think, the domain of project performance has rested too long in the hands of the EVM community, with its historical foundations in cost and financial management and its risk-averse approach to innovation. Second, given this history, the natural differences among program management, systems engineering, and earned value SMEs meant that no one group had the foundation necessary to take ownership of this development in systems and business intelligence improvement. Even in industry, such cross-domain initiatives tend initially to garner skepticism, if not outright cynicism, and resistance from personnel unsure of how the new measures will affect the assessment of their work.

But keep in mind that, dating myself a bit, this is the same type of reaction that organizations experienced during the first wave of the digitization of work. Each initiative I witnessed, from the introduction of desktop computers connected to a central server, to the introduction of the first PCs, to the digitization of work products, was met with the common refrain that it was too experimental, or too transient, or too unstable, or too unproven, until it wasn't any of those things.

I also overstate this resistance a bit. Over the last 20 years, organizations within the military services have adopted this method of TPM integration, or a variation of it, as have some commercial companies. Furthermore, thinking and contributions on TPM have advanced in the intervening years.

The elements of technical performance management can be found in the language of the scope being planned. The brilliant paper by Glen B. Alleman, Thomas J. Coonce, and Rick A. Price, "Building a Credible Performance Measurement Baseline," establishes the basis for tying project and program performance to technical achievement. These elements are measures of effectiveness (MoEs), measures of performance (MoPs), technical performance measures (TPMs), and key performance parameters and indicators (KPPs and KPIs). Taken together, these define the framing assumptions for the project or program.

When the systems, procedures, and artifacts are properly constructed from the decomposition of planning documents and performance language, the assignment of these elements to the WBS and to specific work packages establishes a strong foundation for tying project and program success both to overall technical performance and to the framing assumptions implicit in the effort.

What this means is that there may also be a technical performance baseline, which acts in parallel to the cost-focused performance management baseline. This technical performance baseline is planned at the same work package level as the work itself. The assessment of progress is further decomposed to look at the timeframe of that point of progress within the context of the integrated master schedule (the IMS). We ask ourselves, as a function of risk: what is the chance of achieving the next threshold in our technical performance plan?
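That question can be given a rough quantitative answer. The sketch below assumes, purely for illustration, that progress per reporting period is normally distributed and asks how likely the remaining gap is to close by the next milestone; every figure is invented.

```python
# Hedged sketch: chance of meeting the next TPM threshold, assuming
# normally distributed progress per reporting period (illustrative only).
from statistics import NormalDist, mean, stdev

# Hypothetical TPM history: kg of weight shed per reporting period to date.
weight_reductions = [12, 18, 9, 15, 11]
gap_to_threshold = 20          # kg still to shed before the next milestone
periods_remaining = 1

mu = mean(weight_reductions) * periods_remaining
sigma = stdev(weight_reductions) * periods_remaining ** 0.5
# Probability that remaining progress closes the gap in time.
p_achieve = 1 - NormalDist(mu, sigma).cdf(gap_to_threshold)
print(f"P(meet next threshold): {p_achieve:.0%}")
```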

As with all elements of work, our MoEs, MoPs, TPMs, KPPs, and KPIs do not all reside at the same level of performance management and tracking within the WBS hierarchy. Some can be tracked at the lowest level, usually the work package; some take contributions from lower levels and are summarized at the control account level; and others reside at the total project or program level, with contributors from specific lower levels of the WBS structure.
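Because these measures enter the hierarchy at different levels, some form of rollup is needed. The sketch below shows one hypothetical weighted rollup through a toy WBS; a real program would derive both the structure and the weights from the engineering decomposition, not the arbitrary values used here.

```python
# Sketch of rolling technical measures up a WBS hierarchy. Structure and
# weights are hypothetical, for illustration only.

def rollup(node):
    """Return the weighted achievement of a WBS node (0.0..1.0)."""
    if "achieved" in node:        # leaf: a measure tracked at this level
        return node["achieved"]
    return sum(child["weight"] * rollup(child) for child in node["children"])

program = {"children": [
    {"weight": 0.6, "children": [                 # control account 1.1
        {"weight": 0.5, "achieved": 0.25},        # WP 1.1.1: weight-reduction TPM
        {"weight": 0.5, "achieved": 0.40},        # WP 1.1.2: power-margin TPM
    ]},
    {"weight": 0.4, "achieved": 0.10},            # 1.2: range KPP tracked at CA level
]}
print(f"program-level technical achievement: {rollup(program):.0%}")
```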

A common example of what is claimed to be a difficult technical performance measure is weight in aircraft design and production. Weight is an essential factor and must be in alignment with the mission of the aircraft. For example, if an aircraft is being built for the Navy, chances are high that the expectation is for it to be able to take off from and land on a moving carrier deck. Takeoff requires coming up to airspeed very quickly. Landings are especially hard, since they are essentially controlled crashes augmented by arresting gear. Airframes, avionics, and engines must operate in a saltwater environment in close proximity to a metal ship. The electromagnetic effects alone, if they are not mitigated in the design and systems of both aircraft and ship, will significantly degrade the ability of the aircraft to operate as intended. Controlling weight in this case is essential, especially when one considers the need for fuel, for ordnance, and for avoiding detection and being shot down.

In current practice, the process of tracking weight over the life of aircraft design and development is tightly controlled. It is a function of tradeoff analysis and decision-making, with contributors from many sub-elements of the WBS hierarchy. Thus, the use of the factor of weight as an argument to defeat the need to tightly integrate technical measures into the performance measurement baseline is a canard. On the contrary, it is an argument for tighter and broader integration of IPM data; in particular, it ties our systems to risk management, making our projections and the basis of our decision-making a function of it. Risk management is the next topic.

Risk Management Elements and Integration

There is a good deal of literature on risk, so I will confine this section to how risk relates to integrated project and program management.

For many subdomains within project and program management, when one mentions the term "risk management," the view often encountered is that the topic at hand is applying Monte Carlo analysis, using pseudo-random numbers, to the integrated master schedule (IMS) to determine the probabilities of a range of task durations and completions. This is known as a Schedule Risk Analysis, or SRA.

Most of the correlations today are based on the landmark paper by Philip M. Lurie and Matthew S. Goldberg, with the sexy title "An approximate method for sampling correlated random variables from partially specified distributions." With Monte Carlo informed by Lurie-Goldberg (for short), we can then make inferences as to alternative critical paths and near-critical paths for time-phasing our work. The contribution of each task, in terms of its criticality to the critical path, can also be measured. Sensitivity analysis identifies the most critical risk elements.
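To illustrate the mechanics, here is a simplified stand-in for a correlation-informed SRA. It is not the Lurie-Goldberg method itself: it uses a plain Gaussian copula to induce correlation between two hypothetical task durations, then reports the finish distribution and each parallel branch's criticality index. The network, the distributions, and the 0.6 correlation are all invented.

```python
# Simplified correlation-informed Monte Carlo SRA (NOT Lurie-Goldberg itself).
import numpy as np
from scipy.special import ndtr
from scipy.stats import triang

rng = np.random.default_rng(7)
n = 20_000

# Two correlated task durations (tasks B and C), via a Gaussian copula:
# correlated standard normals -> uniforms -> triangular durations.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
u = ndtr(z)

def tri_ppf(u, low, mode, high):
    """Inverse CDF of a triangular distribution on [low, high] with given mode."""
    return triang.ppf(u, c=(mode - low) / (high - low), loc=low, scale=high - low)

dur_a = rng.triangular(8, 10, 14, size=n)    # task A precedes both paths
dur_b = tri_ppf(u[:, 0], 20, 25, 40)         # path 1: A -> B -> D
dur_c = tri_ppf(u[:, 1], 18, 26, 38)         # path 2: A -> C -> D
dur_d = rng.triangular(4, 5, 8, size=n)      # task D closes both paths

finish = dur_a + np.maximum(dur_b, dur_c) + dur_d
print(f"P50 finish: {np.percentile(finish, 50):.1f} days, "
      f"P80 finish: {np.percentile(finish, 80):.1f} days")
# Criticality index: how often each parallel branch drives the finish date.
print(f"criticality B: {np.mean(dur_b >= dur_c):.0%}, "
      f"criticality C: {np.mean(dur_c > dur_b):.0%}")
```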

If the integrated master schedule is truly integrated with resources and cost, Lurie-Goldberg allows us to move beyond the single-point estimates that dominate EVM projections and calculate a range of cost outcomes by probability distribution. The same type of analysis can be run against the time-phased PMB.
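In the same hypothetical vein, if each task carries a loaded resource rate, the sampled durations price out a cost range rather than a single-point estimate at completion. The rates and distributions below are invented for illustration.

```python
# Sketch: sampled durations times loaded daily rates yield an EAC range
# rather than a single-point estimate. All figures are hypothetical.
import numpy as np

rng = np.random.default_rng(11)
n = 20_000
tasks = {  # (low, mode, high) duration in days, and a loaded daily rate
    "design":    ((20, 25, 40), 6_000),
    "fabricate": ((30, 35, 50), 9_500),
    "test":      ((10, 12, 20), 4_200),
}
cost = sum(rng.triangular(*tri, size=n) * rate for tri, rate in tasks.values())
p50, p80 = np.percentile(cost, [50, 80])
print(f"cost outcome range: P50 ${p50:,.0f}, P80 ${p80:,.0f}")
```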

But that is just one area of risk management, known as quantitative risk. Another area, which should be familiar to project and program managers, is qualitative risk. The project and programmatic analysis of qualitative risk involves the following steps:

1. Risk identification

2. Risk evaluation

3. Risk handling, and

4. Continual risk management

This is a closed-loop system that produces a risk register, a risk ranking, a risk matrix, risk handling and mitigation plans, and a risk handling waterfall chart. These artifacts of risk analysis also require the monitoring of risk triggers and cross-referencing to risk ownership.
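A skeletal example of what those artifacts reduce to in data terms appears below: register entries scored on a 5x5 probability/impact matrix, with owner, trigger, and handling strategy attached. The scales, banding thresholds, and the risks themselves are illustrative only, not prescriptive.

```python
# Skeletal qualitative risk register with 5x5 probability/impact scoring.
# Scales, thresholds, and the example risks are invented.
from dataclasses import dataclass

@dataclass
class Risk:
    id: str
    description: str
    probability: int   # 1 (remote) .. 5 (near certain)
    impact: int        # 1 (negligible) .. 5 (severe)
    owner: str
    trigger: str
    handling: str      # avoid / transfer / mitigate / accept

    @property
    def score(self):
        return self.probability * self.impact

    @property
    def band(self):
        return "high" if self.score >= 15 else "moderate" if self.score >= 8 else "low"

register = [
    Risk("R-001", "arresting-gear loads exceed airframe margin", 3, 5,
         "structures IPT", "static test exceedance", "mitigate"),
    Risk("R-002", "avionics supplier slips qualification", 4, 3,
         "PMO", "missed supplier milestone", "transfer"),
]
for r in sorted(register, key=lambda r: -r.score):   # ranked register
    print(f"{r.id} [{r.band:8}] P{r.probability} x I{r.impact} = {r.score}: {r.description}")
```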

Once again, though cost impacts are also calculated, along with their probability of manifesting, the strongest tie of risk management begins with the integrated master schedule. Thus, conditional and probabilistic branching provide the project and program team with a step-by-step what-if analysis, yielding alternative schedules that in turn provide ranges of cost impact.
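Probabilistic branching can be sketched in a few lines: in some fraction of simulation iterations a branch (say, a qualification failure and its rework) is taken, dragging both schedule and cost with it. The 25% branch probability, durations, and rate below are all hypothetical.

```python
# Sketch of a probabilistic branch: a rework path taken in some fraction
# of iterations, with schedule and cost consequences. Figures invented.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
base_finish = rng.triangular(90, 100, 125, size=n)     # days, nominal plan
branch_taken = rng.random(n) < 0.25                    # P(qual failure) = 25%
rework = np.where(branch_taken, rng.triangular(15, 20, 35, size=n), 0.0)
finish = base_finish + rework
cost_impact = rework * 7_500                           # loaded daily rate on rework

print(f"P80 finish: {np.percentile(finish, 80):.0f} days, "
      f"mean cost impact: ${cost_impact.mean():,.0f}")
```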

Mainstreaming Risk Management and TPM into IPM

In reality, project and program management without technical performance and risk management is simply monitoring and forecasting. Yet these sub-domains are oftentimes confined to a few specialists, or viewed as dichotomous and independent processes under the general duties of the team.

The economic urgency and essentiality of integrated project and program management lie in the realization that technical achievement of the product, and the assessment and handling of risks along the course of that achievement, are at the core of project and program management.

Take Me To The River, Part 2, Schedule Elements–A Digital Inventory of Integrated Program Management Elements

Recent speaking engagements at various forums have interrupted the flow of this series on IPM elements. At these venues I engaged in discussions on this topic, as well as on the effects of acquisition reform on the IT, program, and project management communities in the DoD and A&D marketplace.

For this post I will restrict the topic to what are often called schedule elements, though that is a nebulous term. Also, one should not conclude that because I am dealing with this topic after cost elements, it is somehow inferior in importance. On the contrary, planning and scheduling are integral to applying resources and costs and to tracking cost performance; in our systemic analysis, their activities, artifacts, and elements are antecedent to cost element considerations.

The Relative Position of Schedule

But the takeaway here is this: under no circumstances should any program or project manager believe that cost and schedule systems represent a dichotomy, nor a hierarchy, of disciplines. They are interdependent and the behavior noted in one will be manifested in the other.

This is important to keep in mind, because the software industry, more than any other, has been responsible for reinforcing and solidifying this (erroneous) perspective. During the first generation of desktop application development, software solutions were built to automate the functions of traditional line and staff functions. This made a great deal of sense.

From a sales and revenue perspective, it is easier to sell a limited niche software “tool” to an established customer base that will ensure both quick acceptance and immediate realization of productivity and labor savings. The connection from the purchase to ROI was easily traceable in the time span and at the level of the person performing their workaday tasks.

Thus, solutions were built to satisfy the needs of cost analysts, schedule analysts, systems engineers, cost estimators, and others. Where specific solutions left gaps, spreadsheet applications such as Microsoft Excel were employed to fill them. It was in no one's interest to go beyond their core competency. Once a dominant incumbent or set of incumbents (an effective monopoly) inhabited a niche, they employed the usual strategies for "stickiness" to defend territory and raise barriers to new entrants.

What was not anticipated by many organizations was that once you automate a function, the nature of the system, if one is to implement the most effective organizational structure, is transformed to conform to the most efficient flow and use of data, and to its resulting transformation into information and intelligence. Oftentimes the skill set to use that intelligence does not exist, because the insights and synergy involved in taking on larger and more comprehensive datasets, which are themselves more credible and accurate, were not anticipated in adjusting the organizational structure.

This is changing and must change, because the old way of using limited sets of data is not tenable in the age of big(ger) data, which provides a more comprehensive view of business conditions. At least, not if a company or organization wants to stay relevant or profitable.

Characteristics and Basic Elements of the Project Schedule

If you perform a Google search on "project schedule" while reading this post, you will find a number of definitions, some of which overlap. For example, the PMBOK defines a schedule as, quite simply, "the planned dates for performing activities and the planned dates for meeting milestones."

Thus our elements include planned dates, activities, and milestones. But is that all? Under this definition, any kind of plan, from a minor household renovation to building an aircraft carrier, would contain only these elements.

I don’t think so.

For complex projects and programs, which are the focus of this blog, our definition of a project schedule is a bit more comprehensive. If you go to A Guide for DoD Program Managers, mentioned in my last post, you will find even less specificity.

The reason for this is that what we define as a project schedule is part and parcel of the planning phase of a project, and is then further detailed in the time-phased planning elements for execution of the project through its lifespan into production. It is the schedule that ties together all of the disciplines in putting together a project–acquisition, systems engineering, cost estimating, and project performance management.

A refrain I have heard in attending schedule-focused conferences over the years, and in talking to program management colleagues, is that:

a. It is hard to find a good scheduler, and

b. Constructing a schedule is more of an art than a science.

I can only say that this cedes the field to a small cadre of personnel who perform an essential function, but who do so with few objective tests of effectiveness or accountability–until it is too late.

But the reality is quite different from the fuzzy perception of schedule that is often assumed. All critical path method (CPM) schedules describe the same phenomena, though the lexicon will vary based on the specific proprietary application employed.

In government-focused and large commercial projects, the schedule is the heart of planning and execution. In the DoD world it is known as the Integrated Master Schedule (IMS), which utilizes the inherent bottom-up relationships of elements to determine the critical path. The main sources regarding the IMS overlap a great deal, but tend to be either aspirational (and unfortunately not prescriptive in defining the basic characteristics of an IMS) or to reflect the "art over science" approach. For those following along, these are the DoD Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide of 21 October 2005, the NAVAIR Integrated Master Schedule (IMS) Guidebook of February 2010, and the NDIA Planning and Scheduling Excellence Guide (PASEG) of 9 March 2016 (unfortunately no current direct link).

The key elements that comprise an IMS, in addition to what we identified under the PMBOK, are that it is a networked schedule consisting of specific durations assigned to specific work tasks that must be accomplished in discrete work packages. In most cases these durations are derived either by some fixed, manual method or through the optimization algorithm applied by the CPM application (more on this below). These work packages are discrete, meaning that they represent the full scope of the work that must be accomplished during the specified duration to create an end product. Discrete work is distinguished from level of effort (LOE) work, the latter being effort that is always expended, such as administrative and management tasks, and that is not directly tied to the accomplishment of an end product.

These work packages are tied together to illustrate antecedent and progressive work, showing predecessor and successor relationships. Long-term planning activities, which cannot be fleshed out until more immediate work is completed, are set aside as placeholders called planning packages. Each of the elements tracked in the IMS is based on established criteria that define completion, events, and specific accomplishments.

The most comprehensive IMSs consist of detailed planning that includes resources and elements of cost.

Detailed Elements of the IMS

Given these general elements, the best source for identifying the key elements of detailed schedules is also found in Department of Defense documents. The core document in this case is the Data Item Description for the IMS, numbered DI-MGMT-81650, the latest version of which is dated March 30, 2005. There are a minimum of 32 data elements; some of these have already been mentioned, and I will not repeat them in this post since they are well listed and identified in the source document.

For those not familiar with these documents, Data Item Descriptions (or DiDs–gotta love acronyms) represent the detailed technical documents for artifacts involved in the management of DoD-related operations. Thus, this provides us with a pretty good inventory of elements to source. But there are others that are implied.

For example, the 81650 DiD identifies an element known as "methodology." What this means is that each scheduling application has an optimization engine, which is where the true differences in schedule construction, and the intellectual property, reside. The elements that affect these calculations are time-based, duration-based, related to float and slack, and related to resources.

The time-based elements consist of early start, early finish, late start, and late finish. Duration-based elements consist of shortest time, longest time, and greatest rank weight. An additional element related to schedule float identifies minimum slack. Resources are further delineated by greatest work content and greatest cumulative resource content.

I would note that the NDIA PASEG adds some sub-elements to this list that are based on the algorithmic results of the schedule engines, and thus tends to ignore the antecedent elements, noted above, needed to validate the optimization engine. These additional sub-elements are total float, free float, soft constraints, hard constraints, and–also found in the aforementioned DiD–program, task, and resource calendars.
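For readers who have only encountered these terms through a tool's output, the toy example below computes the time-based elements directly: a forward and backward critical path method pass over a four-task network, yielding early/late starts and finishes, total float, and free float. The network is invented and uses finish-to-start links only.

```python
# Minimal CPM forward/backward pass over a hypothetical four-task network.
durations = {"A": 5, "B": 10, "C": 7, "D": 4}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
succs = {t: [s for s, ps in preds.items() if t in ps] for t in durations}

order = ["A", "B", "C", "D"]          # topologically sorted

es, ef = {}, {}
for t in order:                       # forward pass: early dates
    es[t] = max((ef[p] for p in preds[t]), default=0)
    ef[t] = es[t] + durations[t]

finish = max(ef.values())
ls, lf = {}, {}
for t in reversed(order):             # backward pass: late dates
    lf[t] = min((ls[s] for s in succs[t]), default=finish)
    ls[t] = lf[t] - durations[t]

for t in order:
    total_float = ls[t] - es[t]       # slip without moving project finish
    free_float = min((es[s] for s in succs[t]), default=finish) - ef[t]
    tag = " <- critical" if total_float == 0 else ""
    print(f"{t}: ES={es[t]} EF={ef[t]} LS={ls[t]} LF={lf[t]} "
          f"TF={total_float} FF={free_float}{tag}")
```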

Normally, this is where a survey would end–with schedule-specific data elements focused on the details of the schedule. But we’re going to challenge our assumptions a bit more.

Framing Assumptions of Schedules and Programs

The essential document that provides a definition of the term "framing assumption" was published by the RAND Corporation in 2014, entitled Identifying Acquisition Framing Assumptions Through Structured Deliberation, by Mark V. Arena and Lauren A. Mayer. The definition of a framing assumption is "any explicit or implicit assumption that is central in shaping cost, schedule, or performance expectations."

As I have explored in my prior post, the term "cost" is a fuzzy one. To some it means earned value management, which measures a small part of the costs of development and ownership of a system. To others it means total cost of ownership. Schedule is an implicit part of this definition, and then we have performance expectations, which I will deal with in a separate post.

But we can apply the concept of framing assumptions in two ways.

The first applies to the assumed purpose of the schedule. Why do we construct one? This goes back to my earlier statement that "…the schedule…ties together all of the disciplines in putting together a project–acquisition, systems engineering, cost estimating, and project performance management."

For the NDIA PASEG the IMS is a “tool, not just a report” that “provides an ever-changing window into the progress (or lack of it) of current work effort. The strategic mission of the schedule is to point out future risks and opportunities.”

For the NAVAIR IMS Guide the IMS “At a top level…contain(ing) the networked, detailed tasks necessary to ensure successful program execution…” that “capture(s) project tasks and task relationships”, “show(s) the magnitude and how long each task will take”, “show(s) resources, durations, and constraints for each task” and “show(s) the critical path.”

For the DiD 81650 “The Integrated Master Schedule (IMS) is an integrated schedule containing the networked, detailed tasks necessary to ensure successful program execution.”

But the most comprehensive definition, which goes to the core of the purpose of an IMS, can be found in paragraph 1.2 of the DoD Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide (IMP/IMS Guide). The elements of this purpose are worth transcribing, because if we have a requirement and cannot answer the "So What?" question, that is, if we cannot effectively determine why something must be done, then it probably does not need to be done (or we need to apply rigor in the development of our expertise).

What the IMP/IMS Guide does is clearly tie the schedule to the programmatic framing assumptions (in the sense in which RAND used the term) from initial acquisition through planning. Thus, the Integrated Master Plan (IMP) is firmly established as an antecedent and intermediate planning process (not merely an artifact or tool) that results in the program R&D execution process.

Taken as a whole, these processes and their resulting artifacts do the following:

a. Provide offerors and acquiring activities with detailed execution planning, organization, and scheduling information that sets realistic expectations for the resulting contract action.

b. Serve as the execution plan for how the supplier will meet the contract's performance requirements within cost and schedule constraints.

c. Provide a basis for integrating all of the functions involved in development and deployment of the system being acquired and, after award, set the framing assumptions of the program.

d. Provide the basis for determining and assessing progress, identifying risks, determining the basis for contractual award fees and penalties, assessing progress on Key Performance Parameters (KPPs) and Technical Performance Measures (TPMs), determining alternative paths to project completion, and identifying opportunities for innovation and new acquisitions not apparent at the time of the award.

What all of this means is that the Integrated Master Schedule is too important to be left to the master scheduler. Yes, the schedule is a "tool" to those at the most basic tactical level of work execution. Yes, it is also an artifact and a record.

But, more importantly, it is the comprehensive notional representation of the project’s or program’s scope, effort, progress, and assessment.

Private and Government-focused Industry Practice

A word must be said here about the difference between purely private industry practice in managing large projects and programs, and the skew in these posts toward those industries that focus on public sector acquisition.

In the listing of schedule elements given earlier there is a reference to resources and elements of cost, yet this is an area where standard practice diverges. In private industry, the application of resource assignments to specific work is standard practice and is found in the IMS.

In companies focused on the public sector and DoD, the practice is to establish a separate set of data outside of the schedule to manage resources. Needless to say, this creates problems in validating data across disparate systems related to the lowest level of planning and execution of a project or program. The basis for it, I think, is the view of the schedule as a "tool" rather than as the basis for project execution. This "tool" mindset also allows for separate "earned value engines" that oftentimes do not synchronize with the execution of the schedule, not only undermining the practical value of both, but also creating systems complexity and inefficiency where none need exist.
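By contrast, when resources live on the tasks themselves, the time-phased budget falls directly out of the schedule, leaving no separate engine to drift out of sync. The sketch below derives a cumulative planned value curve from three resource-loaded tasks; the names, dates, and rates are hypothetical.

```python
# Sketch: deriving the time-phased budget (a PV curve) directly from a
# resource-loaded schedule. All names, dates, and rates are invented.
tasks = [  # (name, start_day, duration_days, daily_loaded_cost)
    ("design",    0, 20, 6_000),
    ("fabricate", 20, 30, 9_500),
    ("test",      50, 12, 4_200),
]
horizon = max(start + dur for _, start, dur, _ in tasks)
pv_by_day = [0.0] * (horizon + 1)
for _, start, dur, rate in tasks:
    for day in range(start, start + dur):
        pv_by_day[day + 1] += rate           # budget planned for that day
cum_pv = [sum(pv_by_day[: d + 1]) for d in range(horizon + 1)]
for day in (10, 30, horizon):
    print(f"day {day:2d}: cumulative PV ${cum_pv[day]:,.0f}")
```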

Another gap found in many areas of public acquisition concerns the development of an integrated master plan antecedent to the integrated master schedule. The cause here, once again, I believe, is viewing the discipline of systems engineering as separate: somehow walled off from the continuing assessment of program execution, though that assumption is not supported by program phasing and milestone planning and achievement.

From the perspective of Integrated Program/Project Management, these considerations cannot be ignored, and so our inventory of essential data elements must include elements from these practices.

But Wait! There’s More!

Most discussions at conferences and professional meetings usually stop at this point, viewing cost and schedule integration as the essence of IPM, with "cost" limited to EVM. Some will add a few "oh by the ways," such as technical performance and risk. I will address these in the next post as well.

But there are also other systems and processes relevant to our inventory. What I have covered thus far in this series should challenge you, if you have been paying attention.

I tackled cost first because of the assumptions implicit in equating it with EVM, and then went on to demonstrate that there are other elements of cost that provide a more comprehensive view. This is not to denigrate the value of EVM, since it is an essential process in project management, but to demonstrate that its analytics are not comprehensive and, as with any complex system, require the contribution of additional information, depending on the level and type of work performance and progress being recorded and assessed.

In this post I have tackled the IMS, and have demonstrated that it is not a supplementary process, but one central to all other processes and actions being taken in the execution of the project or program. Many times people enter the schedule from an assessment of cost performance, tracing cost drivers to specific schedule activities and then to tasks. But this has it backwards, an approach based on the best technology available sometime in the late 1990s.

It is the schedule that brings together all relevant information from our execution and control processes and systems. It seems to me that the schedule is the first place one should go: the first elements to trace are those related to schedule slippage and unexpected resource consumption, which are then traced to contract cost impact.

But, of course, there is more–and these other elements may turn out to be of greater consequence than just cost and schedule considerations. More on these in my next post.

In Closing: Battle Rhythm and the Plans of the Day and Week

When I was on active duty in the Navy we planned our days and weeks around a Plan of the Day or Plan of the Week. This is a posted agenda so that the entire ship or command understands the major events that affect its operations. It establishes focus on the main events at hand and fosters communication both laterally and vertically within the chain of command.

As one rises in rank and responsibility, it is important to understand the operational tempo of the unit or ship, its systems, and its subsystems. This is important in avoiding crisis management. This is known as Battle Rhythm.

Baked into the schedule (assuming proper construction and effective integrated product teaming) are the major events, milestones, and expected achievements of the program or project. Thus, there are events to plan around, and these items are anticipated on a daily, weekly, biweekly, monthly, quarterly, and major-milestone basis.

Given an effective battle rhythm, a PM should never complain about performance and progress indicators "looking into the rear view mirror." If that is the case, then perhaps the PM should look at the effectiveness and timeliness of the underlying project and program systems. When a PMO complains of information and intelligence arriving too late to be actionable, it is actually describing a condition of ineffective, latent, and disjointed information and intelligence systems.

Thus, our next step in our next post is to identify more salient IPM elements that cut to the heart of the matter.

When You’re a Jet You’re a Jet all the Way — Software as a Change Agent for Professional Development

Earlier in the week Dave Gordon at his blog responded to my post on data normalization and rightly introduced the need for data rationalization. I had omitted the latter concept in my own post, but strongly implied that the two were closely aligned in my broad definition of normalization, which went beyond the boundaries of eliminating redundancies. In the end, thinking about this, I prefer Dave's dichotomy because it more clearly defines what we are doing.

Later in the week I found myself elaborating on these issues in discussions with customers and other professionals in the project management discipline. In the projects in which I am involved, what I have found is that the process of normalizing and rationalizing data, even historical data (which, contrary to Dave's assertion, can be maintained, at least in my business), acts as a change agent in defining the agnostic characteristics of the type of data being normalized and rationalized.

What I mean here is that, for instance, a time-phased CPM schedule that eventually becomes an integrated master schedule has an analogue. For years we have been told, mostly by marketing types working for software manufacturers, that each provides a secret sauce that cannot be reconciled against its competitors. As a result, entire professional organizations, conferences, white papers, and presentations have been given over to proving this assertion. When looking at the data, however, the assertion is invalid.

The key differentiator between CPM scheduling applications is the optimization engine. That is the secret sauce and the black box where the valuable IP lies. It is the algorithms in the optimization engine that identify for us those schedule activities on the critical and near-critical paths. But when you run these engines side by side on the same schedule, their results are well within one standard deviation of one another.

Keep in mind that I'm talking about differences in data related to normalization and rationalization, and whether these differences can be reconciled. There are other differences in features between the applications that do make a difference in their use and functionality: whether they can lock down a baseline, manage multiple baselines, prevent future work from being planned and executed in the past (yes, this happens), handle hammocks, scale properly, etc. Because of these functional differences, the same data may have been given a different value in the table or the file. As Dave Gordon rightly points out, reconciling what are on the surface irreconcilable values requires specialized knowledge. Well, if you have that specialized knowledge, then you can achieve what otherwise seems impossible. Once you achieve this "impossible" feat, it quickly becomes apparent that the features and functions involved are based on a very limited number of values that are common across CPM scheduling applications.
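As a toy illustration of what normalization and rationalization amount to in practice, the sketch below maps two invented tool schemas, with different field names and different duration units, onto one common representation. Both "schemas" are made up for the example; real mappings require exactly the specialized knowledge Dave describes.

```python
# Toy normalization/rationalization across two CPM tools: the field names
# and value encodings differ, the underlying data do not. Both schemas
# here are invented for the example.
FIELD_MAP = {
    "tool_a": {"act_id": "task_id", "orig_dur": "duration", "tf": "total_float"},
    "tool_b": {"ID": "task_id", "Duration": "duration", "TotalSlack": "total_float"},
}
VALUE_MAP = {"duration": {"tool_b": lambda v: v / 8.0}}  # hours -> days

def rationalize(record, source):
    """Map a tool-specific record onto the common schema."""
    out = {}
    for raw_key, common_key in FIELD_MAP[source].items():
        value = record[raw_key]
        fix = VALUE_MAP.get(common_key, {}).get(source)
        out[common_key] = fix(value) if fix else value
    return out

a = rationalize({"act_id": "1080", "orig_dur": 10, "tf": 3}, "tool_a")
b = rationalize({"ID": "1080", "Duration": 80, "TotalSlack": 3}, "tool_b")
assert a == b  # same task, same values, once rationalized
print(a)
```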

That the underlying values are common should not be surprising to those of you who have been doing this a long time. Back in the 1980s we would use visual display boards to map out short segments of the schedule. We would manually construct schedules on very rudimentary (by today's standards) mainframe computers and get very long dot matrix representations of the schedule to tape to the "War Room" walls. The resources, risks, and the like had to be drawn on the schedule. This manual process required an understanding of CPM schedule construction similar to someone still using long division today. There was actually a time when people had to memorize their log and square root tables. It was not very efficient, but the deep understanding of the analogue schedule has since been lost with the introduction of new technology. This came to mind when I saw on LinkedIn a question about what should be asked of a master scheduler in an interview.

As a result of new technology, schedulers aligned themselves into camps based on the application they selected, or that was selected for them, in their jobs. Over time I have seen brand loyalty turn into partisanship. Once again, this should not be surprising. If you have spent ten years of your career on a very popular scheduling application, anything that may undermine your investment in that choice–and which makes employment possible–will be deemed a threat.

I first came upon this behavior years ago when I was serving as CIO for a project management organization. Some PMOs could not share information–not even e-mail and documents–because most were using PCs and some were using Macs. The problem was that the key PMO was using Macs. This was before Microsoft and Apple got together and solved this for us. Needless to say, this undermined organizational effectiveness. My attempt to get everyone on the same page in terms of operating system compatibility sparked a significant backlash. Luckily for me, Microsoft soon introduced its first solution to address this issue. So, in the end, the "Macintites," as we good-naturedly called them, could use their Macs for business common to other parts of the organization.

This almost cultish behavior finds itself in new places today: in the iPhone and Droid wars, in the use of Agile, and among CPM scheduling application partisans.  It is true that those of us in the software industry certainly want to see brand loyalty.  It is one of the key measures of success in proving the product’s value and effectiveness.  But it need not undermine the fact that a scheduler is a key specialist in the project management discipline.  If you are a Jet, you don’t need to be a Jet all the way.

Since creating generic analogues of schedules from submitted third-party data, I have found insights into project performance that previously were not available. The power of digitization, along with normalization and rationalization, allows the data to be effectively integrated at the proper point of intersection with other dimensions of project performance, such as cost performance and risk. Freed from the shackles of having to learn the specific idiosyncrasies of particular applications, the deep understanding of scheduling is being reintroduced. This has been a long time coming.

Margin Call — Schedule Margin and Schedule Risk

A discussion at the LinkedIn site for the NDIA IPMD regarding schedule margin has raised some good insight and recommendations for this aspect of project planning and execution.  Current guidance from the U.S. Department of Defense for those engaged in the level of intense project management that characterizes the industry has been somewhat vague and open to interpretation.  Some of this, I think, is due to the competing proprietary lexicon from software manufacturers that have been dominant in the industry.

But mostly the change in defining this term is due to positive developments. That is, the change is due to the convergence, garnered from long experience among the various PM disciplines, that allows us to more clearly define and distinguish between schedule margin, schedule buffer, schedule contingency, and schedule reserve. It is also due to the ability of more powerful software generations to actually apply the concept in real planning, without it being a thumb-in-the-air exercise.

Concerning this topic, Yancy Qualls of Bell Helicopter gave an excellent presentation at the January NDIA IPMD meeting in Tucson.  His proposal makes a great deal of sense and, I think, is a good first step toward true integration and a more elegant conceptual solution.  In his proposal, Mr. Qualls clearly defines the scheduling terminology by drawing analogies to similar concepts on the cost side.  This construction certainly overcomes a lot of misconceptions about the purpose and meaning of these terms.  But, I think, his analogies also imply something more significant and it is this:  that there is a common linkage between establishing management reserve and schedule reserve, and there are cost/schedule equivalencies that also apply to margin, buffer, and contingency.

After all, resources must be time-phased, and these are dollarized. But usually the relationship stops there, distinguished by the characteristic being measured: measures of value or measures of timing. That is, the value of the work accomplished against the Performance Management Baseline (PMB) is different from the various measures of progress recorded against the Integrated Master Schedule (IMS). This is why we look at cost and schedule variances on the value of work performed from a cost perspective, and at physical accomplishment against time. These are fundamental concepts.

To date, the most significant proposal advanced to reconcile the two different measures was put forth by Walt Lipke of the Oklahoma City Air Logistics Center, in the method known as earned schedule. But the method hasn't been entirely embraced. Studies have shown that it has its own limitations, but that it is a good supplement to the measures currently in use, not a substitute for them.
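For readers unfamiliar with it, the idea of earned schedule reduces to projecting the current earned value back onto the planned value curve to find when that value was supposed to have been earned. The sketch below uses invented cumulative PV figures and linear interpolation within the period.

```python
# Earned schedule in brief: locate the current EV on the PV curve.
# The cumulative PV figures and the EV reading are hypothetical.
def earned_schedule(pv_cum, ev):
    """pv_cum: cumulative planned value at the end of each period."""
    c = sum(1 for p in pv_cum if p <= ev)   # whole periods fully earned
    if c == 0:
        return ev / pv_cum[0]
    if c == len(pv_cum):
        return float(c)
    prev = pv_cum[c - 1]
    return c + (ev - prev) / (pv_cum[c] - prev)  # interpolate within period

pv_cum = [100, 250, 450, 700, 1000]   # $k planned by the end of each month
ev, at = 390, 4                       # $390k earned by the end of month 4
es = earned_schedule(pv_cum, ev)
print(f"ES = {es:.2f} months, SPI(t) = {es / at:.2f}")
```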

Thus, we are still left with the need to make a strong, logical, and cohesive connection between cost and schedule in our planning. The baseline plans constructed for the IMS and the PMB do not stand apart or, at least, should not. They are instead the end result of a continuum in the construction of our project systems. As such, there should be a tie between cost and schedule that allows us to determine the proper amount of margin, buffer, and contingency in a manner that is consistent across both sub-system artifacts.

This is where risk comes in: the correct assessment of risk at the appropriate level of measurement, given that our measures of performance are computed against different denominators. For schedule margin, in Mr. Qualls' presentation, the mechanism is the Schedule Risk Analysis (SRA). But this then leads us to look at how that would be done.

Fortuitously, during this same meeting, Andrew Uhlig of Raytheon Missile Systems gave an interesting presentation on historical SRA results, building models from such results and using them to inform current projects. What most impressed me in this presentation was his finding that actual results from schedule performance do not conform to any of the usual distribution curves found in the standard models. Instead of normal, triangular, or PERT distributions, what he found is a spike, in which a large percentage of the completions fell exactly on the planned duration. The distribution was thus skewed around the spike, with the late durations–the right tail–much longer than the left.

What is essential about the work of Mr. Uhlig is that, rather than using small samples with their biases, he uses empirical data to inform his analysis. Small samples are a pervasive problem in project management. Mr. Qualls makes this same point in his own presentation, using the example of the Jordan-era Chicago Bulls: each subsequent win, combined with probabilities showing that the team could win all 82 games, does not mean that they will actually perform the feat. In actuality (and in reality) the probability of this occurring is quite small. Glen Alleman at his Herding Cats blog covers this same issue, emphasizing the need for empirical data.

The results of the Uhlig presentation are interesting, not only because they throw into question results obtained using the three common distributions applied in schedule risk analysis under Monte Carlo simulation, but also because they may suggest, in my opinion, an observation or reporting bias. Discrete distribution methods, as Mr. Uhlig proposes, will properly model the distribution for such cases in our parametric analysis. But they will not reflect the quality of the data collected.

Short duration activities are designed to overcome subjectivity through their structure. The shorter the duration, the more discrete the work being measured, and the less likely the occurrence of "gaming" the system. But if we find, as Mr. Uhlig does, that 29% of 20-day activities report exactly 20 days, then there is a need to test the validity of the spike itself. It is not necessarily wrong. Perhaps the structure of the short duration, combined with the discrete nature of the linkage to work, has done its job. One would expect a short tail to the left and a long tail to the right of the spike. But there is also a possibility that variation around the target duration is being judged as "close enough" to warrant a report of completion at day 20.
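What sampling from such a discrete, empirically derived distribution looks like, as opposed to a smooth triangular or PERT curve, is easy to sketch. The bin probabilities below are invented to mimic the reported shape (29% exactly on plan, long right tail); they are not Mr. Uhlig's actual data.

```python
# Sampling from a discrete duration distribution with a spike at the plan.
# Bins and probabilities are invented to mimic the reported shape.
import numpy as np

rng = np.random.default_rng(5)
durations = [18, 19, 20, 21, 22, 25, 30, 40]             # days
probs     = [0.03, 0.08, 0.29, 0.17, 0.14, 0.14, 0.10, 0.05]
samples = rng.choice(durations, size=50_000, p=probs)

print(f"mean: {samples.mean():.1f} days vs plan: 20 days")
print(f"P(exactly on plan): {np.mean(samples == 20):.0%}, "
      f"P(late): {np.mean(samples > 20):.0%}")
```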

So does this pass the "So What?" test? Yes, if only because we know that the combined inertia of all of the work performed at any one time on the project will eventually be realized in the form of a larger amount of risk in proportion to the remaining work. If the reported results are pushing risk to the right because reported performance is optimistic against actual performance, then we will get false positives. If reported performance is pessimistic against actual performance–a less likely scenario, in my opinion–then we will get false negatives.

But regardless of these further inquiries that I think need to be made regarding the linkage between cost and schedule, and the validity of results from SRAs, we now have two positive steps in the right direction in clarifying areas that have perplexed project managers in the past. Properly identifying schedule reserve, margin, buffer, and contingency, combined with properly conducting SRAs using discrete distributions based on actual historical results, will go quite far in allowing us to introduce better predictive measures in project management.

IMPish Grin — The Connection for Technical Measures (and everything else)

Glen Alleman at his Herding Cats blog has posted his presentation on integrating technical performance measures in a cohesive and logical manner with project schedule and cost measurement. Many in the DoD and A&D-focused project community are aware of the work of many of us in this area (my own paper is posted on the College of Performance Management library page here), but the work of Alleman, Coonce, and Price takes these concepts a step further. I wrote an earlier post about the white paper, but the presentation clearly demonstrates the flow of logic in constructing a model in which technical performance is incorporated into the project plan through measures of effectiveness derived from the statement of work. It then makes the connection to measures of progress and measures of performance, clearly outlining the proper integration of the core elements of project planning, execution, and control.

The key artifact that ties the essential elements together–cost, schedule, and technical performance–is the Integrated Master Plan (IMP). The National Defense Industrial Association Integrated Program Management Division's (NDIA IPMD) Planning and Scheduling Excellence Guide (PASEG) (link broken) seems to forget this essential step–the artifact that is necessary to allow for the construction of a valid Integrated Master Schedule (IMS). I think part of the reason for the omission is the mistaken belief that this is an unnecessary artifact–a "nice to have" if the program sponsor remembers to put it in the contract deliverables. This makes its construction negotiable and vulnerable as a discretionary cost item, which it clearly is not–or, at least, should not be. Even the Wikipedia entry is confused in its classification of the IMP, characterizing it first as primarily a DoD-specific artifact, then a contractual artifact, and–oh, by the way–a necessary step in civic and urban planning (also known as construction project management). The PASEG does mention summary schedules, and perhaps in those rare cases, based on the work being performed and the contract type, some stripped-down kind of IMP will do. But regardless of what it is called (and IMP still serves as a good shorthand), the ability to trace measures of effectiveness to measures of progress is still needed in complex project management.

Thus the IMP is this: the fulcrum of integrated project management. When tying the measures of effectiveness to specific tasks related to the WBS, it is the IMP that provides the roadmap to the working, day-to-day tools that will be used to measure progress–cost, schedule, and technical achievement, assessed against plan and all informed by risk. For those of us in the technology community, the discrete, swim-lane-focused apps that do not support this construction are badly out of date.