Technical Foul — It’s Time for TPI in EVM

For more than 40 years the discipline of earned value management (EVM) has gone through a number of changes in its descriptions, governance, and procedures.  During that same time its community has resisted improvements to its methodology, as well as changes that would extend its value by taking into account other methods that either augment its usefulness or potentially provide more utility in the area of performance management.  This has been especially the case where it is suggested that EVM is just one of many methodologies that contribute to this assessment under a more holistic approach.

Instead, it has been asserted that EVM is the basis for integrated project management.  (I disagree–and solely on the evidence that, if it were so, project managers would more fully participate in its organizations and conferences.  This would then pose the problem that PMs might propose changes to EVM that, well…default to the second sentence in this post.)  As evidence it need only be mentioned that there has been resistance to such recent developments as the use of earned schedule, technical performance, and risk–most especially risk based on Bayesian analysis.

Some of this resistance is understandable.  First, it took quite a long time just to reach a consensus on the application of EVM, though its principles and methods are based on simple and well-proven statistical methods.  Second, the industries in which EVM has been accepted are sensitive to risk, and so a bureaucracy of practitioners has grown to ensure both consensus and compliance with accepted methods.  Third, the community of EVM practitioners consists mostly of cost analysts, trained in simple accounting, arithmetic, and statistical methodology.  It is thus a normal human bias to assume that the path of one’s previous success is the way to future success, even though our understanding of the design space (reality) that we inhabit has been enhanced through new knowledge.  Fourth, there is a lot of data that applies to project management, and the EVM community is only now learning of the ways that this other data impacts our understanding of measuring project performance and the probability of reaching project goals in rolling out a product.  Finally, there is the less defensible reason that a lot of people and firms have built careers that depend on maintaining the status quo.

Our ability to integrate disparate datasets is accelerating every year thanks to digital technology, and the day when all relevant factors in project and enterprise performance are integrated is inevitable.  To be frank, I am personally engaged in such projects and am assisting organizations in moving in this direction today.  Regardless, we can advance the discipline of performance management now by pulling down low-hanging fruit.  The most reachable one, in my opinion, is technical performance measurement.

The literature of technical performance has come quite a long way, thanks largely to the work of the Institute for Defense Analyses (IDA) and others, particularly the National Defense Industrial Association through the publication of its predictive measures guide.  This has been a topic of interest to me since its study was part of my duties back when I was still wearing a uniform.  Those early studies resulted in a paper that proposed a method of integrating technical performance, earned value, and risk.  A pretty comprehensive overview of the literature and guidance for technical performance can be found in this presentation by Glen Alleman and Tom Coonce given at EVM World in 2015.  It must be mentioned that Rick Price of Lockheed Martin also contributed greatly to this literature.

Keep in mind what is meant when we decide to assess technical performance within the context of R&D.  It is an assessment against expected or specified:

a.  Measures of Effectiveness (MoE)

b.  Measures of Performance (MoP), and

c.  Key Performance Parameters (KPP)

The opposition from the project management community to widespread application of this methodology took three forms.  First, it was argued, the method used to adjust the value earned (and thus the CPI) seemed always to have a negative impact.  Second, there are technical performance factors that transcend the WBS, and so it is hard to properly adjust the individual control accounts based on the contribution of technical performance.  Third, some performance measures defy assessment of value in a time-phased manner.  The most common example has been tracking the weight of an aircraft, which has contributors from virtually every component that goes into it.

Let’s take these in order.  But lest one think that this perspective is merely an artifact from 1997: just a short while ago, in the A&D community, the EVM policy office at DoD attempted to apply a somewhat modest proposal to ensure that technical performance was included as an element in EVM reporting.  Note that the EIA-748 standard states this clearly and has done so for quite some time.  Regardless, the same three core objections were raised in comments from industry.  This caused me to ask some further in-depth questions, and my revised perspective follows below.

The first condition occurred, in many cases, due to optimism bias in registering earned value, which often happens when a limited population of experts contributes a single point estimate of percent complete for the element.  Fair enough, but as you can imagine, it’s not a message that a PM wants to hear or will necessarily accept or admit, regardless of the merits.  There are more than enough pathways for second-guessing and testing selection bias at other levels of reporting.  Glen Alleman, in his Herding Cats post of 12 August, lists the systemic reasons for program failure very well.

Another factor is that the initial methodology did skew toward more pessimistic results.  This was not entirely apparent at the time because the statistical methods applied did not make it clear.  But, to critique that first proposal, which was the result of contributions from IDA and other systems engineering technical experts, the 10-50-90 method of assessing probability along the bandwidth of the technical performance baseline was too inflexible.  The graphic that we proposed is as follows, and one can see that, while it was “good enough,” when rolled up there could be some bias that required adjustment.

TPM Graphic

 

Note that this range around 50% can be interpreted as equivalent to the bandwidth found in the presentation given by Alleman and Coonce (as well as the Predictive Measures Guide), though the intent here was to perform an assessment based on a simplified means of handicapping the handicappers–or, more accurately, performing a probabilistic assessment of expert opinion.  The method of performing Bayesian analysis to achieve this had not yet matured for such applications, and so we proposed a simple approach that our practitioners could understand and that still met the criteria of being a valid approach.  The reason for the difference in the graphic resides in the fact that the original assessment did not view this time-phasing as a continuous process, but rather as an assessment at critical points along the technical baseline.

From a practical perspective, however, the banding proposed by Alleman and Coonce takes into account the noise that will be experienced during the life cycle of development, and so solves the slight skewing toward pessimism.  We’ll leave aside for the moment how we determine the bands and, thus, acceptable noise as we track along our technical baseline.
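To make the mechanics concrete, here is a minimal sketch of how a three-point (10-50-90) expert assessment of a single TPM might be collapsed into a probability-weighted estimate and checked against a tolerance band around the technical baseline.  The 0.3/0.4/0.3 weights follow Swanson’s rule, and the band width and planned value are invented for illustration; neither is the exact scheme from the 1997 study or the Alleman and Coonce banding.

```python
# Illustrative sketch only: collapse a 10-50-90 expert assessment of a TPM into a
# single risk-weighted estimate and check it against a tolerance band around plan.
# The 0.3/0.4/0.3 weights follow Swanson's rule; the band width and the planned
# value are hypothetical, not the banding from the cited presentation.

def expected_tpm(p10: float, p50: float, p90: float) -> float:
    """Probability-weighted estimate from a three-point (10-50-90) assessment."""
    return 0.3 * p10 + 0.4 * p50 + 0.3 * p90

def within_band(measured: float, planned: float, band: float = 0.05) -> bool:
    """Treat deviation within +/- 5% of plan as acceptable noise."""
    return abs(measured - planned) <= band * planned

estimate = expected_tpm(p10=92.0, p50=97.0, p90=101.0)
status = "in band" if within_band(estimate, planned=100.0) else "out of band"
print(f"risk-weighted estimate: {estimate:.1f} ({status})")
```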

The second objection is valid only insofar as the alignment of work-related indicators varies from project to project.  For example, some legs of the WBS tree go down nine levels and others go down five, based on the complexity of the work and the organizational breakdown structure (OBS).  Thus, where we peg the control account (CA) and work package (WP) levels within each leg of the tree is relative.  Do the schedule activities have a one-to-one or many-to-one relationship with the WP level in all legs?  Or is the CA level the lowest level at which the alignment can be made in certain legs?

Given that planning begins with the contract spec and (ideally) proceeds from IMP –> IMS –> WBS –> PMB in continuity, we will be able to determine the contributions of TPMs to each WBS element at the appropriate level.

This then leads us to another objection, which is that not all organizations bother with developing an IMP.  That is a topic for another day, but whether such an artifact is created formally or not, one must achieve in practice the purpose of the IMP in order to get from contract spec to IMS under a sufficiently complex effort to warrant CPM scheduling and EVM.

The third objection is really a child of the second.  There very well may be TPMs, such as weight, with so many contributors that distributing the impact would both dilute the visibility of the TPM and introduce a level of arbitrariness in distribution that would render its tracking useless.  (Note that I am not saying that the impact cannot be distributed–given modern software applications, this can easily be done in an automated fashion after configuration.  My concern is the loss of visibility into a TPM that could render the program a failure.)  In these cases, as with other indicators that must be tracked, there will be high-level programmatic or contract-level TPMs.

So where do we go from here?  Alleman and Coonce suggest adjusting the formula for BCWP, where the P is informed by technical risk.  The predictive measures guide takes a similar approach and emphasizes the systems engineering (SE) domain in determining the impact of reported EVM element performance.  The recommendation of the 1997 project that I headed, in assignments across Navy and OSD, was to inform performance based on a risk assessment of probable achievement at each discrete performance milestone.  What all of these studies have in common, and share with standard industry practice using SE principles, is an intermediate assessment, informed by risk, of a technical performance index against a technical performance baseline.
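As a rough illustration of that common thread, the sketch below scales the value credited for a WBS element by a technical performance index (demonstrated over planned achievement at the milestone).  The function names and numbers are hypothetical, and this is not the precise adjustment proposed in any of the studies cited; it simply shows the direction of the logic.

```python
# Illustrative only: one way to let a technical performance index (TPI) inform the
# value credited to a WBS element. This is not the exact adjustment proposed by
# Alleman/Coonce, the predictive measures guide, or the 1997 study; the function
# names and the numbers below are hypothetical.

def technical_performance_index(demonstrated: float, planned: float) -> float:
    """Ratio of demonstrated to planned technical achievement at a milestone."""
    return demonstrated / planned

def risk_informed_bcwp(reported_bcwp: float, tpi: float) -> float:
    """Scale reported BCWP by the TPI, capped so credited value never exceeds plan."""
    return reported_bcwp * min(tpi, 1.0)

tpi = technical_performance_index(demonstrated=0.88, planned=1.00)
credited = risk_informed_bcwp(reported_bcwp=250_000.0, tpi=tpi)
print(f"credited {credited:,.0f} of the 250,000 reported")  # credited 220,000
```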

So let’s explore this part of the equation more fully.

Given that MoEs, MoPs, and KPPs are identified for the project, different methods of determining progress apply.  These can be very simple TPMs that, through the acquisition or fabrication of compliant materials, meet contractual requirements.  These are contract-level TPMs.  Depending on contract type, achievement of these KPPs may result in either financial penalties or financial rewards.  Then there are the R&D-dependent MoEs, MoPs, and KPPs that require more discrete time-phasing and ties to the physical completion of work documented through the WBS structure.  As with EVM’s measurement of the value of work, our index of physical technical achievement can be determined through various methods: current EVM methods, simulated (Monte Carlo) technical risk, 10-50-90 risk assessment, Bayesian analysis, etc.  All of these methods are designed to militate against selection bias and the inherent limitations of small sample sizes and, hence, extreme subjectivity.  Still, expert opinion is a valid method of assessment and (in cases where it works) better than a WAG or a coin flip.
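For the Monte Carlo flavor of that list, a simple sketch follows: it estimates the probability that a hypothetical not-to-exceed TPM (dry weight) is met at the next milestone, given an assumed distribution of current engineering estimates.  The distribution and its parameters are invented; a real assessment would derive them from the program’s engineering data.

```python
# Sketch of a simulated (Monte Carlo) technical risk assessment: the probability that
# a hypothetical not-to-exceed TPM (dry weight in kg) is met at the next milestone.
# The normal distribution and its parameters are invented for illustration; a real
# assessment would derive them from the program's engineering data.
import random

def probability_under_limit(mean: float, stdev: float, limit: float,
                            trials: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.gauss(mean, stdev) <= limit)
    return hits / trials

print(probability_under_limit(mean=1_480.0, stdev=40.0, limit=1_500.0))  # ~0.69
```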

Taken together, these TPMs can be used to determine the technical achievement of the project or program over time, with a financial assessment of the future work needed to bring it in line.  These elements can be weighted, as suggested by Coonce, Alleman, and Price, through an assessment of relative risk to project success.  Some of these TPIs will apply to particular WBS elements at various levels (since their efforts are tied to specific activities and schedules via the IMS), while the most important TPMs are reflected at the project and program level.
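A toy example of such a weighting follows: individual TPIs rolled up by relative risk weight into a program-level index that can be read alongside CPI and SPI.  The TPMs, weights, and index values are hypothetical, and the weighting scheme in the Coonce, Alleman, and Price material may differ in its details.

```python
# Toy example of a risk-weighted roll-up of individual TPIs into a program-level
# index that can be read alongside CPI and SPI. The TPMs, weights, and index values
# are hypothetical; the weighting suggested by Coonce, Alleman, and Price may differ.

tpms = {
    # name: (tpi, relative_risk_weight)
    "dry_weight":        (0.94, 0.5),
    "range":             (1.02, 0.3),
    "sensor_resolution": (0.88, 0.2),
}

total_weight = sum(weight for _, weight in tpms.values())
aggregate_tpi = sum(tpi * weight for tpi, weight in tpms.values()) / total_weight

cpi, spi = 1.03, 0.97  # reported cost and schedule indices for the same period
print(f"aggregate TPI = {aggregate_tpi:.2f} vs CPI = {cpi:.2f}, SPI = {spi:.2f}")
# A healthy CPI/SPI paired with a lagging aggregate TPI is exactly the warning
# that cost and schedule reporting alone can hide.
```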

What about double counting?  A comparison of the aggregate TPIs and the aggregate CPI and SPI will determine the fidelity of the WBS to technical achievement.  Furthermore, a proper baseline review will ensure that double counting doesn’t occur.  If the element can be accounted for within the reported EVM elements, then it need not be tracked separately by a TPI.  Only those TPMs that cannot be distributed or that represent such overarching risk to project success need be tracked separately, with an overall project assessment made against MR or any reprogramming budget available that can bring the project back into spec.

My last post on project management concerned the practices at what was called Google X.  There, incentives are given to teams that identify an unacceptably high level of technical risk that will fail to pay off within the anticipated planning horizon.  If the A&D and DoD community is to become more nimble in R&D, it needs the tools to apply such long-established concepts as Cost-As-An-Independent-Variable (CAIV) and Agile methods (without falling into the bottomless pit of unsupported assertions by the cult, such as the elimination of estimating and performance tracking).

Even with EVM, the project and program management community needs a feel for where their major programmatic efforts are in terms of delivery and deployment, looking at the entire logistics and life-cycle system.  The TPI can be the logic check on whether to push ahead, to finish the low-risk items remaining in R&D and move to first item delivery, or to take the lessons learned from the effort, terminate the project, and incorporate those elements into the next-generation project or related components or systems.  This aligns with the concept of project alignment with framing assumptions as an early indicator of continued project investment at the corporate level.

No doubt, existing information systems, many built using 1990s technology and limited to line-and-staff functionality, do not provide the ability to do this today.  Of course, these same systems do not take into account a whole plethora of essential information regarding contract and financial management: the tracking of CLINs/SLINs, work authorization and change order processing, the flow of funding from TAB to PMB/MR and from PMB to CA/UB/PP, contract incentive threshold planning, and the list goes on.  What this argues for is innovation, and rewarding those technology solutions that take a more holistic approach to project management within its domain as a subset of program, contract, and corporate management–and that do so without some esoteric promise of results at some point in the future after millions of dollars of consulting, design, and coding.  The first company or organization that does this will reap the rewards.

Furthermore, visibility equals action.  Diluting essential TPMs within an overarching set of performance metrics may have the effect of hiding them and failing to properly identify, classify, and handle risk.  Including TPI as an element at the appropriate level will provide necessary visibility to get to the meat of those elements that directly impact programmatic framing assumptions.

River Deep, Mountain High — A Matrix of Project Data

Been attending conferences and meetings of late and came upon a discussion of the means of reducing data streams while leveraging Moore’s Law to provide more, better data.  During a discussion with colleagues over lunch, they asked whether requesting more detailed data would provide greater insight.  This led to a discussion of the qualitative differences in data depending on what information is being sought.  My response to the request for more detailed data was: “well, there has to be a pony in there somewhere.”  This was greeted by laughter, but then I finished the point: more detailed data doesn’t necessarily yield greater insight (though it could, and only actually looking at it will tell you that, particularly in applying the principle of KDD).  But more detailed data that is based on a hierarchical structure will, at the least, provide greater reliability and pinpoint areas of intersection, detecting risk manifestations that are otherwise averaged out–and therefore hidden–at the summary levels.

Not to steal the thunder of new studies that are due out in the area of data later this spring, but, for example, I am aware, having actually achieved lowest-level integration for extremely complex projects through my day job, that there is little (though not zero) insight gained in predictive power between, say, the control account level of a WBS and the work package level.  Going further down to element of cost may, in the words of the character in the movie Still Alice, “fall into the great academic tradition of knowing more and more about less and less until we know everything about nothing.”  But while that may be true for project management, it isn’t necessarily so when collecting parametrics and auditing the validity of financial information.

Rolling up data from individually detailed elements of a hierarchy is the proper way to ensure credibility.  Since we are at the point where a TB of data has virtually the same marginal cost as a GB of data (which is vanishingly small to begin with), the more the merrier in eliminating the abuse associated with human-readable summary reporting.  Furthermore, I have long proposed, through this blog and elsewhere, that the emphasis should shift away from people, process, and tools to people, process, and data.  This rightly establishes the feedback loop necessary for proper development and project management.  More importantly, the same data available through project management processes satisfies the different purposes of domains both within the organization and of multiple external stakeholders.

This then leads us to the concept of integrated project management (IPM), which has become little more than a buzz-phrase and receives a lot of hand-waving, mostly from technology companies that want to push their tools–which are quickly becoming obsolete–while appearing forward-leaning.  This tool-centric approach is nothing more than marketing–focusing on what the software manufacturer would have us believe is important based on the functionality baked into their applications.  One can see where this could be a successful approach, given the emphasis on tools in the PM triad.  But, of course, it is self-limiting in a self-interested sort of way.  The emphasis needs to be on the qualitative and informative attributes of available data–not on tool functionality–that meet the requirements of different data consumers while minimizing, to the extent possible, the number of data streams.

Thus, there are at least two main aspects of data that are important in understanding the utility of project management: early warning/predictiveness and credibility/traceability/fidelity.  The chart attached below gives a rough back-of-the-envelope outline of this point, with some proposed elements, though this list is not intended to be exhaustive.

PM Data Matrix

In order to capture data across the essential elements of project management, our data must demonstrate both a breadth and depth that allows for the discovery of intersections of the different elements.  The weakness in the two-dimensional model above is that it treats each indicator by itself.  But, when we combine, for example, IMS consecutive slips with other elements listed, the informational power of the data becomes many times greater.  This tells us that the weakness in our present systems is that we treat the data as a continuity between autonomous elements.  But we know that the project consists of discontinuities where the next level of achievement/progress is a function of risk.  Thus, when we talk about IPM, the secret is in focusing on data that informs us what our systems are doing.  This will require more sophisticated types of modeling.
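As a trivial illustration of the point about intersections, the sketch below combines a few of the indicators from the matrix into joint flags that none of them would raise on their own.  The thresholds and field names are invented for the example.

```python
# Trivial illustration of intersections: combine a few indicators from the matrix
# into joint flags that no single indicator would raise on its own. The thresholds
# and field names are invented for the example.

def risk_flags(element):
    flags = []
    if element["ims_consecutive_slips"] >= 3 and element["spi"] < 0.95:
        flags.append("schedule erosion confirmed by two independent indicators")
    if element["tpi"] < 0.90 and element["cpi"] >= 1.00:
        flags.append("cost looks healthy while technical achievement lags")
    return flags

wbs_element = {"ims_consecutive_slips": 4, "spi": 0.93, "cpi": 1.05, "tpi": 0.86}
print(risk_flags(wbs_element))
```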

Technical Ecstasy — Technical Performance and Earned Value

As many of my colleagues in project management know, I wrote a series of articles on the application of technical performance risk in project management back in 1997, one of which made me an award recipient from the institution now known as Defense Acquisition University.  Over the years various researchers and project organizations have asked me if I have any additional thoughts on the subject and the response up until now has been: no.  From a practical standpoint, other responsibilities took me away from the domain of determining the best way of recording technical achievement in complex projects.  Furthermore, I felt that the field was not ripe for further development until there were mathematics and statistical methods that could better approach the behavior of complex adaptive systems.

But now, after almost 20 years, there is an issue that has been nagging at me since publication of the results of the project studies that I led from 1995 through 1997.  It is this: the complaint from project managers, in resisting the application of measuring technical achievement of any kind and integrating it with cost performance, that the best anyone can do is 100%.  “All TPM can do is make my performance look worse,” was the complaint.  One would think this observation would not face opposition, especially from such an engineering-dependent industry, since, at least in this universe, the best you can do is 100%.*  But, of course, we weren’t talking about the same thing, and I have heard this refrain again at recent conferences and meetings.

To be honest, in our recommended solution in 1997, we did not take things as far as we could have.  It was always intended to be the first but not the last word regarding this issue.  And there have been some interesting things published about this issue recently, which I noted in this post.

In the discipline of project management in general, and among earned value practitioners in particular, the performance being measured oftentimes exceeds 100%.  But there is the difference.  What is being measured as exceeding 100% is progress against a time-based and fiscally-based linear plan.  Most of the physical world doesn’t act this way, nor can it be measured this way.  When measuring the attributes of a system or component against a set of physical or performance thresholds, linearity against a human-imposed plan oftentimes goes out the window.

But a linear progression can be imposed on the development toward the technical specification.  So the next question is how we measure progress along the development curve and duration.

The short answer, without repeating a summary of the research (which is linked above), is through risk assessment, and the method that we used back in 1997 was a distribution curve that determined the probability of reaching the next step in the technical development.  This was based on well-proven systems engineering techniques that had been used in industry for many years, particularly at Martin Marietta before it became part of Lockheed Martin.  Technical risk assessment, even using simplistic 0-50-80-100 curves, provides a good approximation of probability and risk between each increment of development, though now there are more robust models.  One example is the use of Bayesian methodology, which introduces mathematical rigor into statistics, as outlined in this post by Eliezer Yudkowsky.  (As an aside, I strongly recommend his blogs for anyone interested in the cutting edge of rational inquiry and AI.)
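For readers unfamiliar with the Bayesian mechanics being referenced, here is a minimal sketch of the idea in this context: updating the probability that a component meets its next technical increment as test evidence arrives.  The prior and the likelihoods are hypothetical placeholders, not values from the 1997 study.

```python
# Minimal sketch of the Bayesian idea in this context: update the probability that a
# component meets its next technical increment as test evidence arrives. The prior
# and the likelihoods below are hypothetical placeholders, not values from the study.

def bayes_update(prior: float, p_evidence_if_meet: float, p_evidence_if_miss: float) -> float:
    """Posterior P(meet spec | evidence) from Bayes' theorem."""
    numerator = p_evidence_if_meet * prior
    return numerator / (numerator + p_evidence_if_miss * (1.0 - prior))

prior = 0.70     # engineering judgment before the test
posterior = bayes_update(prior, p_evidence_if_meet=0.80, p_evidence_if_miss=0.30)
print(round(posterior, 2))  # ~0.86 after a passing partial test
```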

So technical measurement is pretty well proven.  But the issue that then presents itself (and presented itself in 1997) is how to derive value from technical performance.  Value is a horse of a different color.  The two bugaboos that were presented as impassable roadblocks were weight and test failure.

Let’s take weight first.  On one of my recent trips I found myself seated in an Embraer E-jet.  These are fairly small aircraft, especially compared to conventional commercial aircraft, and are lightweight.  As such, they rely on a proper distribution and balance of weight, especially if one finds oneself at 5,000 feet above sea level with the long runway shut down, a 10-20 mph crosswind, and a mountain range rising above the valley floor in the direction of takeoff.  So the flight crew, when the cockpit noted a weight disparity, shifted baggage from belly stowage to the overhead compartments in the main cabin.  What was apparent is that weight is not an ad hoc measurement.  The aircraft’s weight distribution and tolerances are documented–and can be monitored as part of operations.

When engineering an aircraft, each component is assigned its weight.  Needless to say, weight is then allocated and measured as part of the development of the aircraft’s subsystems.  One would not measure the overall weight of the aircraft or end item without ensuring that the components and subsystems conform to their weight limitations.  The overall weight limitation of an aircraft will vary depending on mission and use.  For a commercial-type passenger airplane built to take off and land from modern runways, weight limitations are not as rigorous.  If the aircraft in question is going to take off and land from a carrier deck at sea, then weight limitations become more critical.  (Side note:  I also learned these principles in detail while serving on active duty at NAS Norfolk and working with the Navy Air Depot there.)  Aside from aircraft, weight is important in a host of other items–from laptops to ships.  In the latter case, with which I am also intimately familiar, weight is important in balancing the ship and its ability to make way in the water (and perform its other missions).

So given that weight is an allocated element of performance within subsystem or component development, we achieve several useful bits of information.  First, we can aggregate and measure the weight of the entire end item to track whether we are meeting the limitations of the item.  Second, we can perform trade-offs.  If a subsystem or component can be made with a lighter material or more efficiently weight-wise, then we have more leeway (maybe) somewhere else.  Conversely, if we need weight for balance and the component or subsystem is too light, we need to figure out how to add weight or ballast.  So measuring and recording weight is not a problem.  Finally, we allocate and tie a key technical specification to the work in performance terms, avoiding subjectivity.
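A small sketch of what that allocation looks like in practice follows: subsystem weights rolled up against their allocations, with the remaining margin to the not-to-exceed value available for trade-offs.  The subsystems and numbers are invented for illustration.

```python
# Sketch of allocated weight as a TPM: roll up subsystem weights against their
# allocations and report the remaining margin for trade-offs. The subsystems,
# allocations, and not-to-exceed value are invented for illustration.

allocations_kg = {"airframe": 820.0, "propulsion": 410.0, "avionics": 95.0, "payload": 175.0}
current_kg     = {"airframe": 805.0, "propulsion": 428.0, "avionics": 90.0, "payload": 175.0}
not_to_exceed_kg = 1_520.0

for name, allocation in allocations_kg.items():
    delta = current_kg[name] - allocation
    status = "over" if delta > 0 else "under"
    print(f"{name:11s} {status} allocation by {abs(delta):5.1f} kg")

total = sum(current_kg.values())
print(f"end item total {total:.1f} kg, margin to not-to-exceed {not_to_exceed_kg - total:.1f} kg")
```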

So how do we show value?  We do so by applying the same principles as any other method of earned value.  Each item of work is covered by a Work Breakdown Structure (WBS), which is tied (hopefully) to an Integrated Master Schedule (IMS).  A Performance Management Baseline (PMB) is applied to the WBS (or sometimes through a resource-loaded IMS).  If we have properly constructed our Integrated Master Plan (IMP) prior to the IMS, we should have clearly tied the technical measures to the structure.  I acknowledge that not every program produces an IMP, but stating so is really an acknowledgement of a clear deficiency in our systems, especially involving complex R&D programs.  Since our work is measured in short increments against a PMB, we can claim no more than 100% of a technical specification yet still be ahead of plan for the WBS elements involved.

It’s not as if the engineers in our industrial activities and aerospace companies have never designed a jet aircraft or some other item before.  Quite a bit of expertise and engineering know-how transfers from one program to the next.  There is a learning curve.  The more information we collect in that regard, the more effective that curve.  Hence my emphasis in recent posts on data.

For testing, the approach is the same.  A test can fail, that is, a rocket can explode on the pad or suffer some other mishap, but the components involved will succeed or fail based on the after-action report.  At that point we will know, through allocation of the test results, where we are in terms of technical performance.  While rocket science is involved in the item’s development, recording technical achievement is not rocket science.

Thus, while our measures of effectiveness, measures of performance, measures of progress, and technical performance will determine our actual achievement against a standard, our fiscal assessment of value against the PMB can still reflect whether we are ahead of schedule and below budget.  What it takes is an understanding of how to allocate more rigorous measures to the WBS that are directly tied to the technical specifications.  To do otherwise is to build a camel when a horse was expected or–as has been recorded in real life in previous programs–to build a satellite that cannot communicate, a Navy aircraft that cannot land on a carrier deck, a ship that cannot fight, and a vaccine that cannot be delivered and administered in the method required.  We learn from our failures, and that is the value of failure.

 

*There are colloquial expressions that allow for 100% to be exceeded, such as exceeding 100% of the tolerance of a manufactured item or system, which essentially means to exceed its limits and, therefore, to break it.

Days of Future Passed — Legacy Data and Project Parametrics

I’ve had a lot of discussions lately on data normalization, including being asked what constitutes normalization when dealing with legacy data, specifically in the field of project management.  A good primer can be found at About.com, but there are also very good older papers out on the web from various university IS departments.  The basic principles of data normalization today consist of finding a common location in the database for each value, reducing redundancy, properly establishing relationships among the data elements, and providing flexibility so that the data can be properly retrieved and further processed into intelligence in such a way that the objects produced possess significance.
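To ground those principles, here is a minimal sketch of normalizing one row of a flat legacy extract into relational tables, so that repeated values (project, WBS element, period) live in one place and each measurement is stored once.  The table and column names are hypothetical, not a reference schema.

```python
# Minimal sketch of normalization applied to legacy project data: pull repeated values
# (project, WBS element, period) out of a flat monthly extract into their own tables,
# leaving one row per observed measurement. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project     (project_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE wbs_element (wbs_id INTEGER PRIMARY KEY,
                          project_id INTEGER REFERENCES project(project_id),
                          code TEXT, UNIQUE(project_id, code));
CREATE TABLE measurement (wbs_id INTEGER REFERENCES wbs_element(wbs_id),
                          period TEXT, metric TEXT, value REAL,
                          UNIQUE(wbs_id, period, metric));
""")

# One row of a flat legacy extract, repeated identifiers and all.
flat_row = {"project": "ACME-12", "wbs": "1.2.3", "period": "2015-06",
            "bcws": 120_000.0, "bcwp": 112_000.0, "acwp": 118_500.0}

conn.execute("INSERT OR IGNORE INTO project(name) VALUES (?)", (flat_row["project"],))
project_id = conn.execute("SELECT project_id FROM project WHERE name = ?",
                          (flat_row["project"],)).fetchone()[0]
conn.execute("INSERT OR IGNORE INTO wbs_element(project_id, code) VALUES (?, ?)",
             (project_id, flat_row["wbs"]))
wbs_id = conn.execute("SELECT wbs_id FROM wbs_element WHERE project_id = ? AND code = ?",
                      (project_id, flat_row["wbs"])).fetchone()[0]
for metric in ("bcws", "bcwp", "acwp"):
    conn.execute("INSERT INTO measurement VALUES (?, ?, ?, ?)",
                 (wbs_id, flat_row["period"], metric, flat_row[metric]))

print(conn.execute("SELECT metric, value FROM measurement WHERE wbs_id = ?",
                   (wbs_id,)).fetchall())
```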

The reason why answering this question is so important is that our legacy data is of such a size and complexity that it falls into the broad category of Big Data.  The condition of the data itself varies widely in quality and completeness.  Without understanding the context, interrelationships, and significance of the elements of the data, the empirical approach to project management is threatened, since our ability to use this data to establish trends and perform parametric analysis is limited.

A good paper that deals with this issue was authored by Alleman and Coonce, though it was limited to Earned Value Management (EVM).  I would argue that EVM, especially in the types of industries in which the discipline is used, is pretty well structured already.  The challenge is in the other areas that are probably of more significance in getting a fuller understanding of what is happening in the project: schedule, risk, and technical performance measures.

In looking at the Big Data that has been normalized to date–and I have participated with others in putting a significant dent in this area–it is apparent that processes in these other areas lack discipline, consistency, completeness, and veracity.  By normalizing data in sub-specialties that have experienced an erosion in enforcing standards of quality and consistency, technology becomes a driver for process improvement.

A greybeard in IT project management once said to me (and I am not long in joining that category): “Data is like water; the more it flows downstream, the cleaner it becomes.”  What he meant is that the more data is exposed in the organizational stream, the more it is questioned and becomes part of our closed feedback loop: constantly being queried, verified, utilized in decision making, and validated against reality.  Over time, more sophisticated and reliable statistical methods can be applied to the data, especially performance data of one sort or another, methods that take periodic volatility into account in trending and provide us with a means of ensuring credibility in using the data.
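One simple version of such a method, offered only as a sketch: a trailing mean with a volatility band, so that routine period-to-period noise is absorbed while genuine drift in an index stands out.  The monthly CPI values below are invented.

```python
# Sketch only: a trailing mean with a volatility band applied to a performance index,
# so routine period-to-period noise is absorbed while genuine drift stands out.
# The monthly CPI values are invented for illustration.
from statistics import mean, stdev

cpi_series = [1.02, 1.01, 0.99, 1.00, 0.97, 0.96, 0.94, 0.95, 0.92]

def trend_flags(series, window=4, k=2.0):
    flags = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]            # prior observations only
        m, s = mean(trailing), stdev(trailing)
        inside = (m - k * s) <= series[i] <= (m + k * s)
        flags.append((i, series[i], m, "noise" if inside else "drift"))
    return flags

for period, value, trailing_mean, flag in trend_flags(cpi_series):
    print(f"period {period}: value {value:.2f}, trailing mean {trailing_mean:.3f} -> {flag}")
```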

In my last post on Four Trends in Project Management, I posited that the question wasn’t more or less data but utilization of data in a more effective manner, and identifying what is significant and therefore “better” data.  I recently heard this line repeated back to me as a means of arguing against providing data.  This conclusion was a misreading of what I was proposing.  One level of reporting data in today’s environment is no more work than reporting on any other particular level of a project hierarchy.  So cost is no longer a valid point for objecting to data submission (unless, of course, the one taking that position must admit to the deficiencies in their IT systems or the unreliability of their data).

Our projects must be measured against the framing assumptions in which they were first formed, as well as the established measures of effectiveness, measures of performance, and measures of technical achievement.  In order to view these factors one must have access to data originating from a variety of artifacts: the Integrated Master Schedule, the Schedule and Cost Risk Analysis, and the systems engineering/technical performance plan.  I would propose that project financial execution metrics are also essential in getting a complete, integrated, view of our projects.

There may be other supplemental data that is necessary as well.  For example, the NDIA Integrated Program Management Division has a proposed revision to what is known as the Integrated Baseline Review (IBR).  For the uninitiated, this is a process in which both the supplier and government customer project teams can come together, review the essential project artifacts that underlie project planning and execution, and gain a full understanding of the project baseline.  The reporting systems that identify the data that is to be reported against the baseline are identified and verified at this review.  But there are also artifacts submitted here that contain data that is relevant to the project and worthy of continuing assessment, precluding manual assessments and reviews down the line.

We don’t yet know the answer to these data issues and won’t until all of the data is normalized and analyzed.  Then the wheat can be separated from the chaff, and a more precise set of data can be identified for submittal, normalized, and placed in an analytical framework to give us timely information, so that project stakeholders can make decisions in handling any risks that manifest themselves during the window in which they can be handled (or make the determination that they cannot be handled).  As the farmer says in the Chinese proverb:  “We shall see.”

Over at AITS.org — Red Queen Race: Why Fast Tracking a Project is Not in Your Control

“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else—if you run very fast for a long time, as we’ve been doing.”

“A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” –Through the Looking-Glass and What Alice Found There, Chapter 2, Lewis Carroll

There have been a number of high profile examples in the news over the last two years concerning project management.  For example, the initial rollout of the Affordable Care Act marketplace web portal was one of these, and the causes for its faults are still being discussed. As I write this, an article in the New York Times indicates that the fast track efforts to create an Ebola vaccine are faltering…

To read the remainder of this post please go to this link.

No Bucks, No Buck Rogers — Project Work Authorizations, Change Control, and Cash Flow

As I’ve written here most recently, the most significant proposal coming out of the Integrated Program Management Conference (IPMC) this year was the comprehensive manner of integrating all essential elements of a project, presented by Glen Alleman et al.  In their presentation, Alleman, Coonce, and Price present a process flow (which, in my estimation, should be mirrored in data and information flow) in which program artifacts are imbued with measures of effectiveness, measures of performance, and measures of progress, to achieve an organic integration of all parts of the project that allows the project team to make a valid assessment of achievement against the plan, informed by risk and opportunity.  (Emphasis my own.)  The three-legged stool of cost, schedule, and technical performance is thereby integrated properly at the appropriate level of the project structure, and in such a way as to overcome the rigidity and fallacy of the single point estimate.

But, as is always the case with elegant models, while they replicate a sufficient portion of reality to allow us to make our assessments using statistical methods, there are other elements that we have purposely left out because our present models do not incorporate them into the normal and normative process.  They are considered situational, and so lie just outside of the process flow, though they insert themselves when necessary–and much more frequently than desired.  I am referring to the availability of money and resources, and the manner in which they affect the project: through work authorization documents (WADs) and baseline change requests (BCRs).

I have seen situations where fully 90% of the effort in project management is devoted to managing and adjusting the plan based on baseline changes.  This is particularly the case where estimates are poorly developed under the excuse of uncertainty.  Of course there is uncertainty–that’s the purpose of developing a plan.  The issue isn’t the presence of risk (and opportunity) but that our risks are educated ones, that is, informed by familiarity with similar efforts, engineering assessment, core competency, and other empirical factors.  This is where the most radical elements of the Agile Cult get it wrong–in focusing on risk and assuming that the only way to realize opportunity is to forgo the empirical process.  This is not only a misreading of risk and opportunity assessment in project management, it is a sort of neo-Luddite position regarding scientific management.

The environment in which a project operates undergoes change.  The framing assumptions of the project determine the expectations of scope, cost, and what defines success.  The concept of framing assumptions was fully developed in a RAND study that I covered in a previous blog post.  Most often, but not always, the change in framing assumptions is reflected in the WAD and BCR process, usually in the latter.  Thus, we have a means of determining and taking account of changes in framing assumptions within the normal process of project management, as opposed to the more obvious examples of a complete replan or over target baseline (OTB).

So where in our processes do we track WADs and BCRs in a way that provides sufficient indicators, within our measures of effectiveness, performance, and progress, that our resources (in both size and type) may not be sufficient, or that these changes are significant enough that our framing assumptions have changed?  I would argue that the linkage for resources must also be made through the Integrated Master Plan (IMP) and be reflected in the IMS, cross-referenced to the PMB.  Technology can provide the remainder of the ability to integrate these elements and provide the process flow necessary for early warning.  This integration goes beyond the traditional focus on cost and schedule (and the newly reintroduced emphasis on technical achievement).  It involves integration with resource management systems (personnel, skill-set assignments, etc.) as well as financial management systems to determine the availability of money (both its sufficiency and “color”*) applied to the right place at the right time.

Integrating these elements together then allows for more sophisticated methods of determining project success through the introduction of metrics that provide correlations between the elements.  It also answers, absent politics, the optimum level of both analysis and reporting.

*The “color” of money applies mostly to public investments, in which monies appropriated are designated by their purpose:  operations, maintenance, engineering, R&D, etc.

Note: This post was modified to add a point of clarification in applying WADs and BCRs to the PMB.

IMPish Grin — The Connection for Technical Measures (and everything else)

Glen Alleman at his Herding Cats blog has posted his presentation on integrating technical performance measures in a cohesive and logical manner with project schedule and cost measurement.  Many in the DoD and A&D-focused project community are aware of the work of many of us in this area (my own paper is posted on the College of Performance Management library page here), but the work of Alleman, Coonce, and Price takes these concepts a step further.  I wrote an earlier post about the white paper, but the presentation clearly demonstrates the flow of logic not only in constructing a model in which technical performance is incorporated into the project plan through measures of effectiveness derived from the statement of work, but also in making the connection to measures of progress and measures of performance, clearly outlining the proper integration of the core elements of project planning, execution, and control.

The key artifact that ties the essential elements together (cost, schedule, and technical performance) is the Integrated Master Plan (IMP).  The National Defense Industrial Association Integrated Program Management Division’s (NDIA IPMD) Planning and Scheduling Excellence Guide (PASEG) (link broken) seems to forget this essential step–the artifact that is necessary to allow for the construction of a valid Integrated Master Schedule (IMS).  I think part of the reason for the omission is the mistaken belief that this is an unnecessary artifact–that it is a “nice to have” if the program sponsor remembers to put it in the contract deliverables.  This makes its construction negotiable and vulnerable as a discretionary cost item, which it clearly is not–or, at least, should not be.  Even the Wikipedia entry is confused by the classification of the IMP, characterizing it first as primarily a DoD-specific artifact, then a contractual artifact, and–oh, by the way–a necessary step in civic and urban planning (also known as construction project management).  The PASEG does mention summary schedules, and perhaps in those rare cases, based on the work being performed and the contract type, some stripped-down kind of IMP will do; but regardless of what it is called (and IMP still serves as a good shorthand), the ability to trace measures of effectiveness to measures of progress is still needed in complex project management.

Thus the IMP is this: it is the fulcrum of integrated project management.  In tying the measures of effectiveness to specific tasks related to the WBS, it is the IMP that provides the roadmap to the working, day-to-day tools that will be used to measure progress–cost, schedule, and technical achievement, assessed against plan and all informed by risk.  For those of us in the technology community, continuing to sell discrete, swim-lane-focused apps that do not support this construction is badly out of date.