Rear View Mirror — Correcting a Project Management Fallacy

“The past is never dead. It’s not even past.” —  William Faulkner, Requiem for a Nun

Over the years I and others have briefed project managers on project performance using key performance parameters (KPPs), earned value management, schedule analysis, business analytics, and what we now call predictive analytics. Oftentimes some set of figures is critiqued as ineffective or unhelpful: the analytics “only look in the rear view mirror” and “tell me what I already know.”

In approaching this critique, it is useful to understand Faulkner’s oft-cited quote above.  When we walk down a street, let us say a busy city street in any community of good size, we are walking in the past.  The moment we experience something it is in the past.  If we note the present condition of our city street we will see that every building, park, sidewalk, and individual that we pass has a history.  These structures and these people are as much driven by their pasts as by their expectations for the future.

Now let us take a snapshot of our street.  In doing so we can determine population density, ethnic demographics, property values, crime rate, and numerous other indices and parameters regarding what is there.  No doubt, if we stop here we are just “looking in the rear view mirror” and noting what we may or may not already know, however certain our anecdotal filter may make us feel.

Now, let us say that we have an affinity for this street and may want to live there.  We will take the present indices and parameters noted above, which describe our geographical environment, and trend them.  We may find that housing prices are rising or falling, that crime is rising or falling, etc.  If we delve into the street’s ownership history we may find that one individual or family possesses more than one structure, or that there is a great deal of diversity.  We may find that a Superfund site is not too far away.  We may find that economic demographics point to stagnation of the local economy, or that the neighborhood is becoming gentrified.  Simply time-phasing and delving into history–mapping out the trends and noting the significant historical background–provides us with enough information to determine whether our affinity is grounded in reality or practicality.

But let us say that, despite negatives, we feel that this is the next up-and-coming neighborhood.  We would need signs to make that determination.  For example, what kinds of businesses have moved into the neighborhood and what is their number?  What demographic do they target?  There are many other questions that can be asked to see if our economic analysis is valid–and that analysis would need to be informed by risk.

The fact of the matter is that we are always living with the past: the cumulative effect of the past actions of numerous individuals, including our own, and of organizations, groups of individuals, and institutions; not to mention larger economic forces well beyond our control.  Any desired change in the trajectory of the system being evaluated must identify those elements that can be impacted or influenced; an analysis of the effort that must be expended to bring about the change is also essential.

This is a scientific fact, proven countless times by physics, biology, and other disciplines.  A deterministic universe, which provides for some uncertainty at any given point at our level of existence, drives the possible within very small limits of possibility and even smaller limits of probability.  What this means in plain language is that the future is usually a function of the past.

Any one number or index, no doubt, does not necessarily tell us something important.  But it could if it is relevant, material, and prompts further inquiry essential to project performance.

For example, let us look at an integrated master schedule that underlies a typical medium-sized project.


We will select a couple of metrics that indicate project schedule performance.  In the case below we are looking at task hits and misses and the Baseline Execution Index (BEI), a popular index that measures efficiency in meeting baseline schedule planning.

Note that the chart above plots the performance over time.  What will it take to improve our efficiency?  So as a quick logic check on realism, let’s take a look at the work to date with all of the late starts and finishes.

Our bow waves track the cumulative effort to date.  As we work to clear missed starts or missed finishes in a project we also must devote resources to the accomplishment of current work that is still in line with the baseline.  What this means is that additional resources may need to be devoted to particular areas of work accomplishment or risk handling.
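To make these measures concrete, here is a minimal sketch of how BEI, the hit/miss rate, and the open “bow wave” of baselined-but-unfinished work might be computed from a schedule extract.  The task list, field names, dates, and status date are all hypothetical and are not drawn from any real IMS export, which would carry far more detail.

```python
from datetime import date

# Hypothetical schedule extract: baseline and actual finish dates per task.
tasks = [
    {"id": "A", "baseline_finish": date(2017, 1, 13), "actual_finish": date(2017, 1, 12)},
    {"id": "B", "baseline_finish": date(2017, 1, 20), "actual_finish": date(2017, 1, 27)},
    {"id": "C", "baseline_finish": date(2017, 2, 3),  "actual_finish": None},  # not yet finished
    {"id": "D", "baseline_finish": date(2017, 2, 17), "actual_finish": None},
]

status_date = date(2017, 2, 10)

# Tasks the baseline says should be finished by the status date.
baselined = [t for t in tasks if t["baseline_finish"] <= status_date]

# Tasks actually finished by the status date.
finished = [t for t in tasks if t["actual_finish"] and t["actual_finish"] <= status_date]

# Baseline Execution Index: tasks completed versus tasks baselined to complete.
bei = len(finished) / len(baselined) if baselined else None

# Hits: finished tasks that met their baseline finish date.
hits = [t for t in finished if t["actual_finish"] <= t["baseline_finish"]]
hit_rate = len(hits) / len(finished) if finished else None

# "Bow wave": baselined tasks still open as of the status date.
bow_wave = [t["id"] for t in baselined if t not in finished]

print(f"BEI = {bei:.2f}, hit rate = {hit_rate:.2f}, open baselined tasks = {bow_wave}")
```

On this toy data the BEI comes out at 0.67 and one baselined task remains open; trending those same figures period over period is what produces the charts described above.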

This is not, of course, the limit of the analysis that should be undertaken.  The point here is that at every point in history, in every system, we stand on the cumulative efforts, risk, failure, success, and actions of everyone who came before us.  At the microeconomic level this is also true within our project management systems.  There are also external constraints and influences that will define the framing assumptions and the range of possibilities and probabilities involved in project outcomes.

The sheer magnitude of the bow waves that we face in all endeavors will often be too great to fully overcome.  As an analogy, a bow wave in complex systems is more akin to a tsunami than to the tidal waves that crash along our shores.  All of the force of all of the collective actions that have preceded the present moment will drive our trajectory.

This is known as inertia.

Identifying and understanding the contributors to the inertia that is driving our performance is important to knowing what to do.  Thus, looking in the rear view mirror is essential, and the complaint is not a valid argument for ignoring an inconvenient metric that may only require additional context.  Furthermore, knowing where we sit is important, not insignificant.  Knowing the factors that put us where we are–and the effort that it will take to influence our destiny–will guide what is possible and not possible in our future actions.

Note:  All charted data is notional and is not from an actual project.

Over at — The Human Equation in Project Management

Approaches to project management have focused on the systems, procedures, and software put in place to determine progress and likely outcomes. These outcomes are usually expressed in terms of cost, schedule, and technical achievement against the project requirements and framing assumptions—the oft-cited three-legged stool of project management.  What is often missing are measures related to human behavior within the project systems environment.  In this article I explore this oft-ignored dimension.

Post-Blogging NDIA Blues — The Latest News (Project Management Wonkish)

The National Defense Industrial Association’s Integrated Program Management Division (NDIA IPMD) just had its quarterly meeting here in sunny Orlando where we braved the depths of sub-60 degrees F temperatures to start out each day.

For those not in the know, these meetings are an essential coming together of policy makers, subject matter experts, and private industry practitioners regarding the practical and mundane state-of-the-practice in complex project management, particularly focused on the concerns of the federal government and the Department of Defense.  The end result of these meetings is to publish white papers and recommendations regarding practice to support continuous process improvement and the practical application of project management practices–allowing for a cross-pollination of commercial and government lessons learned.  This is also the intersection where innovations from large and small players alike are given an equal vetting and an opportunity to introduce new concepts and solutions.  This is an idealized description, of course, and most of the petty personality conflicts, competition, and self-interest that plague any group of individuals coming together under a common set of interests also play out here.  But the days are long and the workshops generally produce good products that become the de facto standard of practice in the industry.  Furthermore, the control that keeps the more ruthless personalities in check is the fact that, while it is a large market, the complex project management community tends to be a relatively small one, which reinforces professionalism.

The “blues” in this case is not so much borne of frustration or disappointment but, instead, of the long and intense days that the sessions offer.  The biggest news from an IT project management and application perspective was twofold.  The first item was that the data stream used by the industry in sharing data in an open systems manner will be simplified.  The other was the announcement that the technology used to communicate will move from XML to JSON.

Human-readable formatting to data-focused formatting.  Under Kendall’s Better Buying Power 3.0 the goal of the Department of Defense (DoD) has been to incorporate better practices from private industry where they can be applied.  I don’t see initiatives for greater efficiency and reduction of duplication going away in the new Administration, regardless of what a new initiative is called.

In case this is news to you, the federal government buys a lot of materials and end items–billions of dollars worth.  Accountability must be put in place to ensure that the money is properly spent to acquire the things being purchased.  Where technology is pushed and where there are no commercial equivalents that can be bought off the shelf, as in the systems purchased by the Department of Defense, there are measures of progress and performance (given that the contract is under a specification) that are submitted to the oversight agency in DoD.  This is a lot of data, and to be brutally frank the method and format of delivery have been somewhat chaotic, inefficient, and duplicative.  The Department moved to address this with a somewhat modest requirement: open systems submission of an application-neutral XML file under the standards established by the UN/CEFACT XML organization.  This was called the Integrated Program Management Report (IPMR).  This move garnered some improvement where it has been applied, but contracts are long-term, so incorporating improvements through new contractual requirements tends to take time.  Plus, there is always resistance to change.  The Department is moving to accelerate addressing these inefficiencies in its data streams by eliminating the unnecessary overhead associated with specifications of formatting data for paper forms and dealing with data as, well, data.  Great idea and bravo!  The rub here is that in making the change, the Department has proposed dropping XML as the technology used to transfer data and moving to JSON.

XML to JSON. Before I spark another techie argument about the relative merits of each, there are some basics to understand here.  First, XML is a markup language, while JSON is simply a data-interchange format.  This means that XML is specifically designed to deal with hierarchical and structured data that can be queried, and where validation and fidelity checks within the data are inherent in the technology.  XML is also known to scale while maintaining the integrity of the data, and it lends itself to use with relational databases.  Furthermore, XML is hard to break.  It is meant for editing and will maintain its structure and integrity afterward.
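To make the distinction concrete, here is a small sketch showing the same record expressed both ways and parsed with Python’s standard library.  The element and field names (controlAccount, bcws, bcwp, acwp) are invented for illustration and are not drawn from the IPMR schema; full XML validation against an XSD would require a schema and a library such as lxml, which is not shown.

```python
import json
import xml.etree.ElementTree as ET

# The same hypothetical performance record in both representations.
xml_doc = """
<controlAccount id="CA-100">
  <bcws>1200.0</bcws>
  <bcwp>1100.0</bcwp>
  <acwp>1250.0</acwp>
</controlAccount>
"""

json_doc = '{"controlAccount": {"id": "CA-100", "bcws": 1200.0, "bcwp": 1100.0, "acwp": 1250.0}}'

# XML: hierarchical, element-oriented, and typically validated against a schema.
root = ET.fromstring(xml_doc)
bcwp_xml = float(root.findtext("bcwp"))

# JSON: a lightweight data-interchange format; structure and types are whatever
# the producer wrote, unless a separate JSON Schema is agreed upon and enforced.
bcwp_json = json.loads(json_doc)["controlAccount"]["bcwp"]

print(bcwp_xml == bcwp_json)  # True: same content, different guarantees
```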

The counterargument encountered is that JSON is new! and uses fewer characters! (which usually turns out to be inconsequential), and people are talking about it for Big Data and NoSQL! (though this happened after the fact, and the reason for shoehorning it this way is discussed below).

So does it matter?  Yes and no.  As a supplier specializing in delivering solutions that normalize and rationalize data across proprietary file structures and leverage database capabilities, I don’t care.  I can adapt quickly and will have a proof-of-concept solution out within 30 days of receiving the schema.

The risk here, which applies to DoD and the industry, is that the decision to go to JSON is made only because it is the shiny new thing used by gamers and social networking developers.  There has also been a move to adapt to other uses because of the history of significant security risks that had been found in Java, so much so that an entire Wikipedia page is devoted to them.  Oracle just killed off Java applets, though Java hangs on.  JSON, of course, isn’t Java, but it was designed from birth as JavaScript Object Notation (hence the acronym JSON), with the purpose of handling relatively small bits of data across web servers in a number of proprietary settings.

To address JSON deficiencies relative to XML, a number of tools have been and are being developed to replicate the fidelity and reliability found in XML.  Whether this is sufficient to be effective against a structured LANGUAGE remains to be seen.  Much of the overhead that techies complain about in XML is due to the native functionality related to the power it brings to the table.  No doubt, a bicycle is simpler than a Formula One racer–and this is an apt comparison.  Claiming “simpler” doesn’t pass the “So What?” test once the business processes involved are understood.  The technology needs to be fit to the solution.  The purpose of data transmission using APIs is not only to make data easy to produce but for it to–you know–achieve the goals of normalization and rationalization so that it can be used on the receiving end, which is where the consumer (which we usually consider to be the customer) sits.

At the end of the day the ability to scale and handle hierarchical, structured data will rely on the quality and strength of the schema and the tools that are published to enforce its fidelity and compliance.  Otherwise consuming organizations will be receiving a dozen different proprietary JSON files, and that does not address the present chaos but simply adds to it.  These issues were aired out during the meeting, and it seems that everyone is aware of the risks and that they can be addressed.  Furthermore, as the schema is socialized across solutions providers, it will be apparent early whether the technology will be able to handle the project performance data resulting from the development of a high performance aircraft or a U.S. Navy destroyer.
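As one illustration of what “tools to enforce fidelity and compliance” can look like on the JSON side, the sketch below validates a hypothetical submission against a published schema using the widely available jsonschema package.  The contract number, field names, and constraints are all invented for the example; the actual schema would be the one socialized by the Department.

```python
from jsonschema import validate, ValidationError  # pip install jsonschema

# A hypothetical, simplified schema for a single reporting period.
schema = {
    "type": "object",
    "required": ["contractId", "periodEnd", "controlAccounts"],
    "properties": {
        "contractId": {"type": "string"},
        "periodEnd": {"type": "string"},
        "controlAccounts": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["id", "bcws", "bcwp", "acwp"],
                "properties": {
                    "id": {"type": "string"},
                    "bcws": {"type": "number"},
                    "bcwp": {"type": "number"},
                    "acwp": {"type": "number"},
                },
            },
        },
    },
}

# A hypothetical submission from a supplier.
submission = {
    "contractId": "N00000-17-C-0001",
    "periodEnd": "2017-01-31",
    "controlAccounts": [{"id": "CA-100", "bcws": 1200.0, "bcwp": 1100.0, "acwp": 1250.0}],
}

try:
    validate(instance=submission, schema=schema)
    print("Submission conforms to the schema.")
except ValidationError as err:
    print(f"Rejected: {err.message}")
```

Without a schema of this kind enforced at the receiving end, each supplier’s JSON is effectively its own proprietary format, which is the chaos described above.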

Something New (Again)– Top Project Management Trends 2017

Atif Qureshi at Tasque, whose work I learned of via Dave Gordon’s blog, went out to LinkedIn’s Project Management Community to ask for the latest trends in project management.  You can find the raw responses to his inquiry at his blog here.  What is interesting is that some of these latest trends are much like the old trends which, given continuity, makes sense.  But it is instructive to summarize the ones that came up most often.  Note that while Mr. Qureshi was looking for ten trends, and taken together he definitely lists more than ten, there is a lot of overlap.  In total the major issues seem to fall into the five areas listed below.

a.  Agile, its hybrids, and its practical application.

It should not surprise anyone that the latest buzzword is Agile.  But what exactly is it in its present incarnation?  There is a great deal of rising criticism, much of it valid, that it is a way for developers and software PMs to avoid accountability.  Anyone reading Glen Alleman’s Herding Cats blog is aware of the issues regarding #NoEstimates advocates.  As a result, there are a number of hybrid implementations of Agile that have Agile purists howling and non-purists adapting as they always do.  From my observations, however, there is an Ur-Agile out there that is common to all good implementations, and I wrote about it previously on this blog back in 2015.  Given the passage of time, I think it useful to repeat it here.

The best articulation of Agile that I have read recently comes from Neil Killick, with whom I have expressed some disagreement on the #NoEstimates debate and the more cultish aspects of Agile in past posts, but who published an excellent post back in July (2015) entitled “12 questions to find out: Are you doing Agile Software Development?”

Here are Neil’s questions:

  1. Do you want to do Agile Software Development? Yes – go to 2. No – GOODBYE.
  2. Is your team regularly reflecting on how to improve? Yes – go to 3. No – regularly meet with your team to reflect on how to improve, go to 2.
  3. Can you deliver shippable software frequently, at least every 2 weeks? Yes – go to 4. No – remove impediments to delivering a shippable increment every 2 weeks, go to 3.
  4. Do you work daily with your customer? Yes – go to 5. No – start working daily with your customer, go to 4.
  5. Do you consistently satisfy your customer? Yes – go to 6. No – find out why your customer isn’t happy, fix it, go to 5.
  6. Do you feel motivated? Yes – go to 7. No – work for someone who trusts and supports you, go to 2.
  7. Do you talk with your team and stakeholders every day? Yes – go to 8. No – start talking with your team and stakeholders every day, go to 7.
  8. Do you primarily measure progress with working software? Yes – go to 9. No – start measuring progress with working software, go to 8.
  9. Can you maintain pace of development indefinitely? Yes – go to 10. No – take on fewer things in next iteration, go to 9.
  10. Are you paying continuous attention to technical excellence and good design? Yes – go to 11. No – start paying continuous attention to technical excellent and good design, go to 10.
  11. Are you keeping things simple and maximising the amount of work not done? Yes – go to 12. No – start keeping things simple and writing as little code as possible to satisfy the customer, go to 11.
  12. Is your team self-organising? Yes – YOU’RE DOING AGILE SOFTWARE DEVELOPMENT!! No – don’t assign tasks to people and let the team figure out together how best to satisfy the customer, go to 12.

Note that even in software development based on Agile you are still “provid(ing) value by independently developing IP based on customer requirements.”  Only you are doing it faster and more effectively.

With the possible exception of the “self-organizing” meme, I find that items 1 through 11 are valid ways of identifying Agile.  Note, though, that the list says nothing about establishing closed-loop analysis of progress, and nothing about estimates or the need to monitor progress, especially on complex projects.  As a matter of fact, one of the biggest impediments noted elsewhere in industry is the inability of Agile to scale.  This limitation exists in its most simplistic form because Agile is fine in the development of well-defined, limited COTS applications and smartphone applications.  It doesn’t work so well when one is pushing technology while developing software, especially for a complex project involving hundreds of stakeholders.  One other note–the unmentioned emphasis in Agile is technical performance measurement, since progress is based on satisfying customer requirements.  TPM, when placed in the context of a world of limited resources, is the best measure of all.

b.  The integration of new technology into PM and how to upload the existing PM corporate knowledge into that technology.

This is two sides of the same coin.  There is always  debate about the introduction of new technologies within an organization and this debate places in stark contrast the differences between risk aversion and risk management.

Project managers, especially in the complex project management environment of aerospace & defense, tend, in general, to be a hardy lot.  Consisting mostly of engineers, they love to push the envelope on technology development.  But there is also a stripe of engineers among them who do not apply this same approach of measured risk to their project management and business analysis systems.  When it comes to tracking progress, resource management, programmatic risk, and accountability they frequently enter risk aversion mode–believing that the fewer eyes on what they do, the more leeway they have in achieving the technical milestones.  No doubt this is true in a world of unlimited time and resources, but that is not the world in which we live.

Aside from sub-optimized self-interest, the seeds of risk aversion come from the fact that many of the disciplines developed around performance management originated in the financial management community, and many organizations still come at project management efforts from the perspective of the CFO organization.  Such a rice bowl mentality, however, works against both the project and the organization.

Much has been made of the wall of honor for those CIA officers who have given their lives for their country, which lies to the right of the Langley headquarters entrance.  What has not gotten as much publicity is the verse inscribed on the wall to the left:

“And ye shall know the truth and the truth shall make you free.”

      John VIII-XXXII

In many ways those of us in the project management community apply this creed, to the best of our ability, to our day-to-day jobs, and it lies at the basis of all of the management improvement movements, from Deming’s concept of continuous process improvement through the application of Six Sigma and other management improvement methods.  What is not part of this concept is the idea that one should apply improvement only when a customer demands it (though customers have asked politely for some time).  The more information we have about what is happening in our systems, the better armed the project manager and the project team are to apply the expertise that qualified them for their jobs to begin with.

When it comes to continual process improvement one does not need to wait to apply those technologies that will improve project management systems.  As a senior manager (and well-respected engineer) told me when I worked in the Navy: “if my program managers are doing their job virtually every element should be in the yellow, for only then do I know that they are managing risk and pushing the technology.”

But there are some practical issues that all managers must consider when managing the risks in introducing new technology and determining how to bring that technology into existing business systems without completely disrupting the organization.  This takes good project management practices that, for information systems, include good initial systems analysis, identification of those small portions of the organization ripe for initial piloting, and a plan for data normalization and rationalization so that corporate knowledge is not lost.  Adopting more open systems that militate against proprietary barriers also helps.

c.  The intersection of project management and business analysis and its effects.

As data becomes more transparent through methods of normalization and rationalization–and the focus shifts from “tools” to the knowledge that can be derived from data–the clear separation that delineated project management from business analysis in the line-and-staff organization becomes further blurred.  Even within the project management discipline, the separation in categorization of schedule analysts from cost analysts from financial analysts is becoming an impediment to fully exploiting the advantages of looking at all data that is captured and that affects project performance.

d.  The manner of handling Big Data, business intelligence, and analytics that result.

Software technologies are rapidly developing that break the barriers of self-contained applications–applications that perform one or two focused operations, or a highly restricted group of operations, providing functionality focused on a single or limited set of business processes through hard-coded high level languages.  These new technologies, as stated in the previous section, allow users to focus on access to data, making the interface between the user and the application highly adaptable and customizable.  As these technologies are deployed against larger datasets that allow for integration of data across traditional line-and-staff organizations, they will provide insight that will garner businesses competitive advantages and productivity gains over their contemporaries.  Because of these technologies, highly labor-intensive data mining and data engineering projects that were thought to be necessary to access Big Data will find themselves displaced as their cost and lack of agility are exposed.  Internal or contracted-out custom software development along these same lines will also be displaced, just as COTS has displaced the high overhead associated with these efforts in other areas.  This is due to the fact that hardware and processing developments are constantly shifting the definition of “Big Data” to larger and larger datasets, to the point where the term will soon have no practical meaning.

e.  The role of the SME given all of the above.

The result of the trends regarding technology will be to put the subject matter expert back into the driver’s seat.  Given adaptive technology and data–and a redefinition of the analyst’s role to a more expansive one–we will find that the ability to meet the needs of functionality and the user experience is almost immediate.  Thus, when it comes to business and project management systems, while these developments make real the characteristics of Agile that I outlined above, they also reveal the weakness of its applicability to more complex and technical projects.  It is technology that will reduce the risk associated with contract negotiation, processes, documentation, and planning.  Walking away from these necessary components of project management obfuscates and avoids the hard facts that oftentimes must be addressed.

One final item that Mr. Qureshi mentions in a follow-up post–and which I have seen elsewhere in similar forums–concerns operational security.  In deploying new technologies a gatekeeper must determine whether a technology will open the organization’s corporate knowledge to compromise.  Given the greater and more integrated information and knowledge garnered by new technology, it is incumbent upon good managers to ensure that these improvements do not translate into undermining the organization.

Over at — In Defense of Empiricism

What is the responsibility of those in high tech for ensuring that their products are used in an ethical manner?  That information management is a product of empiricism is self-evident.  Project and business managers who would delude themselves by relying on invalid information usually find themselves facing hard reality in a most unpleasant manner.  How do we separate the fanciful from the real when information is flattened–an issue highlighted over the last year by fake news?  These are the issues that I address in my latest post.  Please check it out.

Takin’ Care of Business — Information Economics in Project Management

Neoclassical economics abhors inefficiency, and yet inefficiencies exist.  Among the core issues that create inefficiencies is the asymmetrical nature of information.  Asymmetry is an accepted cornerstone of economics that leads to inefficiency.  We can see in our daily lives and employment the effects of one party in a transaction having more information than the other:  knowing whether the used car you are buying is a lemon, measuring risk in the purchase of an investment and, apropos to this post, identifying how our information systems allow us to manage complex projects.

Regarding this last proposition we can peel this onion down through its various levels: the asymmetry in the information between the customer and the supplier, the asymmetry in information between the board and stockholders, the asymmetry in information between management and labor, the asymmetry in information between individual SMEs and the project team, etc.–it’s elephants all the way down.

This asymmetry, which drives inefficiency, is exacerbated in markets that are dominated by monopoly, monopsony, and oligopoly power.  When informed by the work of Hart and Holmström regarding contract theory, which recently garnered the Nobel in economics, we have a basis for understanding the internal dynamics of projects in seeking efficiency and productivity.  What is interesting about contract theory is that it incorporates the concept of asymmetrical information (labeled as adverse selection), but expands this concept in human transactions at the microeconomic level to include considerations of moral hazard and the utility of signalling.

The state of asymmetry and inefficiency is exacerbated by the patchwork quilt of “tools”–software applications that are designed to address only a very restricted portion of the total contract and project management system–that are currently deployed as the state of the art.  These tend to require the insertion of a new class of SME to manage data by essentially reversing the efficiencies in automation, involving direct effort to reconcile differences in data from differing tools. This is a sub-optimized system.  It discourages optimization of information across the project, reinforces asymmetry, and is economically and practically unsustainable.

The key in all of this is ensuring that sub-optimal behavior is discouraged, and that those activities and behaviors that are supportive of more transparent sharing of information and, therefore, contribute to greater efficiency and productivity are rewarded.  It should be noted that more transparent organizations tend to be more sustainable, healthier, and with a higher degree of employee commitment.

The path forward where there is monopsony power–where there is a dominant buyer–is to impose the conditions for normative behavior that would otherwise be leveraged through practice in a more open market.  For open markets not dominated by one player as either buyer or seller, the key is to institute practices that reward behavior that reduces the effects of asymmetrical information, and to contract disincentives for such behavior into business transactions on the open market.

In the information management market as a whole, the trends that are working against asymmetry and inefficiency involve the reduction of data streams, the construction of cross-domain data repositories (or reservoirs) that satisfy multiple business stakeholders, and the introduction of systems that are more open and adaptable to the needs of the project system as a whole in lieu of a limited portion of the project team.  These solutions exist, yet their adoption is hindered by the long-term infrastructure that is put in place in complex project management.  This infrastructure is supported by incumbents that reinforce the status quo.  Because of this, the period from the time a market innovation is introduced to the time it is adopted in project-focused organizations usually spans several years.

This argues for establishing an environment that is more nimble.  This involves the adoption of a series of approaches to achieve the goals of broader information symmetry and efficiency in the project organization.  These are:

a. Institute contractual relationships, both internally and externally, that encourage project personnel to identify risk.  This would include incentives to kill efforts that have breached their framing assumptions, or to consolidate progress that the project has achieved to date–sending it as it is to production–while killing further effort that would breach framing assumptions.

b. Institute policy and incentives on the data supply end to reduce the number of data streams.  Toward this end both acquisition and contracting practices should move to discourage proprietary data dead ends by encouraging normalized and rationalized data schemas that describe the environment using a common or, at least, compatible lexicon.  This reduces the inefficiency derived from opaqueness as it relates to software and data.

c.  Institute policy and incentives on the data consumer end to leverage the economies derived from the increased computing power from Moore’s Law by scaling data to construct interrelated datasets across multiple domains that will provide a more cohesive and expansive view of project performance.  This involves the warehousing of data into a common repository or reduced set of repositories.  The goal is to satisfy multiple project stakeholders from multiple domains using as few streams as necessary and encourage KDD (Knowledge Discovery in Databases).  This reduces the inefficiency derived from data opaqueness, but also from the traditional line-and-staff organization that has tended to stovepipe expertise and information.

d.  Institute acquisition and market incentives that encourage software manufacturers to engage in positive signalling behavior that reduces the opaqueness of the solutions being offered to the marketplace.

In summary, the current state of project data is one that is characterized by “best-of-breed” patchwork quilt solutions that tend to increase direct labor, reduce and limit productivity, and drive up cost.  At the end of the day the ability of the project to handle risk and adapt to technical challenges rests on the reliability and efficiency of its information systems.  A patchwork system fails to meet the needs of the organization as a whole and, at the end of the day, is not “takin’ care of business.”

Technical Foul — It’s Time for TPI in EVM

For more than 40 years the discipline of earned value management (EVM) has gone through a number of changes in its descriptions, governance, and procedures.  During that same time its community has been resistant to improvements in its methodology or to changes that extend its value when taking into account other methods that either augment its usefulness, or that potentially provide more utility in the area of performance management.  This has been especially the case where it is suggested that EVM is just one of many methodologies that contribute to this assessment under a more holistic approach.

Instead, it has been asserted that EVM is the basis for integrated project management.  (I disagree–and solely on the evidence that if it were so, then project managers would more fully participate in its organizations and conferences.  This would then pose the problem that PMs might propose changes to EVM that, well…default to the second sentence in this post.)  As evidence it need only be mentioned that there has been resistance to such recent developments as the use of earned schedule, technical performance, and risk–most especially risk based on Bayesian analysis.

Some of this resistance is understandable.  First, it took quite a long time just to get to a consensus on the application of EVM, though its principles and methods are based on simple and well-proven statistical methods.  Second, the industries in which EVM has been accepted are sensitive to risk, and so a bureaucracy of practitioners has grown to ensure both consensus and compliance with accepted methods.  Third, the community of EVM practitioners consists mostly of cost analysts, trained in simple accounting, arithmetic, and statistical methodology.  It is thus a normal human bias to assume that the path of one’s previous success is the way to future success, though our understanding of the design space (reality) that we inhabit has been enhanced through new knowledge.  Fourth, there is a lot of data that applies to project management, and the EVM community is only now learning of the ways that this other data impacts our understanding of measuring project performance and the probability of reaching project goals in rolling out a product.  Finally, there is the less defensible reason that a lot of people and firms have built careers that depend on maintaining the status quo.

Our ability to integrate disparate datasets is accelerating on a yearly basis thanks to digital technology, and the day when all relevant factors in project and enterprise performance are integrated is inevitable.  To be frank, I am personally engaged in such projects and am assisting organizations in moving in this direction today.  Regardless, we can make an advance in the discipline of performance management by pulling down low-hanging fruit.  The most reachable one, in my opinion, is technical performance measurement.

The literature of technical performance has come quite a long way, thanks largely to the work of the Institute for Defense Analyses (IDA) and others, particularly the National Defense Industrial Association through the publication of their predictive measures guide.  This has been a topic of interest to me since its study was part of my duties back when I was still wearing a uniform.  The early results of these studies resulted in a paper that proposed a method of integrating technical performance, earned value, and risk.  A pretty comprehensive overview of the literature and guidance for technical performance can be found at this presentation by Glen Alleman and Tom Coonce given at EVM World in 2015.  It must be mentioned that Rick Price of Lockheed Martin also contributed greatly to this literature.

Keep in mind what is meant when we decide to assess technical performance within the context of R&D.  It is an assessment against expected or specified:

a.  Measures of Effectiveness (MoE)

b.  Measures of Performance (MoP), and

c.  Key Performance Parameters (KPP)

The opposition from the project management community to widespread application of this methodology took three forms.  First, it was argued, the method used to adjust the value earned (and thus CPI) seemed always to have a negative impact.  Second, there are technical performance factors that transcend the WBS, and so it is hard to properly adjust the individual control accounts based on the contribution of technical performance.  Third, some performance measures defy an assessment of value in a time-phased manner.  The most common example has been tracking the weight of an aircraft, which has contributors from virtually all components that go into it.

Let’s take these in order.  But lest one think that this perspective is an artifact from 1997: just a short while ago, in the A&D community, the EVM policy office at DoD attempted to apply a somewhat modest proposal of ensuring that technical performance was included as an element in EVM reporting.  Note that the EIA 748 standard states this clearly and has done so for quite some time.  Regardless, the same three core objections were raised in comments from the industry.  This caused me to ask some further in-depth questions, and my revised perspective follows below.

The first condition occurred, in many cases, due to optimism bias in registering earned value, which often occurs when using a single point estimate of percent complete by a limited population of experts contributing to an assessment of the element.  Fair enough, but as you can imagine, it’s not a message that a PM wants to hear or will necessarily accept or admit, regardless of the merits.  There are more than enough pathways to second-guessing and testing selection bias at other levels of reporting.  Glen Alleman, in his Herding Cats post of 12 August, provides a very good list of the systemic reasons for program failure.

Another factor is that the initial methodology did possess a skewing toward more pessimistic results.  This was not entirely apparent at the time because the statistical methods applied did not make that clear.  But, to critique that first proposal, which was the result of contributions from IDA and other systems engineering technical experts, the 10-50-90 method in assessing probability along the bandwidth of the technical performance baseline was too inflexible.  The graphic that we proposed is as follows and one can see that, while it was “good enough”, if rolled up there could be some bias that required adjustment.

TPM Graphic


Note that this range around 50% can be interpreted to be equivalent to the bandwidth found in the presentation given by Alleman and Coonce (as well as the Predictive Measures Guide), though the intent here was to perform an assessment based on a simplified means of handicapping the handicappers–or more accurately, performing a probabilistic assessment on expert opinion.  The method of performing Bayesian analysis to achieve this had not yet matured for such applications, and so we proposed a method that would provide a simple method that our practitioners could understand that still met the criteria of being a valid approach.  The reason for the difference in the graphic resides in the fact that the original assessment did not view this time-phasing as a continuous process, but rather an assessment at critical points along the technical baseline.
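For readers who want to see the arithmetic of a 10-50-90 elicitation, here is a minimal sketch.  The exact method from the 1997 work is not reproduced here; the weighting used below (0.3/0.4/0.3, often called Swanson’s rule) is one common approximation of the mean of a three-point percentile estimate, and the TPM values, baseline, and milestone are all hypothetical.

```python
# Hypothetical 10-50-90 elicitation for one TPM at a design review milestone.
# Weights follow Swanson's rule (0.3 / 0.4 / 0.3) -- an illustrative choice,
# not necessarily the method used in the original study.

def expected_from_10_50_90(p10: float, p50: float, p90: float) -> float:
    """Approximate expected value of a 10-50-90 expert elicitation."""
    return 0.3 * p10 + 0.4 * p50 + 0.3 * p90

# Expert estimates of the fraction of the technical requirement achieved at this milestone.
p10, p50, p90 = 0.70, 0.85, 0.95    # pessimistic, most likely, optimistic
planned_achievement = 0.90          # the technical performance baseline at this point

expected = expected_from_10_50_90(p10, p50, p90)
tpi = expected / planned_achievement  # technical performance index at the milestone

print(f"Expected achievement: {expected:.2f}, TPI vs. baseline: {tpi:.2f}")
```

With these notional inputs the expected achievement works out to about 0.84 against a baseline of 0.90, i.e., a TPI slightly below 1.0 at this milestone.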

From a practical perspective, however, the banding proposed by Alleman and Coonce takes into account the noise that will be experienced during the life cycle of development, and so solves the slight skewing toward pessimism.  We’ll leave aside for the moment how we determine the bands and, thus, acceptable noise as we track along our technical baseline.

The second objection is valid only so far as any alignment of work-related indicators vary from project to project.  For example, some legs of the WBS tree go down nine levels and others go down five levels, based on the complexity of the work and the organizational breakdown structure (OBS).  Thus where we peg within each leg of the tree the control account (CA) and work package (WP) level becomes relative.  Do the schedule activities have a one-to-one relationship or many-to-one relationship with the WP level in all legs?  Or is the lowest level that the alignment can be made in certain legs at the CA level?

Given that planning begins with the contract spec and (ideally) proceeds from IMP –> IMS –> WBS –> PMB in continuity, we will be able to determine the contributions of TPM to each WBS element at the appropriate level.

This then leads us to another objection, which is that not all organizations bother with developing an IMP.  That is a topic for another day, but whether such an artifact is created formally or not, one must achieve in practice the purpose of the IMP in order to get from contract spec to IMS under a sufficiently complex effort to warrant CPM scheduling and EVM.

The third objection is really a child of the second objection.  There very well may be TPMs, such as weight, with so many contributors that distributing the impact would both dilute the visibility of the TPM and present a level of arbitrariness in distribution that would render its tracking useless.  (Note that I am not saying that the impact cannot be distributed because, given modern software applications, this can easily be done in an automated fashion after configuration.  My concern is in regard to visibility on a TPM that could render the program a failure).  In these cases, as with other indicators that must be tracked, there will be high level programmatic or contract level TPMs.

So where do we go from here?  Alleman and Coonce suggest adjusting the formula for BCWP, where P is informed by technical risk.  The predictive measures guide takes a similar approach and emphasizes the systems engineering (SE) domain in getting to an assessment to determine the impact of reported EVM element performance.  The recommendation of the 1997 project that I headed, in assignments across Navy and OSD, was to inform performance based on a risk assessment of probable achievement at each discrete performance milestone.  What all of these studies have in common, and in common with standard industry practice using SE principles, is an intermediate assessment, informed by risk, of a technical performance index against a technical performance baseline.

So let’s explore this part of the equation more fully.

Given that MoEs, MoPs, and KPPs are identified for the project, different methods of determining progress apply.  There can be a very simplistic set of TPMs that, through the acquisition or fabrication of compliant materials, meet contractual requirements.  These are contract level TPMs.  Depending on contract type, achievement of these KPPs may result in either financial penalties or financial reward.  Then there are the R&D-dependent MoEs, MoPs, and KPPs that require more discrete time-phasing and ties to the physical completion of work documented through the WBS structure.  As with EVM’s measurement of the value of work, our index of physical technical achievement can be determined through various methods: current EVM methods, simulated Monte Carlo technical risk, 10-50-90 risk assessment, Bayesian analysis, etc.  All of these methods are designed to militate against selection bias and the inherent limitations of limited sample size and, hence, extreme subjectivity.  Still, expert opinion is a valid method of assessment and (in cases where it works) better than a WAG or coin flip.
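As a small illustration of the Monte Carlo option listed above, the sketch below estimates the probability of meeting a weight KPP given a current engineering estimate and its uncertainty.  The KPP, threshold, distribution, and parameter values are all hypothetical and chosen only to show the mechanics.

```python
import random

# Hypothetical KPP: empty weight of a subsystem must not exceed 1,500 kg.
THRESHOLD_KG = 1500.0
ESTIMATE_KG = 1460.0   # current engineering estimate (assumed)
STD_DEV_KG = 40.0      # uncertainty around that estimate (assumed)
TRIALS = 100_000

random.seed(1)

# Simulate the achieved weight and count how often the KPP is met.
meets = sum(
    1 for _ in range(TRIALS)
    if random.gauss(ESTIMATE_KG, STD_DEV_KG) <= THRESHOLD_KG
)

probability = meets / TRIALS
print(f"Estimated probability of meeting the weight KPP: {probability:.2%}")
```

A probability trended over successive reporting periods against the technical baseline is what feeds the TPI discussed here.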

Taken together these TPMs can be used to determine the technical achievement of the project or program over time, with a financial assessment of the future work needed to bring it in line.  These elements can be weighted, as suggested by Coonce, Alleman, and Price, through an assessment of relative risk to project success.  Some of these TPIs will apply to particular WBS elements at various levels (since their efforts are tied to specific activities and schedules via the IMS), and the most important project and program-level TPMs are reflected at that level.

What about double counting?  A comparison of the aggregate TPIs and the aggregate CPI and SPI will determine the fidelity of the WBS to technical achievement.  Furthermore, a proper baseline review will ensure that double counting doesn’t occur.  If the element can be accounted for within the reported EVM elements, then it need not be tracked separately by a TPI.  Only those TPMs that cannot be distributed or that represent such overarching risk to project success need be tracked separately, with an overall project assessment made against MR or any reprogramming budget available that can bring the project back into spec.
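A hedged sketch of that aggregate comparison follows.  The TPM names, index values, risk weights, and the simple weighted-average aggregation are illustrative choices of mine, not a prescribed formula from the guides cited above.

```python
# Hypothetical per-TPM indices weighted by relative risk to project success.
tpms = [
    {"name": "engine thrust",    "tpi": 0.97, "weight": 0.5},
    {"name": "radar range",      "tpi": 0.88, "weight": 0.3},
    {"name": "software latency", "tpi": 0.80, "weight": 0.2},
]

aggregate_tpi = sum(t["tpi"] * t["weight"] for t in tpms) / sum(t["weight"] for t in tpms)

# Reported EVM indices for the same period (hypothetical values).
cpi, spi = 1.02, 0.99

# A wide gap between technical achievement and cost/schedule efficiency is the
# signal to check the WBS mapping for double counting or missing coverage.
print(f"Aggregate TPI = {aggregate_tpi:.2f}, CPI = {cpi:.2f}, SPI = {spi:.2f}")
if aggregate_tpi < min(cpi, spi) - 0.10:
    print("Technical achievement lags reported earned value -- investigate the mapping.")
```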

My last post on project management concerned the practices at what was called Google X.  There, incentives are given to teams that identify an unacceptably high level of technical risk that will fail to pay off within the anticipated planning horizon.  If the A&D and DoD community is to become more nimble in R&D, it needs the tools to apply such long-established concepts as Cost-As-An-Independent-Variable (CAIV) and Agile methods (without falling into the bottomless pit of unsupported assertions by the cult, such as the elimination of estimating and performance tracking).

Even with EVM, the project and program management community needs a feel for where their major programmatic efforts are in terms of delivery and deployment, looking at the entire logistics and life cycle system.  The TPI can be the logic check for whether to push ahead, to finish the low-risk items remaining in R&D and move to first item delivery, or to take the lessons learned from the effort, terminate the project, and incorporate those elements into the next generation project or related components or systems.  This aligns with the concept of project alignment with framing assumptions as an early indicator of continued project investment at the corporate level.

No doubt, existing information systems, many built using 1990s technology and limited to line-and-staff functionality, do not provide the ability to do this today.  Of course, these same systems do not take into account a whole plethora of essential information regarding contract and financial management: from the tracking of CLINs/SLINs, to work authorization and change order processing, to the flow of funding from TAB to PMB/MR and from PMB to CA/UB/PP, contract incentive threshold planning, and the list can go on.  What this argues for is innovation and rewarding those technology solutions that take a more holistic approach to project management within its domain as a subset of program, contract, and corporate management–and such solutions that do so without some esoteric promise of results at some point in the future after millions of dollars of consulting, design, and coding.  The first company or organization that does this will reap the rewards of doing so.

Furthermore, visibility equals action.  Diluting essential TPMs within an overarching set of performance metrics may have the effect of hiding them and failing to properly identify, classify, and handle risk.  Including TPI as an element at the appropriate level will provide necessary visibility to get to the meat of those elements that directly impact programmatic framing assumptions.