The Monster Mash — Zombie Ideas in Project and Information Management

Just completed a number of meetings and discussions among thought leaders in the area of complex project management this week, and I was struck by a number of zombie ideas in project management, especially related to information, that just won’t die.  The use of the term zombie idea is usually attributed to the Nobel economist Paul Krugman from his excellent and highly engaging (as well as brutally honest) posts at the New York Times, but for those not familiar, a zombie idea is “a proposition that has been thoroughly refuted by analysis and evidence, and should be dead — but won’t stay dead because it serves a political purpose, appeals to prejudices, or both.”

The point, to a techie–or anyone committed to intellectual honesty–is that these ideas are often posed in the form of question begging; that is, they advance invalid assumptions in the asking or the telling.  Most often they take the form of the assertive half of the coin minted by “when did you stop beating your wife?”-type questions.  I’ve compiled a few of these for this post, and it is important to understand the purpose for doing so.  It is not to take individuals to task or to bash non-techies–who have a valid reason to ask basic questions based on what they’ve heard–but to address propositions put forth by people who should know better, given their technical expertise or experience.  Furthermore, knowing and understanding technology and its economics is essential today for anyone operating in the project management domain.

So here are a few zombies that seem to be most common:

a.  More data equals greater expense.  I dealt with this issue in more depth in a previous post, but it’s worth repeating here:  “When we inform Moore’s Law by Landauer’s Principle, that is, that the energy expended in each additional bit of computation becomes vanishingly small, it becomes clear that the difference in cost in transferring a MB of data as opposed to a KB of data is virtually TSTM (“too small to measure”).”  The real reason why we continue to deal with this assertion is both political in nature and also based in social human interaction.  People hate oversight and they hate to be micromanaged, especially to the point of disrupting the work at hand.  We see behavior, especially in regulatory and contractual relationships, where the reporting entity plays the game of “hiding the button.”  This behavior is usually justified by pointing to examples of dysfunction, particularly on the part of the checker, where information submissions lead to the abuse of discretion in oversight and management.  Needless to say, while such abuse does occur, no one has yet pointed quantitatively to data (as opposed to anecdotes) showing how often this happens.
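To put a rough number on “too small to measure,” here is a back-of-the-envelope sketch of the Landauer lower bound on the energy required to process a kilobyte versus a megabyte.  The figures are theoretical limits, not measurements of any particular system, and the room-temperature assumption is purely for illustration.

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K
ROOM_TEMP_K = 300.0       # assumed ambient temperature

def landauer_energy_joules(n_bits: int, temp_k: float = ROOM_TEMP_K) -> float:
    """Theoretical minimum energy to erase n_bits of information (Landauer's principle)."""
    return n_bits * BOLTZMANN * temp_k * math.log(2)

kb_bits = 1_000 * 8      # one kilobyte
mb_bits = 1_000_000 * 8  # one megabyte

print(f"1 KB lower bound: {landauer_energy_joules(kb_bits):.2e} J")  # ~2.3e-17 J
print(f"1 MB lower bound: {landauer_energy_joules(mb_bits):.2e} J")  # ~2.3e-14 J
# Both figures are vanishingly small next to the cost of the people and
# processes surrounding the data, which is the point of the quotation above.
```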

I would hazard to guess that virtually anyone with some experience has had to work for a bad boss, where every detail and nuance is microscopically interrogated to the point where it becomes hard to make progress on the task at hand.  Such individuals, who have been advanced under the Peter Principle, must, no doubt, be removed from such positions.  But this can happen in any organization, whether private enterprise–especially where there is no oversight, no checks and balances, no means of appeal, and no accountability–or government, and it is irrelevant to the assertion.  The expense item being described is bad management, not excess data.  Thus, such assertions are based on the antecedent assumption of bad management, which goes hand-in-hand with…

b. More information is the enemy of efficiency.  This is the other half of the economic argument that more data equals greater expense.  I should add that where the conflict has been engaged over these issues, some unjustifiable figure is usually offered for the cost of the additional data–a figure certainly not supported by the high-tech economics cited above.  Another aspect of both of these perspectives comes from the conception among non-techies that more data and information are equivalent to pre-digital levels of effort, especially in conceptualizing the work that often went into human-readable reports.  This is really an argument for shifting the focus in software away from fixed report formatting based on limited data and toward complete data, which can be formatted and processed as necessary.  If the right and sufficient information is provided up-front, then additional questions and interrogatories that demand supplemental data and information–with the attendant multiplication of data streams and data islands that truly do add cost and drive inefficiency–are at least significantly reduced, if not eliminated.

c.  Data size adds unmanageable complexity.  This was actually put forth by another software professional–and no doubt the non-techies in the room would have nodded their heads in agreement (particularly given a and b above), if opposing expert opinion hadn’t been offered.  Without putting too fine a point on it, a techie saying this in an open forum is equivalent to whining that your job is too hard.  This will get you ridiculed at development forums, where you will be viewed as an insufferable dilettante.  Digital technology has been operating under the phenomenon of Moore’s Law for well over 40 years.  Under the original formulation, computational and storage capability doubles at least every two years, and that cadence has at times accelerated to somewhere between 12 and 24 months.  Thus, what was considered big data, say, in 1997 when NASA first coined the term, is not considered big data today.  No doubt, what is considered big data this year will not be considered big data two years from now.  Thus, the term itself is relative and may very well become archaic.  The manner in which data is managed–its rationalization and normalization–is important in successfully translating disparate data sources, but the assertion that big is scary is simply fear mongering because you don’t have the goods.
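To make the relativity of “big” concrete, here is a quick sketch of the compounding implied by Moore’s Law.  The 2015 vantage point and the doubling periods are assumptions for illustration only.

```python
def capability_growth(years_elapsed: float, doubling_period_years: float = 2.0) -> float:
    """Capability multiplier after a span of Moore's-Law-style doubling."""
    return 2 ** (years_elapsed / doubling_period_years)

# Growth in compute/storage capability since "big data" was coined in 1997,
# assuming a vantage point of roughly 2015.
print(f"{capability_growth(2015 - 1997):.0f}x")        # ~512x at a two-year doubling
print(f"{capability_growth(2015 - 1997, 1.5):.0f}x")   # ~4096x at an 18-month doubling
```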

d.  Big data requires more expensive and sophisticated approaches.  This flows from item c above as well and is often self-serving.  Scare stories abound, often using big numbers that sound scary.  All data that has a common use across domains has to be rationalized at some point if it comes from disparate sources, and there are a number of efficient software techniques for accomplishing this.  Furthermore, support for agnostic APIs and common industry standards, such as the UN/CEFACT XML, takes much of the rationalization and normalization work out of a manual process.  Yet I have consistently seen suboptimized methods being put forth that require an army of data scientists and coders to engage in brute-force data mining–a methodology that has been around for almost 30 years: except that now it carries with it the moniker of big data.  Needless to say, this approach is probably the most expensive and slowest out there.  But then, the motivation for its use by IT shops is usually based in rice bowl and resource politics.  This is flimflam–an attempt to revive an old zombie under a new name.  When faced with such assertions, see Moore’s Law and keep on looking for the right answer.  It’s out there.
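As a small illustration of how little manual effort the normalization step requires once a common layout is agreed upon, here is a minimal sketch that flattens a source report into common records.  The element names and figures are hypothetical; they are not the actual UN/CEFACT XML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical source snippet; the element names are illustrative only.
SOURCE = """
<costReport>
  <controlAccount id="CA-100"><bcws>1000</bcws><bcwp>950</bcwp><acwp>900</acwp></controlAccount>
  <controlAccount id="CA-200"><bcws>500</bcws><bcwp>500</bcwp><acwp>525</acwp></controlAccount>
</costReport>
"""

def normalize(xml_text):
    """Flatten a source report into a common record layout for downstream processing."""
    root = ET.fromstring(xml_text)
    return [
        {
            "account": ca.get("id"),
            "bcws": float(ca.findtext("bcws")),
            "bcwp": float(ca.findtext("bcwp")),
            "acwp": float(ca.findtext("acwp")),
        }
        for ca in root.findall("controlAccount")
    ]

for row in normalize(SOURCE):
    print(row)
```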

e.  Performance management and assessment is an unnecessary “regulatory” expense.  This one keeps coming up as part of a broader political agenda beyond just project management.  I’ve discussed in detail the issues of materiality and prescriptiveness in regulatory regimes here and here, and have addressed the obvious legitimacy of establishing such regimes in fiduciary, contractual, and governmental environments.

My usual response to the assertion of expense is to simply point to the unregulated derivatives market largely responsible for the financial collapse, and the resulting deep economic recession that followed once the housing bubble burst (and that is aside from the cost of human suffering and joblessness, and the expenses related to TARP).  So much for how well the deregulation of banking worked out.  Even after the Band-Aid of Dodd-Frank the situation probably requires a bit more vigor, and should include the ratings agencies as well as the real estate market.  But here is the fact of the matter: such expenses cannot be monetized as additive because “regulatory” expenses usually represent an assessment of the day-to-day documentation, systems, and procedures required when performing normal business operations and due diligence in management.  I attended an excellent presentation last week where the speaker, tasked with finding unnecessary regulatory expenses, admitted as much.

Thus, what we are really talking about is an expense that is an essential prerequisite to entry into a particular vertical, especially where monopsony exists as a result of government action.  Moral hazard, then, is defined by the inherent risk assumed by contract type, and should be assessed on those terms.  Given that the current trend is to raise thresholds, the question is going to be–in the government sphere–whether public opinion will be as forgiving in a situation where moral hazard assumes $100M in risk when things head south, as they do with some regularity in project management.  The way to reduce that moral hazard is through sufficiency of submitted data.  Thus, we return to my points in a and b above.

f.  Effective project assessment can be performed using high level data.  It appears that this view has its origins in both self-interest and a type of anti-intellectualism/anti-empiricism.

In the former case, the bias is usually based on the limitations of either individuals or the selected technology in providing sufficient information.  In the latter case, the argument results in a tautology that reinforces the fallacy that absence of evidence proves evidence of absence.  Here is how I have heard the justification for this assertion: identifying emerging trends in a project does not require that either trending or lower level data be assessed.  The projects in question are very high dollar value, complex projects.

Yes, I have represented this view correctly.  Aside from questions of competency, I think the fallacy here is self-evident.  Study after study (sadly not all online, but performed within OSD at PARCA and IDA over the last three years) has demonstrated that high level data averages out and masks indicators of risk manifestation, which could have been detected by looking at data at the appropriate level, which is the intersection of work and assigned resources.  In plain language, this requires integration of the cost and schedule systems, with risk first being noted through consecutive schedule performance slips.  When combined with technical performance measures, and effective identification of qualitative and quantitative risk tied to schedule activities, the early warning is two to three months (and sometimes more) before the risk is reflected in the cost measurement systems.  You’re not going to do this with an Excel spreadsheet.  But, for reference, see my post Excel is not a Project Management Solution.
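As a minimal sketch of what “consecutive schedule performance slips” can look like as a screening rule, consider the following.  The SPI history, threshold, and run length are assumptions for illustration, not a prescribed standard.

```python
def consecutive_slips(spi_series, threshold=1.0, run=3):
    """Flag an element whose schedule performance index (SPI) has stayed
    below the threshold for `run` consecutive reporting periods."""
    streak = 0
    for spi in spi_series:
        streak = streak + 1 if spi < threshold else 0
        if streak >= run:
            return True
    return False

# Hypothetical monthly SPI history for one control account.
history = [1.02, 0.99, 0.97, 0.96, 0.94]
print(consecutive_slips(history))  # True: an early warning before cost variance reflects it
```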

It’s time to kill the zombies with facts–and to behead them once and for all.

Ace of Base(line) — A New Paper on Building a Credible PMB

Glen Alleman, a leading consultant in program management (who also has a blog that I follow), Tom Coonce of the Institute for Defense Analyses, and Rick Price of Lockheed Martin have jointly published a new paper in the College of Performance Management’s Measurable News entitled “Building A Credible Performance Measurement Baseline.”

The elements of their proposal for constructing a credible PMB, from my initial reading, are as follows:

1.  Rather than a statement of requirements, decision-makers should first conduct a capabilities gap analysis to determine the units of effectiveness and performance.  This ensures that program management decision-makers have a good idea of what “done” looks like, and ensures that performance measurements aren’t disconnected from these essential elements of programmatic success.

2.  Following from item 1 above, the technical plan and the programmatic plan should always be in sync.

3.  Earned value management is but one of many methods for assessing programmatic performance in its present state.  At least that is how I interpret what they are saying, because later in their paper they propose a way to ensure that EVM does not stray from the elements that define technical achievement.  But EVM in itself is not the be-all and end-all of performance management–and it fails in many ways to anticipate where the technical and programmatic plans diverge.

4.  All work in achieving the elements of effectiveness and performance is first constructed and given structure in the WBS.  Thus, the WBS ties together all elements of the project plan.  In addition, technical and programmatic risk must be assessed at this stage, rather than further down the line after the IMS has been constructed.

5.  The Integrated Master Plan (IMP) is constructed to incorporate the high level work plans that are manifested through major programmatic events and milestones.  It is through the IMP that EVM is then connected to technical performance measures that affect the assessment of work package completion that will be reflected in the detailed Integrated Master Schedule (IMS).  This establishes not only the importance of the IMP in ensuring the linkage of technical and programmatic plans, but also makes the IMP an essential artifact that has all too often been seen as optional–which probably explains why so many project managers are “surprised” when they construct aircraft that can’t land on the deck of a carrier or satellites that can’t communicate in orbit, though they are well within the tolerance bands of cost and schedule variances.

6.  Construct the IMS taking into account the technical, qualitative, and quantitative risks associated with the events and milestones identified in the IMP.  Construct risk mitigation/handling where possible and set aside both cost and schedule margins for irreducible uncertainties, and management reserve (MR) for reducible risks, keeping in mind that margin is within the PMB while MR is above the PMB but within the CBB.  Furthermore, schedule margin should be transitioned from a deterministic basis to a probabilistic one–constructing sufficient margin to protect essential activities (a minimal sketch of sizing probabilistic margin follows this list).  Cost margin in work packages should also be constructed in the same manner–based on probabilistic models that determine the chances of making a risk reducible until reaching the point of irreducibility.  Once again, all of these elements tie back to the WBS.

7.  Cost and schedule margin are not the same as slack or float.  Margin is reserve.  Slack or float is equivalent to overruns and underruns.  The issue here in practice is going to be to get the oversight agencies to leave margin alone.  All too often this is viewed as “free” money to be harvested.

8.  Cost, schedule, and technical performance measurement, tied together at the elemental level of work–informing each other as a cohesive set of indicators that are interrelated–and tied back to the WBS, is the only valid method of ensuring accurate project performance measurement and the basis for programmatic success.
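Referring back to item 6, here is a minimal Monte Carlo sketch of sizing schedule margin probabilistically rather than deterministically.  The activity durations, the triangular spread, and the 80% confidence target are assumptions for illustration and are not drawn from the paper.

```python
import random

def probabilistic_margin(durations, spread=0.25, confidence=0.80, trials=10_000):
    """Size schedule margin as the gap between the deterministic path length
    and the duration achieved at the chosen confidence level."""
    deterministic = sum(durations)
    outcomes = sorted(
        sum(random.triangular(d * (1 - spread), d * (1 + 2 * spread), d) for d in durations)
        for _ in range(trials)
    )
    return outcomes[int(confidence * trials)] - deterministic

# Hypothetical activity durations (days) along the driving path to a key IMP event.
tasks = [20, 35, 15, 40]
print(f"Schedule margin at the 80% level: {probabilistic_margin(tasks):.1f} days")
```

The point of the sketch is simply that the margin protects the event against irreducible duration uncertainty at a stated confidence level, rather than serving as a fixed pad.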

Most interestingly, in conclusion the authors present, as a simplified case, a historical example of how their method proves itself out as both a common-sense and completely reasonable approach, using the Wright brothers’ proof of concept for the U.S. Army in 1908.  The historical documents in that case show that the Army had constructed elements of effectiveness and performance in determining whether it would purchase an airplane from the brothers.  All measures of project success and failure would be assessed against those elements–which combined cost, schedule, and technical achievement.  I was particularly intrigued that the weight of the aircraft was part of the assessment–a common point of argument from critics of the use of technical performance–and the paper demonstrates how the Wright brothers actually assessed and mitigated the risk associated with that measure of performance over time.

My initial impression of the paper is that it is a significant step forward in bringing together the practical lessons learned from both the successes and failures of project performance.  Their recommendations are a welcome remedy for many of the deficiencies implicit in our project management systems and procedures.

I also believe that, as an integral part of the process of constructing the project artifacts, it is a superior approach to the one that I initially proposed in 1997, which assumed that TPM would always be applied as an additional process informing cost and schedule at the end of each assessment period.  I look forward to hearing the presentation at the next Integrated Program Management Conference, at which I will attempt some live blogging.

Gimme All Your (Money) — Agile and the Intrinsic Evil of #NoEstimates

Over the years I’ve served as a project, acquisition, and contracts specialist in both public service and private industry.  Most of those assignments involved the introduction of digital technology, from the earliest days of the introduction of what were called mini-computers, through the introduction of the PC, to the various digital devices, robotics, and artificial intelligence that we use today.

A joke I often encountered over the years was that if you asked a software programmer what his solution could do, the response all too often was: “what would you like it to do?”  The point of the joke, which has more than a grain of truth in it, is that programmers do not live in (or would prefer not to live in) a world of finite resources, and they often fall into the trap of excessive optimism.  That this is backed by empirical evidence has been discussed previously in this blog, where over 90% of software projects in both private industry and public organizations either fail outright or fail to meet expectations.  This pattern of failure is pervasive regardless of the method of development used: waterfall, spiral, or–the latest rage–Agile.

Agile is a break from the principles of scientific management upon which previous methodologies were based.  As such, it learns no lessons from the past, much as a narcissist rejects the contributions of others.  It is not that all of the ideas espoused in the original Agile manifesto in 2001–or those since–are necessarily invalid or may not be good ideas for modifying and improving previous practices; it is that they are based on declaration without attribution to evidence.  As such, Agile has all of the markings of a cult: an ideology of management that brooks no deviation and which is resistant to evidence.  Faced with contrary evidence, the reaction is to double down and push the envelope further.

The latest example of this penchant is Neil Killick’s post “Beyond #NoEstimates — Why the traditional software contract must die.”  It is worth a read but, in the end, the thrust of the post is that contracts enforce accountability and the followers of the Agile Cult don’t want that because, well, there is all of that planning, scheduling, budgeting, and reporting that gets in the way of delivering “value.”  The flaws in the prescriptions of the Cult, particularly its latest #NoEstimates offshoot, have been adequately and thoughtfully documented by many well-respected practitioners of the art of project management, such as Dave Gordon, among others, and I will not revisit them here.  Instead, I will focus on Mr. Killick’s article in question.

First, Mr. Killick argues that “value” cannot be derived from the plethora of “traditional” software contracts.  His ignorance of contracting is most clear here, for he doesn’t define his terms.  What is a “traditional” software contract?  As a former contract negotiator and contracting officer, I find nothing called a “traditional” software contract in the contracting lexicon.  There are firm fixed price contracts, cost plus type contracts, time and materials/labor hour contracts, etc., but no “traditional” contracts.  For developmental efforts some variation of the cost-plus contract is usually appropriate, but the contract type and structure must be such that it provides sufficient incentives for “value” that exceeds the basic requirements of the customer, given the type of effort, the risk involved, and the resource and time constraints of the effort.  The scope can be defined by specific line item specifications or a performance specification.  Thus, contrary to the impression left in the post, quite a bit of freedom is allowed within a contract, and R&D projects under various contract types have been succeeding for quite a long time.  In addition, the use of the term “traditional” seems to have a certain dog-whistle quality about it for the Cult, with its use going back to the original manifesto.  This, then, at least to those recognizing the whistle, is a loaded word that leads to an argument assuming its conclusion: that such contracts lead to poor results–a claim that is not true (assuming a firm definition of “traditional” could even be provided) and against which there is ample evidence.

Second, another of Mr. Killick’s assumptions is that “traditional” contracts (whatever those are) start from a position of distrust.  In his words: “Working agreements that embrace ‘Here’s what you must deliver or we’ll sue you’” (sic).  Once again Mr. Killick demonstrates his ignorance.  The comments and discussion at the end of his post reinforce a narrative that it’s all the lawyers’ fault.  I do have a number of friends who are attorneys, and my contempt for the frequent excesses of the legal profession is well known to them.  But there is a difference between a contract specialist and a lawyer, and it is best summed up in a concept and an anecdote.

The concept is the basic description of a contract which, at its most simple definition, is a promise for a promise.  Usually this takes the form of a promise to perform in return for a promise to pay, since the promise must be sufficient and involve consideration of value in order to establish a contract.  It is not a promise to pay based on a contingent lack of a promise to perform unless, of course, the software developer is willing to allow the contingent nature of the promise to work both ways.  That is, we’ll allow you to expend effort to try to satisfy our needs and we’ll pay you if it is of value at a price that we think your product is worth.  It is not a contract in that case but, at least, both parties know their risks.  The promise for a promise–the rise of the concept of the contract–is in many ways the basis for civilization.  Its rise coincided with and reinforced civil society, and established ground rules for the conduct of human affairs that replaced the contingent nature of relationships between individuals.  Without such ground rules, trust is not possible.

The anecdote explains the aim of a contract and why it is not a lawyer’s game.  This aim was explained to me by one of my closest friends, who is an attorney.  He said: “the difference between you, a contract negotiator, and me, an attorney, is that when I come out of the room I know I have done my job when all of the parties are unhappy.  You know you have done your job when all of the parties come out of the room happy.”  Thus, Mr. Killick gets contracting backwards.  This insightful perspective rests on the different roles of an attorney and a contract negotiator.  An attorney is trained and educated to vehemently defend the interests of his or her client.  The attorney realizes that he or she engages in a zero-sum game.  The clash of attorneys on opposing sides will usually result in an outcome where neither side feels fully satisfied.  The aim of the contract negotiator (at least of the most successful and effective ones) is to determine the goals and acceptable terms for both parties and to find the common ground so that the relationship will proceed under an atmosphere of trust and cooperation.

The most common contract in which many parties engage is the marriage contract.  Such an arrangement can be viewed as an unfortunate obligation that hinders creativity and acceptance of change, one established by lawyers to enforce the terms of the agreement–or else.  But many find that it is a basis for trust and stability, where growth and change are fostered rather than hindered.  In real life, of course, this is a false dilemma.  For most people the arrangement runs the gamut between these perspectives–and outside of them to divorce, the ultimate result of a poor or mismatched contract.

For project management in general and software project management in particular, the core arguments in Agile via #NoEstimates are an implicit evil because they undermine the essential relationships between the parties.  This is done through specialized jargon that is designed to obfuscate, the contingent nature of the obligation underlying its principles, and the lack of clear reasoning that forms the basis for its rebellion against planning, estimating, and accountability.  Rather than fostering an atmosphere of trust, it is an attempt by software developers to tip the balance in the project and contract management relationship in their favor, particularly in cases of external customer relationships.  This condition undermines trust and reinforces the most common software project dysfunctions, such as the loss of requirements discipline, shifting scope, rubber baselines, and cost overruns.  In other words, for software projects, just more of the same.

Note: Grammatical corrections were made from the original.

I need a dollar dollar, a dollar is what I need (hey hey) — Contract “harvesting”

Are there financial payoffs in our performance management metrics where money can be recouped?

That certainly seems to be the case in the opinion of some contracting officers and program managers, particularly in a time of budgetary constraints and austerity.  What we are talking about are elements of the project, particularly in aerospace & defense work identified by control accounts within a work breakdown structure (WBS), that are using fewer resources than planned.  Is this real money that can be harvested?

Most recently this question arose from the earned value community, in which positive variances were used as the basis for “harvesting” funds to be used either to de-obligate funds or to add additional work.  The reason for this question lies in traditional methods of using earned value to reallocate budget within the contract.

For example, the first and most common is in relation to completed accounts which have a positive variance.  That is, accounts where the work is completed and which have underspent their budgeted performance management baseline.  Can the resources left over from that work be reallocated and the variances for the completed accounts zeroed out (their indices set to 1.0)?  The obvious answer is, yes.  This constitutes acceptable replanning as long as the contract budget base (CBB) is not increased and there is no extension to the period of performance.  Replans are an effective means for the program team to rebaseline the time-phased performance management baseline (PMB) in order to internally allocate resources to address risk on those elements that are not performing well against the plan.  Keep in mind that this scenario is very limited.  It only applies to accounts that are completed, where actual money will not be expended for effort within the original control accounts.  Also, these resources should be accounted for within the project by first being allocated to undistributed budget (UB), since this money was authorized for specific work.  Contracting officers and the customer program manager will then direct where these undistributed funds will be allocated, whether that be to particular control accounts or to management reserve (MR).
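A minimal sketch of that limited replan, assuming completed accounts only and hypothetical figures; this illustrates the bookkeeping described above and is not a prescriptive EVMS procedure.

```python
def sweep_completed_accounts(accounts):
    """Move the unspent budget (BAC - ACWP) of completed control accounts into
    undistributed budget (UB); open accounts are untouched and the CBB is unchanged."""
    ub = 0.0
    for acct in accounts:
        if acct["complete"] and acct["bac"] > acct["acwp"]:
            ub += acct["bac"] - acct["acwp"]
            acct["bac"] = acct["acwp"]  # close the account at its actual cost
    return ub

accounts = [
    {"id": "CA-100", "bac": 1000.0, "acwp": 900.0, "complete": True},
    {"id": "CA-200", "bac": 500.0,  "acwp": 450.0, "complete": False},
]
print(sweep_completed_accounts(accounts))  # 100.0 moved to UB for customer direction
```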

In addition to replanning, there are reprogramming and single point adjustment examples–all of which are adequately covered here and here.

But the issue is not one related to EVM.  The reason I believe it is not lies in the purpose of earned value management as a project management indicator: it measures the achievement (volume) of work against a financial plan in order to derive the value of the work performed at any particular stage in the life of a project.  It does this not only as an oversight and assessment mechanism, but also to provide early warning of the manifestation of risk that threatens the successful execution of the project.  Thus, though it began life in the financial management community as a way to derive the value of work performed at a moment in the project plan, it is not a financial management tool.  The “money” identified in our earned value management systems is not real in the sense that it exists; it is an indicator of progress against a financial plan of work accomplishment.  To treat it as actual money is to commit the fallacy of reification–that is, to treat an abstraction as if it were the real thing.

The proper place to determine the availability of funds lies in the financial accounting system.  The method used for determining funds, particularly in government related contract work, is to first understand the terminology and concepts of funding.  Funds can be committed, obligated, or expended.  Funds are committed when they are set aside administratively to cover an anticipated liability.  Funds are obligated when there is a binding agreement in place, such as a contract or purchase order.  Funds are expended when the obligation is paid.

From a contracting perspective, commitments are generally available because they have not been obligated.  Obligated funds can be recovered under certain circumstances as determined by the rules relating to termination liability.  The portion of the effort from which funds are de-obligated is considered to be a termination for convenience.  Here, then, we find our answer to the issue of “harvesting.”

Funds can be de-obligated from a contract as long as sufficient funds remain on the contract to cover the amount of the remaining obligation plus any termination liability.  If the contracting officer and program manager wish to use “excess” funds due to the fact that the project is performing better than anticipated under the negotiated contract budget base, then they have the ability to de-obligate those funds.  That money then belongs to the source of the funding, not the contracting officer or the program manager, unless one of them is the “owner” of the funds, that is, in government parlance, the budget holder.  Tradeoffs outside of the original effort, particularly those requiring new work, must be documented with a contract modification.
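A minimal sketch of that constraint, using hypothetical dollar figures; it illustrates the arithmetic described above and is not contracting guidance.

```python
def deobligable_amount(obligated, remaining_obligation, termination_liability):
    """Funds that can be de-obligated while still covering the remaining
    obligation plus termination liability."""
    must_retain = remaining_obligation + termination_liability
    return max(0.0, obligated - must_retain)

# Hypothetical figures, in dollars.
print(deobligable_amount(obligated=10_000_000,
                         remaining_obligation=7_500_000,
                         termination_liability=1_000_000))  # 1500000.0
```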

From an earned value management and project management perspective, then, we now know what to do.  For control accounts that are completed, we maintain the history of the effort, even though the “excess” funds being de-obligated from the contract are reflected in positive variances.  For control accounts modified by the de-obligation in mid-effort, we rebaseline that work.  New work is added to the project plan separately, with performance tracked according to the guidance found in the project’s systems description.

One note of caution:  I have seen where contracting officers in Department of Defense work rely on the Contract Funds Status Reports (CFSR) to determine the availability of funds for de-obligation.  The CFSR is a projection of funding obligations and expenditures within the project and does not constitute a contractual obligation on the burn rate of funding.  Actual obligations and expenditures will vary by all of the usual circumstances that affect project and contract performance.  Thus, contracting officers who rely on this document risk both disrupting project management execution and running into an Anti-Deficiency Act violation.

In summary, apart from the external circumstances of a tight budgetary environment that has placed extra emphasis on identifying resources, good financial management housekeeping dictates that accountable personnel, as a matter of course, be diligent in identifying and recouping uncommitted, unobligated, and unexpended funds.  This is the “carry-over” often referred to by public administration professionals.  That earned value is used as an early indicator in identifying these categories of funds is a proactive practice.  But contracting officers and program managers must understand that it is only that–an indicator.

These professionals must also understand the nature of the work and the manner of planning.  I have seen cases, particularly in software development efforts, where risk did not manifest in certain accounts until the last 5% to 10% of work was scheduled to be performed.  No doubt this was due to front-loaded planning, pushing risk to the right, and some other defects in project structure, processes, and planning.  Regardless, these conditions exist and it behooves contracting and project professionals to be aware that work that appears to be performing ahead of cost and schedule plans may reflect a transient condition.

Note:  The content of this article was greatly influenced by the good work of Michael Pelkey in the Office of the Secretary of Defense, though I take full responsibility for the opinions expressed herein, which are my own.

Close to the (Leading) Edge

Much has been made in project management circles of the fact that most of the indicators we have to work with are lagging indicators–indicators that record what happened but don’t inform the future.  But is that true in most cases?  If not, which indicators are leading, and what new leading indicators do we need to be looking at in constructing a project management toolkit–those at the leading edge?

In order to determine this we also need to weed out those measures that are neither lagging nor leading.  These are diagnostic measures that simply indicate the health of a system.  For example, inventory accuracy in the latest warehouse samples, the number of orders successfully fulfilled on the initial contact, etc.  These oftentimes are most effectively used in tracking iterative, industrial-like activities.

Even leading indicators differ.  A leading indicator for a project is not like a leading indicator for larger systems.  Even complex projects possess limited data points for assessing performance.  For some of these projects, where the span of control does not exceed the capabilities of a leader, this usually does not matter.  In those cases, very simple indicators are pretty good at determining trends as long as the appropriate measures are used.  It is also at this level that large changes can occur very rapidly, since minor adjustments to the project, underlying technology, or personnel dynamics will determine the difference between success and failure.  Thus, our models of social systems at this level often assume rational human behavior or, to use the colloquialism, “all things being equal” in order to predict trends.  This is also the level at which our projections are valid only for shorter spans, since overwhelming external or internal events or systems can present risks that the simple system will not be able to fully overcome, when “all things are not equal.”

More complex systems, particularly in project management, are able to employ risk handling methodologies, either overtly or unconsciously.  Needless to say, the former usually garners more reliable and positive results.  In this environment the complaint about EVM is that it is “merely looking in the rear view mirror,” with the implication that it is not of great value to project or program managers.  But is this entirely true?  I suspect that some of this is excuse peddling born of what I refer to as the Cult of Optimism.

There is no doubt that there is a great deal of emphasis on what has occurred in the past.  Calculations of work performed, resources expended, and time passed–all of these record what has happened.  Many of these are lagging indicators, but they also contain essential information for informing the future.  The key to understanding what is a leading indicator is understanding the dynamics of causation.  Some lagging indicators overlap.  Some are better at measuring a particular dimension of the project than others.  It is important to know the difference so that the basis for leading indicators can be selected.

For example, regression is a popular method for determining EAC/ETC predictions.  The problem is that these results are often based on very weak correlations and causality.  It is a linear method used to measure outcomes in a non-linear system.  This then leads us to introduce other methods to strengthen the credibility of the estimate through engineering and expert opinion.  When possible, parametrics are introduced.  But does this get us there?  I would posit that it doesn’t, because it is simply a way of throwing everything at hand at the issue in hopes that something hits the target.  Instead, our methods must be more precise and our selection methods proven.  Starting with something like the Granger causality test–comparing two different time series and determining whether one is predictive of the other–would be useful in weeding out the wheat from the chaff.
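A minimal sketch of that screening step, using the Granger causality test from the statsmodels library on synthetic data; the series, the lag structure, and the choice of a technical performance measure as the candidate leading indicator are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
candidate = rng.normal(size=60)                                  # e.g., a monthly TPM trend
target = np.roll(candidate, 2) + rng.normal(scale=0.3, size=60)  # e.g., cost variance, lagging two periods

# Convention: the test asks whether the second column helps predict the first.
data = np.column_stack([target, candidate])
grangercausalitytests(data, maxlag=4)
# Small p-values at a given lag suggest the candidate indicator is genuinely
# predictive of the target series, a first cut at separating wheat from chaff.
```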

For example, years ago I headed a project to identify and integrate technical performance into project management performance measures.  Part of the study we conducted included a retrospective analysis to determine correlations and causality between key technical performance measures over the life of a project and its performance over time.  What we found is that, carefully chosen, TPMs are a strong indicator of future performance when informed by probabilities over the time series.

So is it true that we are “merely” looking in the rear view mirror?  No.  We only have the past to inform the future, thus the comment is irrelevant.  Besides “the past is never dead.  It’s not even past…”  The aggregation of our efforts to achieve the goals of the project will determine the likely outcomes in the future.  So the challenge isn’t looking in the rear view mirror–after all in real life we do this all the time to see what’s gaining on us–but in picking those elements from past performance that, given correlation and causality, will inform the probabilities of future outcomes.

Note:  Grammatical changes made to the post.

I’ve Really Got to Use My (Determinism) to think of good reasons to keep on keeping on

Where do we go from here?

Made some comments on the use of risk in estimates to complete (ETC)/estimates at complete (EAC) in a discussion at LinkedIn’s Earned Value Management Group.  This got me thinking about the current debate on free will between Daniel Dennett and Sam Harris.  Jonathan MS Pearce at A Tippling Philosopher, who wrote a book about Free Will, has some interesting perspectives on the differences between them.  Basically the debate is over the amount of free will that any individual actually possesses.

The popular and somewhat naïve conception of free will assumes that the individual (or organizations, or societies, etc.) is an unhindered free agent and can act on his or her will as necessary.  We know this intuitively to be untrue but it still infects our value judgments and many legal, moral, ethical, and societal reactions to the concepts of causality, responsibility, and accountability.  We know from science, empiricism, and our day-to-day experience that the universe acts in a somewhat deterministic manner.  The question is: how deterministic is it?

In my youth I was a fan of science fiction in general and Isaac Asimov’s books in particular.  One concept that intrigued me from his Foundation series was psychohistory, developed by the character Hari Seldon.  Through the use of psychohistory Seldon could determine when a society was about to go into cultural fugue and the best time to begin a new society in order to save civilization.  This line of thought actually had a basis in many post-World War II hypotheses to explain the mass psychosis that seemed to grip Nazi Germany and Imperial Japan.  The movie The White Ribbon explored such a proposition, seeming to posit that the foundation for the madness that was to follow had its roots much earlier in German society’s loss of compassion, empathy, and sympathy.  Perhaps the cataclysm that was to occur was largely inevitable given the conditions, which seemed too small and insignificant by themselves.

So in determining what will happen and where we will go we must first determine where we are.  Depending on what is being measured there are many qualitative and quantitative ways to determine our position relative to society, where we want to be, or any other relative measurement.  As I said in a post in looking at the predictive measurements of the 2012 election as project management, especially in the predictive methodology employed by Nate Silver, “we are all dealt a deck of cards by the universe regardless of what we undertake, whether an accident of birth, our socioeconomic position, family circumstance, or our role in a market, business or project enterprise.  The limitations on our actions—our free will—are dictated by the bounds provided by the deal.  How we play those cards within the free will provided will influence the outcome.  Sometimes the cards are stacked against us and sometimes for us.  I believe that in most cases the universe provides more than a little leeway that provides for cause and effect.  Each action during the play provides additional deterministic and probabilistic variables.  The implications for those who craft policy or make decisions in understanding these concepts are obvious.”

So how does this relate to project management, since many of these examples–even the imaginary one–deal with larger systems that have far more data and much less uncertainty?  Well, we do have sufficient data when we lengthen the timeframe and actually collect the data.

Dr. David Christensen, in looking at DoD programs, determined that cumulative CPI at the 20% completion mark did not change significantly by completion.  This observation was later refined by looking at project performance after the disastrous Navy A-12 contract had had its remedial effects on project management.  This conclusion provided both a confirmation of the validity of CPI and the EVM methods that undergird it, and a measure of how much influence actions have in determining the ultimate success or failure of the project after the foundation has been laid.  Subsequent studies have strengthened the deterministic observation of project performance made by Dr. Christensen.
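A minimal sketch of how that stability finding is used in practice, with hypothetical figures: once a program is roughly 20% complete, the cumulative-CPI-based estimate at completion is often treated as a floor on final cost.

```python
def eac_from_cpi(bac, cpi_cumulative):
    """Basic CPI-based estimate at completion: EAC = BAC / CPI."""
    return bac / cpi_cumulative

# Hypothetical program: $400M budget at completion, cumulative CPI of 0.91
# observed at roughly the 20% completion point.
print(f"EAC floor: ${eac_from_cpi(400.0, 0.91):.1f}M")  # ~$439.6M
```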

The models that I have used incorporate technical performance measures with EVM, cost, schedule, and risk in determining future project performance.  But the basis for the determination of future project performance is a measurement of the present condition at a point in time, usually tracked along a technical baseline. Thus, our assessment of future performance is based on where our present position is fixed and the identification of the range of probabilities that are most likely to result.  The probabilities keep us grounded in reality.  The results address both contingency and determinism in day-to-day analysis.  This argues for a broader set of measurements so that the window of influence in determining outcomes is maximized.

So are we masters of our own destiny?  Not entirely and not in the manner that the phrase suggests.  Our options are limited to our present position and circumstances.  Our outcomes are limited by probability.

To be (or not to be) Integrated Project Management

At the last meeting of the NDIA Project Management Systems Committee the group voted overwhelmingly to rename itself the Integrated Program Management Division.  This is a significant shift in the thinking of industry and government when it comes to project management, though it may not have been fully comprehended by all attendees at the time.  From my discussions with colleagues, many believe that earned value management (EVM) is the basis for integrated project management, but I will have to respectfully disagree.  EVM is an assessment technique that may contribute to integration, but that is orders of magnitude away from stating that EVM is the basis for integration.  So what is the basis?

The basis for all project planning is, well, the PLAN.  That is, in formal parlance, the integrated master plan or–using the standard acronym–the IMP.  Yet this document garners only a passing reference in the scheduling guidance for project management, and as a voluntary artifact at that, even though it forms the basis for the integrated master schedule (IMS).  And yes, I know that I have not touched on the estimates that inform the plan, whether they are bottom-up or top-down, whether the schedule is resource-loaded, where rates are applied, how the time-phasing of cost is measured, and how technical performance–another topic that has recently gained new life–is taken into account.  But here is where the discussion begins.

Notice that I have yet to mention EVM.  It, of course, is part of project performance assessment built off of all of these antecedent systems.  It may very well turn out that EVM is the appropriate intersection of IPM in terms of performance and assessment, but there are also other leading edge indicators based on schedule, technical performance, and the time-phased plan that are yet to be taken into account.

My own opinion is that true integrated project management follows the course of estimate to IMP to IMS to resources to cost management, including risk-adjusted time-phasing, and project performance measures that include contributions from EVM, schedule, risk, and technical performance.  We may find that there are also contributions from the other five business systems identified by DoD for assessment by DCAA and DCMA.  To date the approach has been to apply the business systems rule as a means of withholding funds from a contract where the systems are deemed inadequate, but a more proactive approach that includes self-assessment and measurement of project impact would militate against withholds and bring about the improvement in business processes that DoD desires.