Over at AITS.org — Failure is not Optional

My latest is at this link at AITS.org with the provocative title: “Failure is not Optional: Why Project Failure is OK.”  The theme and specifics of the post, however, are not that simple, and I continue with a sidebar on Grant’s conduct of the Overland Campaign entitled “How Grant Leveraged Failure in the Civil War.”  A little elaboration is in order once you read the entire post.

I think what we deal with in project management are shades of failure.  It is important to understand this because we rely too often on projections of performance that oftentimes turn out to be unrealistic within the framing assumptions of project management.  In this context our definition of success turns out to be fluid.

To provide a simplistic example of other games of failure, let’s take the game of American baseball.  A batter who hits safely more than 30% of the time is deemed to be skilled in the art of hitting a baseball.  A success.  Yet, when looked at from a total perspective, what this says is that 70% failure is acceptable.  A pitcher who gives up between 2 and 4 earned runs a game is considered to be skilled in the art of pitching.  Yet, this provides a range of acceptable failure under the goal of giving up zero runs.  Furthermore, if your team wins 9-4 you’re considered to be a winning pitcher.  If you lose 1-0 you are a losing pitcher, and there are numerous examples of talented pitchers, considered skilled in their craft, who had losing records because of a lack of run production by their teams.  Should the perception of success and failure be adjusted based on whether one pitched for the 1927 or 1936 or 1998 Yankees, or the 1963 Dodgers, or 1969 Mets?  The latter two examples were teams built on just enough offense to provide the winning advantage, with the majority of pressure placed on the pitching staff.  Would Tom Seaver be classified as less extraordinary in his skill if he had averaged giving up half a run more?  Probably.

Thus, when we look at the universe of project management and see that the overwhelming majority of IT projects fail, or that the average R&D contract realizes a 20% overrun in cost and a significant slip in schedule, what are we measuring?  We are measuring risk in the context of games of failure.  We handle risk to absorb just enough failure and noise in our systems to push the envelope on development without sacrificing the entire project effort.  To know the difference between transient and existential failure, between learning and wasted effort, and between intermediate progress and strategic position requires a skillset that is essential to the ultimate achievement of the goal, whether it be deployment of a new state-of-the-art aircraft or a game-changing software platform.  The noise must pass what I have called the “so-what?” test.

I have listed in the article a set of skills necessary to understanding these differences that you may find useful.  I have also provided some ammunition for puncturing the cult of “being green.”

River Deep, Mountain High — A Matrix of Project Data

Been attending conferences and meetings of late and came upon a discussion of the means of reducing data streams while leveraging Moore’s Law to provide more, better data.  During a discussion with colleagues over lunch, the question arose whether requesting more detailed data would provide greater insight.  This led to a discussion of the qualitative differences in data depending on what information is being sought.  My initial response was: “well there has to be a pony in there somewhere.”  This was greeted by laughter, but then I finished the point: more detailed data doesn’t necessarily yield greater insight (though it could, and only actually looking at it will tell you that, particularly in applying the principle of KDD).  But more detailed data that is based on a hierarchical structure will, at the least, provide greater reliability and pinpoint areas of intersection to detect areas of risk manifestation that are otherwise averaged out–and therefore hidden–at the summary levels.
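The averaging effect is easy to demonstrate with a toy example (the element names and numbers here are hypothetical): two detail-level elements with offsetting variances produce a summary roll-up that looks perfectly healthy, while the detail reveals the risk.

```python
# Hypothetical example: two work packages whose variances exactly offset.
work_packages = {
    "WP-101": {"budgeted": 100.0, "earned": 130.0},  # well ahead of plan
    "WP-102": {"budgeted": 100.0, "earned": 70.0},   # badly behind plan
}

# Summary-level view: roll up first, then compute the variance.
total_budgeted = sum(wp["budgeted"] for wp in work_packages.values())
total_earned = sum(wp["earned"] for wp in work_packages.values())
summary_variance = total_earned - total_budgeted  # 0.0 -- looks healthy

# Detail-level view: compute the variance per element, then inspect.
detail_variances = {
    name: wp["earned"] - wp["budgeted"]
    for name, wp in work_packages.items()
}
# detail_variances reveals a +30 / -30 split that the summary hides
```

The same arithmetic scales up a hierarchy: each level of summarization is another opportunity for offsetting signals to cancel, which is why detail tied to a hierarchical structure matters.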

Not to steal the thunder of new studies that are due out in the area of data later this spring but, for example, I am aware, having actually achieved lowest-level integration for extremely complex projects through my day job, that there is little (though not zero) insight gained in predictive power between, say, the control account level of a WBS and the work package level.  Going further down to the element of cost may, in the words of a character in the movie Still Alice, fall “into the great academic tradition of knowing more and more about less and less until we know everything about nothing.”  But while that may be true for project management, that isn’t necessarily so when collecting parametrics and auditing the validity of financial information.

Rolling up data from individually detailed elements of a hierarchy is the proper way to ensure credibility.  Since we are at the point where a TB of data has virtually the same marginal cost as a GB of data (which is vanishingly small to begin with), the more the merrier in eliminating the abuse associated with human-readable summary reporting.  Furthermore, I have long proposed, through this blog and elsewhere, that the emphasis should shift away from people, process, and tools to people, process, and data.  This rightly establishes the feedback loop necessary for proper development and project management.  More importantly, the same data available through project management processes satisfy the different purposes of domains both within the organization and of multiple external stakeholders.

This then leads us to the concept of integrated project management (IPM), which has become little more than a buzz-phrase, and receives a lot of hand waves, mostly by technology companies that want to push their tools–which are quickly becoming obsolete–while appearing forward leaning.  This tool-centric approach is nothing more than marketing–focusing on what the software manufacturer would have us believe is important based on the functionality baked into their applications.  One can see where this could be a successful approach, given the emphasis on tools in the PM triad.  But, of course, it is self-limiting in a self-interested sort of way.  The emphasis needs to be on the qualitative and informative attributes of available data–not of tool functionality–that meet the requirements of different data consumers while minimizing, to the extent possible, the number of data streams.

Thus, there are at least two main aspects of data that are important in understanding its utility for project management: early warning/predictiveness and credibility/traceability/fidelity.  The chart attached below gives a rough back-of-the-envelope outline of this point, with some proposed elements, though this list is not intended to be exhaustive.

PM Data Matrix


In order to capture data across the essential elements of project management, our data must demonstrate both a breadth and depth that allows for the discovery of intersections of the different elements.  The weakness in the two-dimensional model above is that it treats each indicator by itself.  But, when we combine, for example, IMS consecutive slips with other elements listed, the informational power of the data becomes many times greater.  This tells us that the weakness in our present systems is that we treat the data as a continuity between autonomous elements.  But we know that the project consists of discontinuities where the next level of achievement/progress is a function of risk.  Thus, when we talk about IPM, the secret is in focusing on data that informs us what our systems are doing.  This will require more sophisticated types of modeling.
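The idea of intersecting indicators can be sketched in a few lines.  The function below is purely illustrative (the indicator names and thresholds are my own assumptions, not a published standard): each signal alone is survivable, but the count of tripped signals captures the intersection that a one-dimensional view misses.

```python
# Illustrative sketch: combining independent project indicators.
# Thresholds are hypothetical placeholders, not industry guidance.
def combined_risk_score(consecutive_ims_slips: int,
                        cpi: float,
                        baseline_churn_pct: float) -> int:
    """Count how many independent warning indicators have tripped."""
    tripped = 0
    if consecutive_ims_slips >= 3:    # schedule keeps slipping
        tripped += 1
    if cpi < 0.95:                    # cost efficiency eroding
        tripped += 1
    if baseline_churn_pct > 10.0:     # baseline being re-planned often
        tripped += 1
    return tripped

# All three indicators intersecting is a far stronger signal than any one alone.
score = combined_risk_score(consecutive_ims_slips=4, cpi=0.92,
                            baseline_churn_pct=12.5)
```

A real implementation would weight and correlate indicators rather than merely count them, but even this crude intersection is more informative than inspecting each indicator autonomously.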

Repeat after me — Excel is not a project management solution

Aside from dealing with organizations that oftentimes must use Excel as a workaround due to limitations of legacy software systems, I was reminded of the ubiquity of Excel by a recent article from my colleague Dave Gordon at AITS on the use and misuse of RAID (Risks, Assumptions, Issues, and Decisions).  His overall assessment of the weakness of how RAID can be applied is quite valid.  But the literature on risk is quite extensive.  The article “Risk Management Is How Adults Manage Projects” at Glen Alleman’s Herding Cats blog is just one quick overview of a very mature process that has a large amount of academic, statistical, mathematical, and methodological grounding.

The dangers that Dave identifies are not inherent in RAID so much as in the use of Excel itself.  It is not that Excel is a bad tool.  It is very useful for one-off spreadsheet problems.  But it is not a software solution and is not meant to be one.  Going to Microsoft’s site on the progression from Excel to Access to SQL Server clarifies the differences.

Note that in my title I didn’t use the word tool.  This word has been the source of great confusion.  To the layman it seems to be a common-sense descriptive.  When you physically work on something–a home repair, an automobile, a small bit of machinery–you have a toolbox and you have a set of tools to address the problem.  So the analogy is that projects (and other processes) should be approached the same way.  Oftentimes this is done in the shorthand of marketing.  It’s easier to explain something complex through a simple analogy.

But no descriptive has been more harmful or destructive in preventing the project management community from fully conceiving of the needed solutions than the word tool when applied to software applications.  A project is a complex system–a complex adaptive system.  What is called for are software applications that fit the manner in which project systems actually operate.

Back in the 1980s and 1990s there was a great deal of justification for taking the spreadsheet software that came with the operating environment or the Office Suite and adapting it to administrative needs.  There were a great number of individual tasks and processes that needed to be automated, and the market had not yet responded to that demand.  In some cases the demand was only vaguely understood, and the solutions were not fully conceptualized.

Then, as software applications became sophisticated enough to begin to replace manual processes, they oftentimes could not address all of the corollary operations that needed to be performed in those systems.  Oftentimes the solutions addressed 80% of these requirements, and so one-off workarounds were applied until the software was developed to be more comprehensive.  The very process of automating previously manual tasks had an effect on organizational processes, driving them toward greater sophistication.  The phenomenon of Knowledge Discovery in Databases (KDD) is but one of these effects.

But note that when relying on Excel for important processes, such as risk handling, KDD is impossible.  An accountant or head of finance in a corporation would not use Excel to keep the company’s books, even though that was the original target audience for spreadsheet software back in the 1980s.  No doubt financial personnel use Excel to supplement their work, but using it as the primary solution to constitute the system of record would be foolhardy.  Even very small businesses today use more sophisticated financial management solutions.

The reason why one must be extra careful with any process when using Excel as the solution is that the person is still, to a large extent, a part of the computer.  The person must perform operations that a spreadsheet application cannot perform.  This is why, in risk management, assumptions, issues, decisions, and handling are tied to the work.  The origin of all work, decomposed from the requirements, is the plan and then the detailed schedule.  The detailed schedule, consisting of activities, is then further decomposed into the work organization: tasks, resources, etc.  There may or may not be a WBS or OBS that ties this work to performance measured in terms of earned value.
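What “tied to the work” means structurally can be sketched with a couple of records.  This is a minimal, hypothetical data model (the field names and IDs are my own illustration): each risk item carries the identifier of the schedule activity it belongs to, so it can be rolled up and cross-referenced rather than living as a free-floating spreadsheet row.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of risk items as structured records tied to
# schedule activities, rather than untraceable rows in a spreadsheet.
@dataclass
class RiskItem:
    risk_id: str
    description: str
    activity_id: str           # ties the risk to a schedule activity
    handling: str = "watch"    # e.g. avoid / mitigate / transfer / watch

@dataclass
class Activity:
    activity_id: str
    name: str
    risks: list = field(default_factory=list)

act = Activity("A-1010", "Integrate subsystem build 3")
act.risks.append(
    RiskItem("R-042", "Late vendor delivery of sensor driver",
             "A-1010", "mitigate")
)

# Because each risk references the activity it affects, the risk register
# can be queried alongside schedule and cost data instead of in a silo.
open_risks = [r.risk_id for r in act.risks]
```

The point is not the particular schema but the traceability: any element of the register can be followed back through the activity to the plan and the requirements.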

Using Excel as an external tool in addressing this important and essential process separates it from the other elements of project management systems.  It thus creates a single point of failure in the process–the mind of the individual keeping the Excel spreadsheet.  It also containerizes the information, preventing it from being mainstreamed into the larger organization and thus from serving as a source for KDD.

Over at AITS.org — Black Swans: Conquering IT Project Failure & Acquisition Management

It’s been out for a few days but I failed to mention the latest article at AITS.org.

In my last post on the Blogging Alliance I discussed information theory, the physics behind software development, the economics of new technology, and the intrinsic obsolescence that exists as a result. Dave Gordon in his regular blog described this work as laying “the groundwork for a generalized theory of managing software development and acquisition.” Dave has a habit of inspiring further thought, and his observation has helped me focus on where my inquiries are headed…

To read more please click here.

Margin Call — Schedule Margin and Schedule Risk

A discussion at the LinkedIn site for the NDIA IPMD regarding schedule margin has raised some good insight and recommendations for this aspect of project planning and execution.  Current guidance from the U.S. Department of Defense for those engaged in the level of intense project management that characterizes the industry has been somewhat vague and open to interpretation.  Some of this, I think, is due to the competing proprietary lexicon from software manufacturers that have been dominant in the industry.

But mostly the change in defining this term is due to positive developments.  That is, the change is due to the convergence garnered from long experience among the various PM disciplines that allows us to more clearly define and distinguish between schedule margin, schedule buffer, schedule contingency, and schedule reserve.  It is also due to the ability of more powerful software generations to actually apply the concept in real planning without it being a thumb-in-the-air-type exercise.

Concerning this topic, Yancy Qualls of Bell Helicopter gave an excellent presentation at the January NDIA IPMD meeting in Tucson.  His proposal makes a great deal of sense and, I think, is a good first step toward true integration and a more elegant conceptual solution.  In his proposal, Mr. Qualls clearly defines the scheduling terminology by drawing analogies to similar concepts on the cost side.  This construction certainly overcomes a lot of misconceptions about the purpose and meaning of these terms.  But, I think, his analogies also imply something more significant and it is this:  that there is a common linkage between establishing management reserve and schedule reserve, and there are cost/schedule equivalencies that also apply to margin, buffer, and contingency.

After all, resources must be time-phased and these are dollarized.  But usually the relationship stops there, distinguished by the characteristic being measured: measures of value or measures of timing.  That is, the value of the work accomplished against the Performance Management Baseline (PMB) is different from the various measures of progress recorded against the Integrated Master Schedule (IMS).  This is why we look at cost and schedule variances on the value of work performed from a cost perspective, and at physical accomplishment against time.  These are fundamental concepts.

To date, the most significant proposal advanced to reconcile the two different measures was put forth by Walt Lipke of Oklahoma City Air Logistics Center in the method known as earned schedule.  But the method hasn’t been entirely embraced.  Studies have shown that it has its own limitations, but that it is a good supplement to those measures currently in use, not a substitute for them.

Thus, we are still left with the need of making a strong, logical, and cohesive connection between cost and schedule in our planning.  The baseline plans constructed for both the IMS and PMB do not stand apart or, at least, should not.  They are instead the end result of a continuum in the construction of our project systems.  As such, there should be a tie between cost and schedule that allows us to determine the proper amount of margin, buffer, and contingency in a manner that is consistent across both sub-system artifacts.

This is where risk comes in: the correct assessment of risk at the appropriate level of measurement, given that our measures of performance are computed against different denominators.  For schedule margin, in Mr. Qualls’ presentation, that assessment is the Schedule Risk Analysis (SRA).  But this then leads us to look at how that would be done.

Fortuitously, during this same meeting, Andrew Uhlig of Raytheon Missile Systems gave an interesting presentation on historical SRA results, building models from such results, and using them to inform current projects.  What I was most impressed with in this presentation was that his research finds that the actual results from schedule performance do not conform to any of the usual distribution curves found in the standard models.  Instead of normal, triangular, or PERT distributions, what he found is a spike, in which a large percentage of the completions fell exactly on the planned duration.  Thus, the distribution was skewed around the spike, with the late durations–the right tail–much longer than the left.

What is essential about the work of Mr. Uhlig is that, rather than using small samples with their biases, he uses empirical data to inform his analysis.  Reliance on small, biased samples is a pervasive problem in project management.  Mr. Qualls makes this same point in his own presentation, citing the Jordan-era Chicago Bulls: each subsequent win–combined with probabilities showing that the team could win all 82 games–did not mean that they would actually perform the feat.  In actuality (and in reality) the probability of this occurring was quite small.  Glen Alleman at his Herding Cats blog covers this same issue, emphasizing the need for empirical data.

The results of the Uhlig presentation are interesting, not only because they throw into question the results using the three common distributions used in schedule risk analysis under simulated Monte Carlo, but also because they may suggest, in my opinion, an observation or reporting bias.  Discrete distribution methods, as Mr. Uhlig proposes, will properly model the distribution for such cases using our parametric analysis.  But they will not reflect the quality of the data collected.
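A discrete-distribution Monte Carlo run is straightforward to sketch.  The weights below are hypothetical, loosely shaped like the spike-with-long-right-tail pattern described above; they are not Mr. Uhlig's actual data, and the ten-task serial path is my own simplification of a real network.

```python
import random

# Hypothetical empirical (discrete) duration distribution for a nominal
# 20-day activity: a spike at the planned duration, short left tail,
# long right tail. Weights are illustrative only.
durations = [18, 19, 20, 21, 22, 25, 30]
weights = [0.03, 0.08, 0.29, 0.20, 0.18, 0.14, 0.08]

random.seed(42)

def simulate_path(n_tasks: int) -> int:
    """Total duration of n serial tasks sampled from the empirical spike."""
    return sum(random.choices(durations, weights=weights)[0]
               for _ in range(n_tasks))

trials = sorted(simulate_path(10) for _ in range(10_000))
p80 = trials[int(0.8 * len(trials))]  # 80th-percentile completion
# The planned total is 200 days; the right-skewed tail pushes the
# 80th percentile well beyond it.
```

Sampling directly from the observed frequencies preserves the spike and the skew that a fitted triangular or PERT curve would smooth away, which is exactly the distinction at issue.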

Short duration activities are designed to overcome subjectivity through their structure.  The shorter the duration, and the more discrete the work being measured, the less likely the occurrence of “gaming” the system.  But if we find, as Mr. Uhlig does, that 29% of 20-day activities report exactly 20 days, then there is a need to test the validity of the spike itself.  It is not that it is necessarily wrong.  Perhaps the structure of the short duration combined with the discrete nature of the linkage to work has done its job.  One would expect a short tail to the left and a long tail to the right of the spike.  But there is also a possibility that variation around the target duration is being judged as “close enough” to warrant a report of completion at day 20.

So does this pass the “So What?” test?  Yes, if only because we know that the combined inertia of all of the work performed at any one time on the project will eventually be realized in the form of a larger amount of risk in proportion to the remaining work.  If the reported results are pushing risk to the right because the reported performance is optimistic against the actual performance, then we will get false positives.  If the reported performance is pessimistic against the actual performance–a less likely scenario in my opinion–then we will get false negatives.

But regardless of these further inquiries that I think need to be made regarding the linkage between cost and schedule, and the validity of results from SRAs, we now have two positive steps in the right direction in clarifying areas that in the past have perplexed project managers.  Properly identifying schedule reserve, margin, buffer, and contingency, combined with properly conducting SRAs using discrete distributions based on actual historical results will go quite far in allowing us to introduce better predictive measures in project management.

Out of Winter Woodshedding — Thinking about Project Risk and passing the “So What?” test

“Woodshedding” is a slang term in music, particularly in relation to jazz, in which the musician practices on an instrument, usually outside of public performance, with the purpose of exploring new musical insights without critical judgment.  This can be done with or without the participation of other musicians.  For example, much attention recently has been given to Bob Dylan’s Basement Tapes release.  It is unusual to bother recording such music, given the purpose of improvisation and exploration, and so few additional examples of “basement tapes” exist from other notable artists.

So for me the holiday is a sort of opportunity to do some woodshedding.  The next step is to vet such thoughts in informal media, such as this blog, which allows for the informal dialogue and exchange of information that the high standards of white papers and professional papers do not, since the thoughts are not yet fully formed and defensible.  My latest mental romps have been inspired by the movie about Alan Turing–The Imitation Game–and the British series The Bletchley Circle.  Thinking about one of the fathers of modern computing reminded me that the first use of the term “computer” referred to people.

As a matter of fact, though the terminology now refers to the digital devices that have insinuated themselves into every part of our lives, people continue to act as computers.  Despite fantastical fears surrounding AI taking our jobs and taking over the world, we are far from the singularity.  Our digital devices can only be programmed to go so far.  The so-called heuristics in computing today are still hard-wired functions, similar to replicating the methods used by a good con artist in “reading” the audience or the mark.  With the new technology for dealing with big data we have the ability to apply many of the methods originated by the people in the real-life Bletchley Park of the Second World War.  Still, even with refinements and advances in the math, these methods provide great external information regarding the patterns and probable actions of the objects of the data, but very little insight into the internal cause-and-effect that creates the data, which still requires human intervention, computation, empathy, and insight.

Thus, my latest woodshedding has involved thinking about project risk.  The reason for this is the emphasis recently on the use of simulated Monte Carlo analysis in project management, usually focused on the time-phased schedule.  Cost is also sometimes included in this discussion as a function of resources assigned to the time-phased plan, though the fatal error in this approach is to fail to understand that technical achievement and financial value analysis are separate functions that require a bit more computation.

It is useful to understand the original purpose of simulated Monte Carlo analysis.  The method itself traces back to Stanislaw Ulam, John von Neumann, and Nicholas Metropolis at Los Alamos.  Nobel physicist Murray Gell-Mann, while working at RAND Corporation (Research and No Development), applied it with a team of other physicists (Jess Marcum and Keith Breuckner) to determine the probability of a number coming up from a set of seemingly random numbers.  For a full rendering of the theory and its proof Gell-Mann provides a good overview in his book The Quark and the Jaguar.  The insight derived from Monte Carlo computation has been to show that systems in the universe often organize themselves into patterns.  Instead of some event being probable by chance, we find that, given all of the events that have occurred to date, there is some determinism which will yield regularities that can be tracked and predicted.  Thus, the use of simulated Monte Carlo analysis in our nether world of project management, which inhabits that void between microeconomics and business economics, provides us with some transient predictive probabilities, given the information stream at that particular time, of the risks that have manifested and are influencing the project.
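The basic Monte Carlo move is worth seeing in its smallest form: estimate a probability by repeated random sampling rather than by enumerating the outcome space analytically.  The example below is a deliberately trivial sketch of that move, nothing more.

```python
import random

# Minimal Monte Carlo sketch: estimate the probability that the sum of
# two dice exceeds 9 by sampling, instead of enumerating the 36 outcomes.
# The analytic answer is 6/36, about 0.1667.
random.seed(0)
N = 100_000
hits = sum(
    1 for _ in range(N)
    if random.randint(1, 6) + random.randint(1, 6) > 9
)
estimate = hits / N  # converges on 1/6 as N grows
```

The same mechanism, pointed at a network of activity durations instead of dice, is what a schedule risk simulation does: the regularities in the sampled outcomes stand in for the analytic distribution we cannot write down.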

What the use of Monte Carlo and other such methods in identifying regularities does not do is determine cause-and-effect.  We attempt to bridge this deficiency with qualitative risk, in which we articulate risk factors to handle that are then tied to cost and schedule artifacts.  This is good as far as it goes.  But it seems that we have some of this backward.  Oftentimes, despite the application of these systems to project management, we still fail to overcome the risks inherent in the project, which then require a redefinition of project goals.  We often attribute these failures to personnel systems, and there is no shortage of consultants all too willing to sell the latest secret answer to project success.  Yet, despite years of such consulting methods applied to many of the same organizations, there is still a fairly consistent rate of failure in properly identifying cause-and-effect.

Cause-and-effect is the purpose of all of our metrics.  Only by properly “computing” cause-and-effect will we pass the “So What?” test.  Our first forays into this area involve modeling.  Given enough data we can model our systems and, when the real-time results of our experiments play out to approximate what actually happens, we know that our models are true.  Both economists and physicists (well, the best ones) use the modeling method.  This allows us to get the answer even without entirely understanding the internal workings that lead to the final result.  As in Douglas Adams’ answer to the secret of life, the universe, and everything, where the answer is “42,” we can at least work backwards.  And oftentimes this is what we are left with, which explains the high rate of failure over time.

While I was pondering this reality I came across this article in Quanta magazine outlining the new important work of the MIT physicist Jeremy England entitled “A New Physics Theory of Life.”  From the perspective of evolutionary biology, this pretty much shows that not only does the Second Law of Thermodynamics support the existence and evolution of life (which we’ve known as far back as Schrodinger), but probably makes life inevitable under a host of conditions.  In relation to project management and risk, it was this passage that struck me most forcefully:

“Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.”
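The ratio described in the quoted passage is the Crooks fluctuation theorem.  Written compactly (my own rendering, not taken from the article), entropy production fixes the asymmetry between the forward and reverse probabilities:

```latex
% Crooks fluctuation theorem (sketch): the entropy produced by a
% process sets the ratio of forward to reverse probabilities.
\frac{P_{\text{forward}}}{P_{\text{reverse}}} = e^{\Delta S / k_B}
\qquad\Longleftrightarrow\qquad
\Delta S = k_B \,\ln \frac{P_{\text{forward}}}{P_{\text{reverse}}}
```

As the entropy production grows, the ratio grows exponentially and the process becomes, in the article's words, more and more "irreversible."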

No project is a closed system (just as the earth is not on a larger level).  The level of entropy in the system will vary by the external inputs that will change it: effort, resources, and technical expertise.  As I have written previously (and somewhat controversially), there is both chaos and determinism in our systems.  An individual or a system of individuals can adapt to the conditions in which they are placed, but only to a certain level.  The probability is non-zero that an individual or system of individuals can largely overcome the risks realized to date, but it is vanishingly small.  The chance that a peasant will become a president is of the same order.  The idea that it is possible, even if vanishingly so, keeps the class of peasants in line so that those born with privilege can continue to reassuringly pretend that their success is more than mathematics.

When we measure risk what we are measuring is the amount of entropy in the system that we need to handle, or overcome.  We do this by borrowing energy in the form of resources of some kind from other, external systems.  The conditions in which we operate may be ideal or less than ideal.

What England’s work, combined with his predecessors’, seems to suggest is that the Second Law makes life all but inevitable except where it is impossible.  For astrophysics this makes the entire Rare Earth hypothesis a non sequitur.  That is, wherever life can develop it will develop.  The life that does develop is fit for its environment and continues to evolve as changes to the environment occur.  Thus, new forms of organization and structure are found in otherwise chaotic systems as a natural outgrowth of entropy.

Similarly, when we look at more cohesive and less complex systems, such as projects, what we find are systems that adapt and are fit for the environments in which they are conceived.  This insight is not new and has been observed for organizations using more mundane tools, such as Deming’s red bead experiment.  Scientifically, however, we now have insight into the means of determining what the limitations of success are, given the risk and entropy that have already been realized, against the resources needed to bring the project within acceptable ranges of success.  This information goes beyond simply stating the problem and leaving the computing to the person, and thus passes the “So What?” test.

More on Excel…the contributing factor of poor Project Management apps

Some early comments via e-mails on my post on why Excel is not a PM tool raised the issue that I was being way too hard on IT shops and letting application providers off the hook.  The asymmetry was certainly not the intention (at least not consciously).

When approaching an organization seeking process and technology improvement, oftentimes the condition of using Excel is what we in the technology/PM industry conveniently call “workarounds.”  Ostensibly these workarounds are temporary measures to address a strategic or intrinsic organizational need that will eventually be addressed by a more cohesive software solution.  In all too many cases, however, the workaround turns out to be semi-permanent.

A case in point in basic project management concerns Work Authorization Documents (WADs) and Baseline Change Requests (BCRs).  Throughout entire industries that use the most advanced scheduling applications, resource management applications, and–where necessary–earned value “engines,” the modus operandi for addressing WADs and BCRs is either to use Excel or to write a custom app in FoxPro or Access.  This is fine as a “workaround” as long as you remember to set up the systems and procedures necessary to keep the logs updated, and then have in place a procedure to update the systems of record appropriately.  Needless to say, errors do creep in, and in very dynamic environments it is difficult to ensure that these systems are in alignment, and so a labor-intensive feedback system must also be introduced.

This is the type of issue that software technology was designed to solve.  Instead, software has fenced off the “hard” operations so that digitized manual solutions, oftentimes hidden from the team’s view by the physical constraint of the computer (PC, laptop, etc.), are used.  This is barely a step above what we did before digitization: post the project plan, milestone achievements, and performance on a VIDS/MAF board surrounding the PM control office, which ensured that every member of the team could see the role and progress of the project.  Under that system no one hoarded information; it militated against single points of failure and ensured that disconnects were immediately addressed, since visibility ensured accountability.

In many ways we have lost the ability to recreate the PM control office in digitized form.  Part of the reason resides in the 20th-century organization of development and production into divisions of labor.  In project management, the disciplines organized themselves around particular functions: estimating and planning, schedule management, cost management, risk management, resource management, logistics, systems engineering, operational requirements, and financial management, among others.  Software was developed to address each of these areas, with clear lines of demarcation drawn that approximated the points of separation among the disciplines.  What the software manufacturers forgot (or never knew) is that the PMO is the organizing entity, and it is an interdisciplinary team.

To return to our example of WADs and BCRs: a survey of the leading planning and scheduling applications shows that while their marketing literature addresses baselines and baseline changes (and not all of them address even this basic function), they still do not understand complex project management.  There is a difference between the resources assigned to a time-phased network schedule and the resources planned against technical achievement related to the work breakdown structure (WBS).  Given proper integration they should align; in most cases they do not.  This is why most scheduling application manufacturers who claim to measure earned value do not.  Their models assume that expended resources align with the plan to date, rather than using volume-based measurement of what has physically been accomplished.  Further, even understanding this concept does not by itself produce a digitized solution, since an understanding of the other specific elements of program control is also necessary.

For example, projects are initiated either through internal work authorizations in response to a market need, or based on the requirements of a contract.  Depending on the mix of competencies required to perform the work, financial elements such as labor rates, overhead, G&A, and allowable margin (depending on contract type) will apply–what is euphemistically called “complex rates.”  An organization may need to manage multiple rate sets based on the types of efforts undertaken, with a many-to-many relationship between rate sets and projects/subprojects.

Once again, establishing the proper relationships at the appropriate level is necessary.  This will then affect the timing of WAD initiation, and will have a direct bearing on the BCR approval process, given that the latter is heavily influenced by “what-if?” analysis against resource, labor, and financial availability and accountability (a complicated process in itself).  Thus the schedule network is neither the only element affected nor the overarching one, given the assessed impact on cost, technical achievement, and qualitative external risk.

These are but two examples of sub-optimization due to deficiencies in project management applications.  The response–and in my opinion a lazy one (or one based on the fact that software companies oftentimes know nothing of their customers’ operations)–has been to develop the alternative euphemism for “workaround”: “best of breed.”  Oftentimes this is simply a means of collecting revenue for a function that is missing from the core application.  It is the software equivalent of the division of labor: each piece of software performs functions relating to specific disciplines, and where there are gaps these are filled by niche solutions or Excel.  What this approach does not do is meet the requirements of the PM control office, since it perpetuates application “swim lanes,” with the multidisciplinary requirements of project management relegated to manual interfaces and application data reconciliation.  It also pushes–and therefore magnifies–risk at the senior level of the project management team, effectively defeating the organizational fail-safes designed to reduce risk through, among other methods, delegation of responsibility to technical teams, and project planning and execution constructed around short-duration, work-focused activities.  It also reduces productivity, degrades information credibility, and unnecessarily increases cost–the exact opposite of the rationale for investing in software technology.

It is time for this practice to end.  Technologies exist today to remove application “swim lanes” and address the multidisciplinary needs of successful project management.  Excel isn’t the answer; cross-application data access, proper data integration, and the processing of data into user-directed intelligence–properly aggregated and distributed based on role and optimum need to know–is.