River Deep, Mountain High — A Matrix of Project Data

I've been attending conferences and meetings of late and came upon a discussion of the means of reducing data streams while leveraging Moore's Law to provide more, better data.  Over lunch, colleagues asked whether requesting more detailed data would provide greater insight, which led to a discussion of the qualitative differences in data depending on what information is being sought.  My response was: "well, there has to be a pony in there somewhere."  This was greeted by laughter, but then I finished the point: more detailed data doesn't necessarily yield greater insight (though it could, and only actually looking at it will tell you that, particularly in applying the principle of KDD, or knowledge discovery in databases).  But more detailed data that is based on a hierarchical structure will, at the least, provide greater reliability and pinpoint areas of intersection, detecting risk manifestation that is otherwise averaged out–and therefore hidden–at the summary levels.

Not to steal the thunder of new studies on data that are due out later this spring but, for example, having actually achieved lowest-level integration for extremely complex projects through my day job, I am aware that there is little (though not zero) insight gained in predictive power between, say, the control account level of a WBS and the work package level.  Going further down to the element-of-cost level may invite the line spoken by a character in the movie Still Alice: "You may say that this falls into the great academic tradition of knowing more and more about less and less until we know everything about nothing."  But while that may be true for project management, it isn't necessarily so when collecting parametrics and auditing the validity of financial information.

Rolling up data from individually detailed elements of a hierarchy is the proper way to ensure credibility.  Since we are at the point where a TB of data has virtually the same marginal cost as a GB of data (which is vanishingly small to begin with), the more the merrier in eliminating the abuse associated with human-readable summary reporting.  Furthermore, I have long proposed, through this blog and elsewhere, that the emphasis should shift away from people, process, and tools toward people, process, and data.  This rightly establishes the feedback loop necessary for proper development and project management.  More importantly, the same data available through project management processes satisfies the different purposes of domains within the organization and of multiple external stakeholders.
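To make the roll-up point concrete, here is a minimal sketch in Python of deriving control account totals from work package detail; the element IDs and cost figures are hypothetical and purely illustrative:

```python
# Minimal sketch: roll up hypothetical work package costs to their parent
# control accounts. Element IDs and values are illustrative only.
from collections import defaultdict

# (control_account, work_package, actual_cost) -- hypothetical detail records
work_packages = [
    ("CA-1.1", "WP-1.1.1", 120_000),
    ("CA-1.1", "WP-1.1.2",  80_000),
    ("CA-1.2", "WP-1.2.1", 200_000),
]

control_account_totals = defaultdict(float)
for ca, wp, cost in work_packages:
    control_account_totals[ca] += cost  # summary is derived, not keyed separately

for ca, total in sorted(control_account_totals.items()):
    print(ca, total)
```

Because the summary values are computed from the detail rather than reported separately, any discrepancy between levels is detectable instead of being hidden in a human-readable summary.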

This then leads us to the concept of integrated project management (IPM), which has become little more than a buzz-phrase and receives a lot of hand-waving, mostly from technology companies that want to push their tools–which are quickly becoming obsolete–while appearing forward-leaning.  This tool-centric approach is nothing more than marketing–focusing on what the software manufacturer would have us believe is important based on the functionality baked into their applications.  One can see where this could be a successful approach, given the emphasis on tools in the PM triad.  But, of course, it is self-limiting in a self-interested sort of way.  The emphasis needs to be on the qualitative and informative attributes of available data–not on tool functionality–that meet the requirements of different data consumers while minimizing, to the extent possible, the number of data streams.

Thus, there are at least two main aspects of data that are important in understanding its utility to project management: early warning/predictiveness and credibility/traceability/fidelity.  The chart attached below gives a rough back-of-the-envelope outline of this point, with some proposed elements, though the list is not intended to be exhaustive.

PM Data Matrix

In order to capture data across the essential elements of project management, our data must demonstrate both a breadth and depth that allows for the discovery of intersections of the different elements.  The weakness in the two-dimensional model above is that it treats each indicator by itself.  But, when we combine, for example, IMS consecutive slips with other elements listed, the informational power of the data becomes many times greater.  This tells us that the weakness in our present systems is that we treat the data as a continuity between autonomous elements.  But we know that the project consists of discontinuities where the next level of achievement/progress is a function of risk.  Thus, when we talk about IPM, the secret is in focusing on data that informs us what our systems are doing.  This will require more sophisticated types of modeling.
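As a hedged illustration of why intersections matter more than any single indicator, the sketch below combines consecutive IMS slips with a cumulative cost index into a single flag; the second indicator, the thresholds, and the field names are assumptions for illustration only, not a prescribed rule:

```python
# Sketch: a risk flag that only turns red when indicators intersect.
# Thresholds and metric names are hypothetical illustrations.
def risk_flag(consecutive_ims_slips: int, cumulative_cpi: float) -> str:
    slipping = consecutive_ims_slips >= 3   # assumed threshold
    overrunning = cumulative_cpi < 0.95     # assumed threshold
    if slipping and overrunning:
        return "red"      # intersection: schedule and cost degrading together
    if slipping or overrunning:
        return "yellow"   # a single indicator tripped
    return "green"

print(risk_flag(4, 0.92))  # -> red
print(risk_flag(4, 1.02))  # -> yellow
```

Viewed one column at a time, either condition alone averages out to a caution at most; it is the intersection that signals risk manifestation.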

Over at AITS.org — Red Queen Race: Why Fast Tracking a Project is Not in Your Control

“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else—if you run very fast for a long time, as we’ve been doing.”

"A slow sort of country!" said the Queen. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!" – Through the Looking-Glass and What Alice Found There, Chapter 2, Lewis Carroll

There have been a number of high profile examples in the news over the last two years concerning project management.  For example, the initial rollout of the Affordable Care Act marketplace web portal was one of these, and the causes for its faults are still being discussed. As I write this, an article in the New York Times indicates that the fast track efforts to create an Ebola vaccine are faltering…

To read the remainder of this post, please go to this link.

Out of Winter Woodshedding — Thinking about Project Risk and passing the “So What?” test

"Woodshedding" is a slang term in music, particularly in relation to jazz, for practicing on an instrument, usually outside of public performance, in order to explore new musical insights without critical judgment.  This can be done with or without the participation of other musicians.  For example, much attention has recently been given to the release of Bob Dylan's Basement Tapes.  It is unusual to bother recording such music, given its purpose of improvisation and exploration, and so few additional examples of "basement tapes" exist from other notable artists.

So for me the holiday is a sort of opportunity to do some woodshedding.  The next step is to vet such thoughts in informal media, such as this blog, which allows for the informal dialogue and exchange of information that the high standards of white papers and professional papers do not, since these thoughts are not yet fully formed and defensible.  My latest mental romps have been inspired by the movie about Alan Turing–The Imitation Game–and the British series The Bletchley Circle.  Thinking about one of the fathers of modern computing reminded me that the first use of the term "computer" referred to people.

As a matter of fact, though the terminology now refers to the digital devices that have insinuated themselves into every part of our lives, people continue to act as computers.  Despite fantastical fears surrounding AI taking our jobs and taking over the world, we are far from the singularity.  Our digital devices can only be programmed to go so far.  The so-called heuristics in computing today are still hard-wired functions, similar to replicating the methods used by a good con artist in "reading" the audience or the mark.  With the new technology for dealing with big data we have the ability to apply many of the methods originated by the people of the real-life Bletchley Park during the Second World War.  Still, even with refinements and advances in the math, these methods provide great external information regarding the patterns and probable actions of the objects of the data, but very little insight into the internal cause-and-effect that creates the data, which still requires human intervention, computation, empathy, and insight.

Thus, my latest woodshedding has involved thinking about project risk.  The reason for this is the emphasis recently on the use of simulated Monte Carlo analysis in project management, usually focused on the time-phased schedule.  Cost is also sometimes included in this discussion as a function of resources assigned to the time-phased plan, though the fatal error in this approach is to fail to understand that technical achievement and financial value analysis are separate functions that require a bit more computation.

It is useful to understand the original purpose of simulated Monte Carlo analysis.  Nobel physicist Murray Gell-Mann, while working at RAND Corporation (Research and No Development), came up with the method with a team of other physicists (Jess Marcum and Keith Breuckner) to determine the probability of a number coming up from a set of seemingly random numbers.  For a full rendering of the theory and its proof, Gell-Mann provides a good overview in his book The Quark and the Jaguar.  The insight derived from Monte Carlo computation has been to show that systems in the universe often organize themselves into patterns.  Instead of some event being probable by chance, we find that, given all of the events that have occurred to date, there is some determinism which will yield regularities that can be tracked and predicted.  Thus, the use of simulated Monte Carlo analysis in our nether world of project management, which inhabits that void between microeconomics and business economics, provides us with some transient predictive probabilities–given the information stream at that particular time–of the risks that have manifested and are influencing the project.
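For readers who have not seen the mechanics, here is a minimal sketch of a simulated Monte Carlo run over three sequential tasks; the triangular duration ranges and the number of trials are invented for illustration and imply nothing about any particular project:

```python
# Sketch: Monte Carlo simulation of total duration for three sequential tasks.
# The (low, most likely, high) day estimates are purely illustrative.
import random

tasks = [(8, 10, 15), (20, 25, 40), (5, 6, 9)]  # hypothetical three-point estimates
trials = 10_000

totals = sorted(
    sum(random.triangular(low, high, likely) for low, likely, high in tasks)
    for _ in range(trials)
)
p50 = totals[int(0.50 * trials)]
p80 = totals[int(0.80 * trials)]
print(f"P50 ~ {p50:.1f} days, P80 ~ {p80:.1f} days")
```

The spread between the P50 and P80 results is one transient, time-dependent read on the schedule risk that has manifested so far; rerun with next month's information stream, the answer changes.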

What the use of Monte Carlo and other such methods in identifying regularities does not do is determine cause-and-effect.  We attempt to bridge this deficiency with qualitative risk analysis, in which we articulate risk factors to be handled that are then tied to cost and schedule artifacts.  This is good as far as it goes.  But it seems that we have some of this backward.  Oftentimes, despite the application of these systems to project management, we still fail to overcome the risks inherent in the project, which then require a redefinition of project goals.  We often attribute these failures to personnel systems, and there is no shortage of consultants all too willing to sell the latest secret answer to project success.  Yet, despite years of such consulting methods applied to many of the same organizations, there is still a fairly consistent rate of failure in properly identifying cause-and-effect.

Cause-and-effect is the purpose of all of our metrics.  Only by properly "computing" cause-and-effect will we pass the "So What?" test.  Our first forays into this area involve modeling.  Given enough data we can model our systems and, when the real-time results of our in-time experiments play out to approximate what actually happens, we know that our models are true.  Both economists and physicists (well, the best ones) use the modeling method.  This allows us to get the answer even without entirely understanding the question of the internal workings that lead to the final result.  As in Douglas Adams' answer to the secret of life, the universe, and everything, where the answer is "42," we can at least work backwards.  And oftentimes this is all we are left with, which explains the high rate of failure over time.

While I was pondering this reality I came across an article in Quanta magazine outlining the important new work of the MIT physicist Jeremy England, entitled "A New Physics Theory of Life."  From the perspective of evolutionary biology, this pretty much shows that not only does the Second Law of Thermodynamics support the existence and evolution of life (which we've known as far back as Schrödinger), but it probably makes life inevitable under a host of conditions.  In relation to project management and risk, it was this passage that struck me most forcefully:

“Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.”
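The ratio described in this passage can be written compactly.  As a hedged paraphrase of the Crooks fluctuation relation (stated here in units where Boltzmann's constant is folded into the entropy term), it reads:

```latex
% Forward vs. reverse process probabilities related to entropy production
\frac{P_{\text{forward}}}{P_{\text{reverse}}} = e^{\Delta S}
```

As the entropy produced, ΔS, grows, the ratio grows exponentially and the process becomes, for practical purposes, irreversible.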

No project is a closed system (just as, on a larger level, the earth is not).  The level of entropy in the system will vary with the external inputs that change it: effort, resources, and technical expertise.  As I have written previously (and somewhat controversially), there is both chaos and determinism in our systems.  An individual or a system of individuals can adapt to the conditions in which they are placed, but only to a certain level.  The probability is non-zero that an individual or system of individuals can largely overcome the risks realized to date, but it is vanishingly small.  The chance that a peasant will become a president is of the same order.  The idea that it is possible, even if vanishingly so, keeps the class of peasants in line so that those born with privilege can continue to reassuringly pretend that their success is more than mathematics.

When we measure risk what we are measuring is the amount of entropy in the system that we need to handle, or overcome.  We do this by borrowing energy in the form of resources of some kind from other, external systems.  The conditions in which we operate may be ideal or less than ideal.

What England's work, combined with that of his predecessors, seems to suggest is that the Second Law almost makes life inevitable except where it is impossible.  For astrophysics this makes the entire Rare Earth hypothesis a non sequitur.  That is, wherever life can develop it will develop.  The life that does develop is fit for its environment and continues to evolve as changes to the environment occur.  Thus, new forms of organization and structure are found in otherwise chaotic systems as a natural outgrowth of entropy.

Similarly, when we look at more cohesive and less complex systems, such as projects, what we find are systems that adapt and are fit for the environments in which they are conceived.  This insight is not new and has been observed for organizations using more mundane tools, such as Deming's red bead experiment.  Scientifically, however, we now have insight into the means of determining the limitations of success, given the risk and entropy that has already been realized, against the resources needed to bring the project within acceptable ranges of success.  This information goes beyond simply stating the problem and leaving the computing to the person, and thus passes the "So What?" test.

Mo’Better Risk — Tournaments and Games of Failure Part II

My last post discussed economic tournaments and games of failure and how they describe the success and failure of companies, with a comic example for IT start-up companies.  Glen Alleman at his Herding Cats blog has a more serious response in handily rebutting those who believe that #NoEstimates, Lean, Agile, and other cult-like fads can overcome the bottom line, that is, apply a method to reduce inherent risk and drive success.  As Glen writes:

“It’s about the money. It’s always about the money. Many want it to be about them or their colleagues, or the work environment, or the learning opportunities, or the self actualization.” — Glen Alleman, Herding Cats

Perfectly good products and companies fail all the time.  Oftentimes the best products fail to win the market, or do so only fleetingly.  Just think of the rolls of the dead (or walking dead) over the years:  Novell, WordPerfect, Visicalc, Harvard Graphics; the list can go on and on.  Thus, one point on which I would deviate from Glen is that it is not always EBITDA.  If that were true then both Facebook and Amazon would not be around today.  We see tremendous payouts to companies with promising technologies acquired for outrageous sums of money, though they have yet to make a profit.  But for every one of these there are many others that see the light of day for a moment and then flicker out of existence.

So what is going on, and how does this inform our knowledge of project management?  The measure of our success is time and money in most cases, though obviously not all cases.  I've given two cases of success that appeared to be failure in previous posts to this blog: the M1A1 Tank and the ACA.  The reason these "failures" were misdiagnosed was that the agreed measure(s) of success were incorrect.  Knowing this difference, and where and how it applies, is important.

So how do tournaments and games of failure play a role in project management?  I submit that the lesson learned from these observations is that certain types of behaviors are encouraged that tend to "bake" certain risks into our projects.  In high tech we know that there will be a thousand failures for every success, but it is important to keep the players playing–at least it is in the interest of the acquiring organization to do so, and it is in the public interest in many cases as well.  We also know that most IT projects by most measures–both contracted out and organic–tend to realize a high rate of failure.  But if you win an important contract or secure an important project, the rewards can be significant.

The behavior that is reinforced in this scenario on the part of the competing organization is to underestimate the cost and time involved in the effort; that is, the so-called "bid to win."  On the acquiring organization's part, contracting officers lately have been all too happy to award contracts they know to be priced too low (and normally out of the competitive range), even though they realize the price to be significantly below the independent estimate.  Thus "buying in" introduces a significant risk that is hard to overcome.

Other behaviors that we see given the project ecosystem are the bias toward optimism and requirements instability.

In the first case, bias toward optimism, we often hear project and program managers dismiss bad news because it is "looking in the rear view mirror."  We are "exploring," we are told, and so the end state will not be dictated by history.  We often hear a version of this meme in cases where those in power wish to avoid accountability.  "Mistakes were made" and "we are focused on the future" are attempts to change the subject and avoid the reckoning that will come.  In most cases, however, particularly in project management, the motivations are not dishonest but, instead, sociological and psychological.  People who tend to build things–engineers in general, software coders, designers, etc.–tend to be an optimistic lot.  In very few cases will you find one of them who will refuse to take on a challenge.  How many times have we presented a challenge to someone with these traits and heard the refrain: "I can do that"?  This form of self-delusion can be both an asset and a risk.  Who but an optimist would take on any technically challenging project?  But this is also the trait that will keep people working to the bitter end in a failure that places the entire enterprise at risk.

I have already spent some bits in previous posts regarding the instability of requirements, but this is part and parcel of the traits that we see within this framework.  Our end users determine that given how things are going we really need additional functionality, features, or improvements prior to the product roll out.  Our technical personnel will determine that for “just a bit more effort” they can achieve a higher level of performance or add capabilities at marginal or tradeoff cost.  In many cases, given the realization that the acquisition was a buy-in, project and program managers allow great latitude in accepting as a change an item that was assumed to be in the original scope.

There is a point where one or more of these factors is "baked" into the course that the project will take.  We can delude ourselves into believing that we can change the trajectory of the system through the application of methods–Agile, Lean, Six Sigma, PMBOK, etc.–but, in the end, if we exhaust our resources without a road map on how to do this, we will fail.  Our systems must be powerful and discrete enough to note the trend that is "baked in" due to factors in the structure and architecture of the effort being undertaken.  This is the core risk that must be managed in any undertaking.  A good example that applies to a complex topic like Global Warming was recently illustrated by Neil deGrasse Tyson in the series Cosmos:

In this example Dr. Tyson represents climate and the dog represents the weather.  But in our own analogy Dr. Tyson can be the trajectory of the system, with the dog representing the "noise" of periodic indicators and activity around the effort.  We often spend a lot of time and effort (which I would argue is largely unproductive) on influencing these transient conditions in simpler systems rather than on the core inertia of the system itself.  That is where the risk lies.  Thus, not all indicators are the same.  Some measure transient anomalies that have nothing to do with changing the core direction of the system, while others are more valuable.  These latter indicators are the ones that we need to cultivate and develop, and they reside in an initial measurement of the inherent risk of the system, largely based on its architecture, that is antecedent to the start of the work.
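One hedged way to make the climate-versus-dog distinction concrete with project data is to compare a noisy period-by-period measure against its cumulative counterpart; the monthly earned value and actual cost figures below are invented solely to show the contrast:

```python
# Sketch: period CPI (the "dog") vs. cumulative CPI (the "walker").
# The monthly (earned value, actual cost) pairs are hypothetical.
periods = [(100, 95), (90, 100), (110, 118), (105, 115), (95, 108)]

cum_ev = cum_ac = 0
for month, (ev, ac) in enumerate(periods, start=1):
    cum_ev += ev
    cum_ac += ac
    print(f"month {month}: period CPI={ev/ac:.2f}  cumulative CPI={cum_ev/cum_ac:.2f}")
```

The period values jump around from month to month; the cumulative value drifts slowly and is the better read on the trajectory that is already "baked in."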

This is not to say that we can do nothing about the trajectory.  A simpler system can be influenced more easily.  We cannot recover the effort already expended, but even historical indicators are important because they inform our future expectations and, if we pay attention to them, keep us grounded in reality.  Even in the case of Global Warming we can change, though gradually, what will be a disastrous result if we allow things to continue on their present course.  In a deterministic universe we can influence the outcomes based on the contingent probabilities presented to us over time.  Thus, we will know whether we have handled the core risk of the system by focusing on these better indicators as the effort progresses.  This will affect its trajectory.

Of course, a more direct way of modifying these risks is to make systemic adjustments.  Do we really need a tournament-based system as it exists and is the waste inherent in accepting so much failure really necessary?  What would that alternative look like?

I Can’t Get No (Satisfaction) — When Software Tools Go Bad

Another article I came across a couple of weeks ago that my schedule prevented me from highlighting was by Michelle Symonds at PM Hut entitled “5 Tell-Tale Signs That You Need a Better Project Management Tool.”  According to Ms. Symonds, among these signs are:

a.  Additional tools are needed to achieve the intended functionality apart from the core application;

b.  Technical support is poor or nonexistent;

c.  Personnel in the organization still rely on spreadsheets to extend the functionality of the application;

d.  Training on the tool takes more time than training for the job itself;

e.  The software tool adds work instead of augmenting or facilitating the achievement of work.

I have seen situations where all of these conditions are at work but the response, in too many cases, has been “well we put so much money into XYZ tool with workarounds and ‘bolt-ons’ that it will be too expensive/disruptive to change.”  As we have advanced past the first phases of digitization of data, it seems that we are experiencing a period where older systems do not quite match up with current needs, but that software manufacturers are very good at making their products “sticky,” even when their upgrades and enhancements are window dressing at best.

In addition, the project management community, particularly that focused on large projects in excess of $20M, is facing the challenge of an increasingly older workforce.  Larger economic forces at play lately have exacerbated this condition.  Weak aggregate demand and, on the public side, austerity ideology combined with sequestration have created a situation where highly qualified people face a job market characterized by relatively high unemployment, flat wages and salaries, depleted private retirement funds, and constant attacks on social insurance related to retirement.  Thus, people are hanging around longer, which limits opportunities for newer workers to grow into the discipline.  Given these conditions, we find that it is very risky to one's employment prospects to suddenly forge a new path.  People in the industry that I have known for many years–and who were always the first to engage with new technologies and capabilities–are now very hesitant to do so.  Some of this is well founded through experience and consists of healthy skepticism: we all have come across snake oil salesmen in our dealings at one time or another, and even the best products do not always make it due to external forces or the fact that brilliant technical people oftentimes are just not very good at business.

But these conditions also tend to hold back the ability of the enterprise to implement efficiencies and optimization measures that otherwise would be augmented and supported by appropriate technology.  Thus, in addition to those listed by Ms. Symonds, I would include the following criteria to use in making the decision to move to a better technology:

a.  Sunk and prospective costs.  Understand and apply the concepts of sunk cost and prospective cost.  The former is the cost that has been expended in the past, while the latter focuses on the investment necessary for future growth, efficiencies, productivity, and optimization.  Having made investments to improve a product in the past is not an argument for continuing to invest in the product in the future that trumps other factors.  Obviously, if the cash flow is not there an organization is going to be limited in the capital and other improvements it can make but, absent those considerations, sunk cost arguments are invalid.  It is important to invest in those future products that will facilitate the organization achieving its goals in the next five or ten years.
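As a worked toy comparison of this principle, with all figures invented, a keep-versus-replace decision would look only at the forward-looking numbers:

```python
# Sketch: keep-vs-replace decision that ignores sunk cost. All figures invented.
sunk_investment = 2_000_000        # already spent on the legacy tool; irrelevant
keep_forward_cost = 1_200_000      # projected workarounds, bolt-ons, reconciliation labor
keep_forward_benefit = 1_500_000
replace_forward_cost = 900_000     # projected licenses, migration, training
replace_forward_benefit = 2_100_000

keep_net = keep_forward_benefit - keep_forward_cost            # 300,000
replace_net = replace_forward_benefit - replace_forward_cost   # 1,200,000
print("replace" if replace_net > keep_net else "keep")
```

The sunk investment never enters the comparison; only prospective costs and benefits do.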

b.  Sustainability.  The effective life of the product must be understood, particularly as it applies to an organization's needs.  Some of this overlaps the points made by Ms. Symonds in her article but is meant to apply in a more strategic way.  Every product, even software, has a limited productive life, but my concept here goes to what Glen Alleman pointed out in his blog as "bounded applicability."  Will the product require more effort in any form where the additional effort provides a diminishing return?  For example, I have seen cases where software manufacturers, in order to defend market share, make trivial enhancements such as adding a chart or graph in order to placate customer demands.  The reason for this should be, but is not always, obvious.  Oftentimes more substantive changes cannot be made because the product was built on an earlier-generation operating environment or structure.  Thus, in order to replicate the additional functionality found in newer products, the application requires a complete rewrite.  All of us operating in this industry have seen this: a product that has been a mainstay for many years begins to lose market share.  The decision, when it is finally made, is to totally reengineer the solution, but not as an upgrade to the original product, arguing that it is a "new" product.  This is true in terms of the effort necessary to keep the solution viable, but that then also completely undermines justifications based on sunk costs.

c.  Flexibility.  As stated previously in this blog, the first generation of digitization mimicked those functions that were previously performed manually.  The applications were also segmented and specialized based on traditional line and staff organizations, and specialties.  Thus, for project management, we have scheduling applications for the scheduling discipline (such as it is), earned value engines for the EV discipline, risk and technical performance applications for risk specialists and systems engineers, analytical software for project and program analysts, and financial management applications that subsumed the work of project and program financial management professionals.  This led to the deployment of so-called best-of-breed configurations, where a smorgasbord of applications or modules was acquired to meet the requirements of the organization.  Most often these applications had, and have, no direct compatibility, requiring entire staffs to reconcile data after the fact once that data was imported into a proprietary format in which it could be handled.  Even within so-called ERP environments under one company, direct compatibility at the appropriate level of the data being handled escaped the ability of the software manufacturers, requiring "bolt-ons" and other workarounds and third-party solutions.  This condition undermines sustainability, adds a level of complexity that is hard to overcome, and adds a layer of cost to the life-cycle of the solutions being deployed.

The second wave to address some of these limitations focused on data flexibility using cubes, hard-coding of relational data and mapping, and data mining solutions: so-called Project Portfolio Management (PPM) and Business Intelligence (BI).  The problem is that, in the first instance, PPM is simply another layer to address management concerns, while early BI systems froze single points of failure in time within hard-coded deployed solutions.

A flexible system is one that leverages the new advances in software operating environments to solve more than one problem.  This, of course, undermines the financial returns in software, where the pattern has been to build one solution to address one problem based on a specialty.  Such a system provides internal flexibility, that is, allows for the application of objects and conditional formatting without hardcoding, pushing what previously had to be accomplished by coders to the customer’s administrator or user level; and external flexibility, where the same application can address, say, EVM, schedule, risk, financial management, KPIs, technical performance, stakeholder reporting, all in the same or in multiple deployed environments without the need for hardcoding.  In this case the operating environment and any augmented code provides a flexible environment to the customer that allows one solution to displace multiple “best-of-breed” applications.

This flexibility should apply not only vertically but also horizontally, where data can be hierarchically organized to allow not only for drill-down, but also for roll-up.  Data in this environment is exposed discretely, providing to any particular user that data, aggregated as appropriate, based on their role, responsibility, or need to know.
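A minimal sketch of the drill-down/roll-up idea with role-based exposure follows; the roles, hierarchy depths, and figures are hypothetical and only illustrate the pattern:

```python
# Sketch: one detailed dataset exposed at different aggregation levels by role.
# WBS paths, costs, and the role-to-depth mapping are illustrative only.
records = [("1.1.1", 120), ("1.1.2", 80), ("1.2.1", 200)]  # (WBS path, cost)
role_depth = {"executive": 1, "control_account_manager": 2, "analyst": 3}

def view(role: str) -> dict:
    depth = role_depth[role]
    rolled = {}
    for path, cost in records:
        key = ".".join(path.split(".")[:depth])  # truncate path to the role's level
        rolled[key] = rolled.get(key, 0) + cost
    return rolled

print(view("executive"))                # {'1': 400}
print(view("control_account_manager"))  # {'1.1': 200, '1.2': 200}
print(view("analyst"))                  # full work package detail
```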

d.  Interoperability and open compatibility.  A condition of the "best-of-breed" deployment environment is that it allows sub-optimization to trump organizational goals.  The most recent example that I have seen of this is one where the Integrated Master Schedule (IMS) and Performance Measurement Baseline (PMB) were obviously authored by different teams in different locations and, most likely, teams that were at war with one another when they published these essential, interdependent project management artifacts.

But in terms of sustainability, the absence of interoperability and open compatibility has created untenable situations.  In the example of PMB and IMS information above, in many cases a team of personnel must be engaged every month to reconcile the obvious disconnectedness of schedule activities to control accounts in order to ensure traceability in project management and performance.  Surely there should be no economic rewards for such behavior; indeed, I believe that no business would perform in that manner absent those rewards.

Thus, interoperability in this case means being able to deal with data in its native format, without proprietary barriers that prevent its full use and exploitation to meet the needs and demands of the customer organization.  Software that places its customers in a corner and ties their hands in using their own business information has, indeed, worn out its welcome.

The reaction of customer organizations to the software industry’s attempts to bind them to proprietary solutions has been most marked in the public sector, and most prominently in the U.S. Department of Defense.  In the late 1990s the first wave was to ensure that performance management data centered around earned value was submitted in a non-proprietary format known as the ANSI X12 839 transaction set.  Since that time DoD has specified the use of the UN/CEFACT XML D09B standard for cost and schedule information, and it appears that other, previously stove-piped data will be included in that standard in the future.  This solution requires data transfer, but it is one that ensures that the underlying data can be normalized regardless of the underlying source application.  It is especially useful for stakeholder reporting situations or data sharing in prime and sub-contractor relationships.
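By way of illustration only, normalizing submitted XML into neutral records might look like the sketch below; the element and attribute names are hypothetical stand-ins and are not taken from the actual UN/CEFACT or ANSI X12 schemas:

```python
# Sketch: normalize cost/schedule XML into neutral records. The tag and
# attribute names are hypothetical, not the real UN/CEFACT elements.
import xml.etree.ElementTree as ET

sample = """
<submission>
  <controlAccount id="CA-1.1">
    <period end="2015-01-31" bcws="100" bcwp="95" acwp="105"/>
    <period end="2015-02-28" bcws="110" bcwp="100" acwp="112"/>
  </controlAccount>
</submission>
"""

rows = []
for ca in ET.fromstring(sample).iter("controlAccount"):
    for p in ca.iter("period"):
        rows.append({
            "control_account": ca.get("id"),
            "period_end": p.get("end"),
            "bcws": float(p.get("bcws")),
            "bcwp": float(p.get("bcwp")),
            "acwp": float(p.get("acwp")),
        })
print(rows[0])
```

Once in this neutral form, the same records can be compared and aggregated regardless of which source application produced the submission.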

It is also useful for pushing for improvement in the disciplines themselves, driving professionalism.  For example, in today's project management environment, while the underlying architecture of earned value management and risk data is fairly standard, reflecting a cohesiveness of practice among its practitioners, schedule data tends to be disorganized, with much variability in how common elements are kept and reported.  This also reflects the state of the scheduling discipline, where an almost "anything goes" mentality seems to be in play, driven not so much by the realities of scheduling practice–which are pretty well established and uniform–as by the lack of knowledge and professionalism on the part of schedulers, who are tied to the limitations and vagaries of their scheduling application of choice.

But, more directly, interoperability also includes the ability to access data (as opposed to application interfacing, data mining, hard-coded Cubes, and data transfer) regardless of the underlying database, application, and structured data source.  Early attempts to achieve interoperability and open compatibility utilized ODBC but newer operating environments now leverage improved OLE DB and other enhanced methods.  This ability, properly designed, also allows for the deployment of transactional environments, in which two-way communication is possible.
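As a hedged sketch of what direct access looks like in practice, the snippet below reads schedule activities over ODBC using the pyodbc package; the data source name, table, and column names are invented for illustration:

```python
# Sketch: read schedule activities directly over ODBC rather than through
# file export/import. DSN, table, and column names are invented.
import pyodbc

conn = pyodbc.connect("DSN=ProjectDataSource")  # hypothetical data source name
cursor = conn.cursor()
cursor.execute(
    "SELECT activity_id, early_start, early_finish FROM schedule_activities"
)
for activity_id, early_start, early_finish in cursor.fetchall():
    print(activity_id, early_start, early_finish)
conn.close()
```

The same pattern, pointed at a different underlying database, returns the same normalized view of the data, which is the practical meaning of interoperability at the data level.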

A new reality.  Thus, given these new capabilities, I think that we are entering a new phase in software design and deployment, where the role of the coder in controlling the UI is reduced.  In addition, given that the large software companies have continued to support a system that ties customers to proprietary solutions, I do not believe that the future of software is in open source, as so many prognosticators stated just a few short years ago.  Instead, I propose that applications that behave like open source, but whose makers innovate and provide maximum value, sustainability, flexibility, and interoperability to the customer, are those that will be rewarded for their efforts.

Note:  This post was edited for clarity and grammatical errors from the original.

 


Gimme All Your (Money) — Agile and the Intrinsic Evil of #NoEstimates

Over the years I've served as a project, acquisition, and contracts specialist in both public service and private industry.  Most of those assignments involved the introduction of digital technology, from the earliest days of what were called mini-computers, through the introduction of the PC, to the various digital devices, robotics, and artificial intelligence that we use today.

A joke I often encountered over the years was that if you asked a software programmer what his solution could do, the response all too often was: "what would you like it to do?"  The point of the joke, which has more than a grain of truth in it, is that programmers do not live in (or would prefer not to live in) the world of finite resources, and often fall into the trap of excessive optimism.  That this is backed by empirical evidence has been discussed previously in this blog, where over 90% of software projects in both private industry and public organizations either fail outright or fail to meet expectations.  This pattern of failure is pervasive regardless of the method of development used: waterfall, spiral, or–the latest rage–Agile.

Agile is a break from the principles of scientific management upon which previous methodologies were based.  As such, it learns no lessons from the past, much as a narcissist rejects the contributions of others.  It is not that all of the ideas espoused in the original Agile manifesto in 2001–or those since–are necessarily invalid or may not be good ideas for modifying and improving previous practices; it is that they are based on a declaration without attribution to evidence.  As such, Agile has all of the markings of a cult: an ideology of management that brooks no deviation and is resistant to evidence.  Faced with contrary evidence, the reaction is to double down and push the envelope further.

The latest example of this penchant is by Neil Killick in his post "Beyond #NoEstimates — Why the traditional software contract must die."  It is worth a read but, in the end, the thrust of the post is to state that contracts enforce accountability and the followers of the Agile Cult don't want that because, well, there is all of that planning, scheduling, budgeting, and reporting that gets in the way of delivering "value."  The flaws in the prescriptions of the Cult, particularly its latest #NoEstimates offshoot, have been adequately and thoughtfully documented by many well-respected practitioners of the art of project management, such as Dave Gordon and others, and I will not revisit them here.  Instead, I will focus on Mr. Killick's article in question.

First, Mr. Killick argues that "value" cannot be derived from the plethora of "traditional" software contracts.  His ignorance of contracting is most clear here, for he doesn't define his terms.  What is a "traditional" software contract?  As a former contract negotiator and contracting officer, I find nothing called a "traditional" software contract in the contracting lexicon.  There are firm fixed-price contracts, cost-plus type contracts, time and materials/labor hour contracts, etc., but no "traditional" contracts.  For developmental efforts some variation of the cost-plus contract is usually appropriate, but the contract type and structure must be such that it provides sufficient incentives for "value" that exceeds the basic requirements of the customer, given the type of effort, the risk involved, and the resource and time constraints of the effort.  The scope can be defined by specific line item specifications or a performance specification.  Thus, contrary to the impression left in the post, quite a bit of freedom is allowed within a contract, and R&D projects under various contract types have been succeeding for quite a long time.  In addition, the use of the term "traditional" seems to have a certain dog-whistle quality about it for the Cult, with its use going back to the original manifesto.  This, then, at least to those recognizing the whistle, is a loaded word that leads to an argument that assumes its conclusion: that such contracts lead to poor results, which is not true (assuming a firm definition of "traditional" could be provided) and against which there is sufficient evidence.

Second, another of Mr. Killick's assumptions is that "traditional" contracts (whatever those are) start from a position of distrust.  In his words: "Working agreements that embrace 'Here's what you must deliver or we'll sue you'" (sic).  Once again Mr. Killick demonstrates his ignorance.  The comments and discussion at the end of his post reinforce a narrative that it's all the lawyers' doing.  I do have a number of friends who are attorneys, and my contempt for the frequent excesses of the legal profession is well known to them.  But there is a difference between a contract specialist and a lawyer, and it is best summed up in a concept and an anecdote.

The concept is the basic description of a contract which, at its most simple definition, is a promise for a promise.  Usually this takes the form of a promise to perform in return for a promise to pay, since the promise must be sufficient and involve consideration of value in order to establish a contract.  It is not a promise to pay based on a contingent lack of a promise to perform unless, of course, the software developer is willing to allow the contingent nature of the promise to work both ways.  That is, we’ll allow you to expend effort to try to satisfy our needs and we’ll pay you if it is of value at a price that we think your product is worth.  It is not a contract in that case but, at least, both parties know their risks.  The promise for a promise–the rise of the concept of the contract–is in many ways the basis for civilization.  Its rise coincided with and reinforced civil society, and established ground rules for the conduct of human affairs that replaced the contingent nature of relationships between individuals.  Without such ground rules, trust is not possible.

The anecdote explains the aim of a contract and why it is not a lawyer's game.  This aim was explained to me by one of my closest friends, who is an attorney.  He said: "the difference between you, a contract negotiator, and me, an attorney, is that when I come out of the room I know I have done my job when all of the parties are unhappy.  You know you have done your job when all of the parties come out of the room happy."  Thus, Mr. Killick gets contracting backwards.  The basis of this insightful perspective lies in the different roles of an attorney and a contract negotiator.  An attorney is trained and educated to vehemently defend the interests of the client.  The attorney realizes that he or she is engaged in a zero-sum game.  The clash of attorneys on opposing sides will usually result in an outcome where neither side feels fully satisfied.  The aim of the contract negotiator (at least the most successful and effective ones) is to determine the goals and acceptable terms for both parties and to find the common ground so that the relationship will proceed under an atmosphere of trust and cooperation.

The most common contract in which many parties engage is the marriage contract.  Such an arrangement can be viewed as an unfortunate obligation that hinders creativeness and acceptance of change, one established by lawyers to enforce the terms of the agreement or else.  But many find that it is a basis for trust and stability, where growth and change are fostered rather than hindered.  In real life, of course, this is a false dilemma.  For most people the arrangement runs the gamut between these perspectives and outside of them to divorce, the ultimate result of a poor or mismatched contract.

For project management in general, and software project management in particular, the core arguments of Agile via #NoEstimates are an implicit evil because they undermine the essential relationships between the parties.  This is done through specialized jargon that is designed to obfuscate, the contingent nature of the obligation underlying its principles, and the lack of clear reasoning that forms the basis for its rebellion against planning, estimating, and accountability.  Rather than fostering an atmosphere of trust, it is an attempt by software developers to tip the balance in the project and contract management relationship in their favor, particularly in cases of external customer relationships.  This condition undermines trust and reinforces the most common software project dysfunctions, such as the loss of requirements discipline, shifting scope, rubber baselines, and cost overruns.  In other words, for software projects, just more of the same.

Note: Grammatical corrections were made from the original.