Mo’Better Risk — Tournaments and Games of Failure Part II

My last post discussed economic tournaments and games of failure and how they describe the success and failure of companies, with a comic example for IT start-up companies.  Glen Alleman at his Herding Cats blog has a more serious response, handily rebutting those who believe that #NoEstimates, Lean, Agile, and other cult-like fads can overcome the bottom line, that is, that applying a method can reduce inherent risk and drive success.  As Glen writes:

“It’s about the money. It’s always about the money. Many want it to be about them or their colleagues, or the work environment, or the learning opportunities, or the self actualization.” — Glen Alleman, Herding Cats

Perfectly good products and companies fail all the time.  Oftentimes the best products fail to win the market, or do so only fleetingly.  Just think of the rolls of the dead (or walking dead) over the years:  Novell, WordPerfect, Visicalc, Harvard Graphics; the list can go on and on.  Thus, one point on which I would deviate from Glen is that it is not always about EBITDA.  If that were true, then neither Facebook nor Amazon would be around today.  We see tremendous payouts to companies with promising technologies acquired for outrageous sums of money, though they have yet to make a profit.  But for every one of these there are many others that see the light of day for a moment and then flicker out of existence.

So what is going on, and how does this inform our knowledge of project management?  The measure of our success is time and money in most cases, though obviously not all.  I’ve given two cases of success that appeared to be failure in previous posts to this blog: the M1A1 Tank and the ACA.  The reason these “failures” were misdiagnosed was that the agreed measures of success were incorrect.  Knowing this difference, and where and how it applies, is important.

So how do tournaments and games of failure play a role in project management?  I submit that the lesson from these observations is that certain types of behavior are encouraged that tend to “bake” certain risks into our projects.  In high tech we know that there will be a thousand failures for every success, but it is important to keep the players playing–at least it is in the interest of the acquiring organization to do so, and it is in the public interest in many cases as well.  We also know that most IT projects by most measures–both contracted out and organic–tend to realize a high rate of failure.  But if you win an important contract or secure an important project, the rewards can be significant.

The behavior that is reinforced in this scenario on the part of the competing organization is to underestimate the cost and time involved in the effort; that is, the so-called “bid to win.”  On the acquiring organization’s part, contracting officers lately have been all too happy to award contracts they know to be priced too low (and normally outside the competitive range), even when they realize the bid falls significantly below the independent estimate.  Thus “buying in” introduces a significant risk that is hard to overcome.

Other behaviors that we see given the project ecosystem are the bias toward optimism and requirements instability.

In the first case, the bias toward optimism, we often hear project and program managers dismiss bad news because it is “looking in the rear view mirror.”  We are “exploring,” we are told, and so the end state will not be dictated by history.  We often hear a version of this meme in cases where those in power wish to avoid accountability.  “Mistakes were made” and “we are focused on the future” are attempts to change the subject and avoid the reckoning that will come.  In most cases, however, particularly in project management, the motivations are not dishonest but, instead, sociological and psychological.  People who build things–engineers in general, software coders, designers, and the like–tend to be an optimistic lot.  In very few cases will you find one of them who will refuse to take on a challenge.  How many times have we presented a challenge to someone with these traits and heard the refrain, “I can do that”?  This form of self-delusion can be both an asset and a risk.  Who but an optimist would take on any technically challenging project?  But this is also the trait that will keep people working to the bitter end in a failure that places the entire enterprise at risk.

I have already spent some bits in previous posts on the instability of requirements, but this is part and parcel of the traits that we see within this framework.  Our end users determine that, given how things are going, we really need additional functionality, features, or improvements prior to the product rollout.  Our technical personnel determine that for “just a bit more effort” they can achieve a higher level of performance or add capabilities at marginal or tradeoff cost.  In many cases, given the realization that the acquisition was a buy-in, project and program managers allow great latitude in accepting, as a change, work that was assumed to be within the original scope.

There is a point where one or more of these factors is “baked into” the course that the project will take.  We can delude ourselves into believing that we can change the trajectory of the system through the application of methods–Agile, Lean, Six Sigma, PMBOK, etc.–but, in the end, if we exhaust our resources without a road map for how to do this, we will fail.  Our systems must be powerful and discerning enough to detect the trend that is “baked in” due to factors in the structure and architecture of the effort being undertaken.  This is the core risk that must be managed in any undertaking.  A good example that applies to a complex topic like Global Warming was recently illustrated by Neil deGrasse Tyson in the series Cosmos:

In this example Dr. Tyson is climate and the dog is the weather.  But in our own analogy Dr. Tyson can be the trajectory of the system, with the dog representing the “noise” of periodic indicators and activity around the effort.  We often spend a lot of time and effort (which I would argue is largely unproductive) on influencing these transient conditions in simpler systems rather than on the core inertia of the system itself.  That is where the risk lies.  Thus, not all indicators are the same.  Some measure transient anomalies that have nothing to do with changing the core direction of the system; others are more valuable.  These latter indicators are the ones that we need to cultivate and develop, and they reside in an initial measurement of the inherent risk of the system, largely based on its architecture, that is antecedent to the start of the work.
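
To make the distinction concrete, below is a minimal sketch in Python of separating an effort’s underlying trajectory from the transient noise around it.  The indicator (a cost performance index), the numbers, the window size, and the threshold are all invented for illustration; this is a sketch of the idea, not a prescribed measurement method.

```python
# Illustrative sketch: separate a project's underlying trajectory ("the walker")
# from the transient noise of periodic indicators ("the dog").
# All figures are invented for illustration.

def moving_average(series, window=3):
    """Trailing moving average, used here as a crude stand-in for the core trend."""
    return [
        sum(series[max(0, i - window + 1): i + 1]) / (i - max(0, i - window + 1) + 1)
        for i in range(len(series))
    ]

# Monthly cost performance index (CPI) observations for a hypothetical project.
cpi_observations = [0.98, 0.95, 1.01, 0.93, 0.91, 0.94, 0.89, 0.88, 0.90, 0.86]

trend = moving_average(cpi_observations)

# A single bad (or good) month is noise; a trend that keeps drifting away from 1.0
# points to risk "baked into" the structure of the effort.
for month, (raw, smoothed) in enumerate(zip(cpi_observations, trend), start=1):
    flag = "  <- trend, not noise" if smoothed < 0.95 else ""
    print(f"Month {month:2d}: raw CPI {raw:.2f}, trend {smoothed:.2f}{flag}")
```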

This is not to say that we can do nothing about the trajectory.  A simpler system can be influenced more easily.  We cannot recover the effort already expended–which is why even historical indicators are important: they inform our future expectations and, if we pay attention to them, they keep us grounded in reality.  Even in the case of Global Warming we can change, though only gradually, what will be a disastrous result if we allow things to continue on their present course.  In a deterministic universe we can influence the outcomes based on the contingent probabilities presented to us over time.  Thus, we will know whether we have handled the core risk of the system by focusing on these better indicators as the effort progresses.  This will affect its trajectory.

Of course, a more direct way of modifying these risks is to make systemic adjustments.  Do we really need a tournament-based system as it exists and is the waste inherent in accepting so much failure really necessary?  What would that alternative look like?

Standing in the Shadow of (Deming) — How does Agile Stack Up? — Part One

I’ve read a few posts across the web over time in which Agile Cult proponents have tried to place the Agile Manifesto on a continuum with Deming.  Given the #NoEstimates drive you would expect someone to cherry-pick a portion of point 3 of his Fourteen Points for Management, that is, “Cease dependence on inspection to achieve quality,” omitting the remainder of the point: “Eliminate the need for inspection on a mass basis by building quality into the product in the first place.”  There are even a few who have attempted to appropriate Deming’s work by redefining the meaning of his systems approach and philosophy.  (A classic symptom of ideologues and cults, but more on that in a later post.)

W. Edwards Deming was by all accounts a brilliant statistician.  In 1927 he met the physicist and statistician Walter A. Shewhart of Bell Telephone Laboratories, who is generally accepted as the originator of statistical quality control.  Shewhart’s work focused on processes and the related technical tool of the control chart.  Among his most important contributions were his rules for the presentation of data, which are:

1.  Data have no meaning apart from their context, and

2.  Data contain both signal and noise.  To be able to extract information, one must separate the signal from the noise within the data (a simple illustration follows below).
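
As a rough illustration of the second rule, here is a bare-bones control-chart sketch in Python.  The measurements, the baseline, and the use of the baseline standard deviation (rather than a proper moving-range estimate) are simplifying assumptions made purely for illustration.

```python
# Bare-bones Shewhart-style control chart: treat points inside the three-sigma
# limits as noise (common-cause variation) and points outside as signal worth
# investigating. All figures are invented for illustration.
from statistics import mean, stdev

# Baseline samples taken while the process was believed to be stable.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]

# Later observations to be judged against the baseline behavior.
new_observations = [10.2, 9.9, 11.4, 10.1, 8.3]

center = mean(baseline)
sigma = stdev(baseline)   # simplified stand-in for a moving-range estimate
ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

for i, value in enumerate(new_observations, start=1):
    if value > ucl or value < lcl:
        verdict = "SIGNAL: likely special cause, investigate"
    else:
        verdict = "noise: common-cause variation"
    print(f"Observation {i}: {value:5.1f}  {verdict}")

print(f"Center line {center:.2f}, limits [{lcl:.2f}, {ucl:.2f}]")
```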

These concepts are extremely important in avoiding the fallacy of reification, in which something abstract is treated as if it were a material or concrete thing.  This is a concept that I have come back to at various times in this blog, particularly as it relates to statistical performance measures such as EVM and risk measurement.  Shewhart’s work in this regard was no doubt influenced by the work of the astronomer, statistician, mathematician, and philosopher Charles Sanders Peirce.

Another important feature of Shewhart’s approach was the development of the Shewhart or Deming (Plan-Do-Study-Act) cycle.  This is:

  • Plan: identify what can be improved and what change is needed
  • Do: implement the design change
  • Study: measure and analyze the process or outcome
  • Act: adopt the change if it worked, or adjust and run the cycle again if the results are not as hoped for (a rough code sketch of this loop follows below)
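
For readers who think in code, the sketch below recasts the cycle as a simple improvement loop in Python.  The functions propose_change, apply_change, revert_change, and measure_defect_rate are hypothetical placeholders, and the loop is only a loose illustration of the cycle’s logic, not an implementation of Deming’s method.

```python
# Loose sketch of the Plan-Do-Study-Act cycle as an iterative improvement loop.
# The four callables are hypothetical placeholders for whatever a real process uses.

def pdsa_loop(measure_defect_rate, propose_change, apply_change, revert_change,
              target_defect_rate, max_iterations=10):
    baseline = measure_defect_rate()
    for _ in range(max_iterations):
        if baseline <= target_defect_rate:
            break                              # good enough: stop improving
        change = propose_change(baseline)      # Plan: identify what to improve
        apply_change(change)                   # Do: implement the change
        result = measure_defect_rate()         # Study: measure the outcome
        if result < baseline:                  # Act: keep the gain...
            baseline = result
        else:                                  # ...or back the change out and try again
            revert_change(change)
    return baseline
```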

Deming’s insight and contribution came when he realized that Shewhart’s methods not only applied to equipment performance but also to both manufacturing and management practices.  He further refined his methods by applying and training U.S. war industry personnel in Statistical Process Control (SPC) during the Second World War.  After the war he served under General MacArthur during the U.S. occupation and rebuilding of Japan as a census consultant to the Japanese government.  He then made his mark in assisting Japanese industry in applying his statistical and management methods to their rebuilding efforts.

To illustrate how different Deming was from Agile and #NoEstimates, it is useful to understand that the purpose of his methods, rooted in empiricism (the accepted definition and not the appropriated ideological Agile definition), was to improve quality and reduce costs.  The formula for this approach is summed up as follows:

Quality = Results of Work Efforts/Total Costs

We have seen the validation of this formula with each generation of technological development, particularly in software and hardware development.  But in order to gauge success and insert quality into the process, we must estimate, measure, inspect, and validate.  These are the elements of the feedback loop that are essential in establishing a quality improvement process.
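
As a toy numerical illustration of the ratio (the figures are invented, not drawn from Deming), the sketch below shows how building quality in up front, and thereby shrinking rework and inspection costs, raises quality as Deming defined it:

```python
# Toy illustration of Deming's ratio: quality = results of work efforts / total costs.
# All figures are invented for illustration.

def deming_quality(work_results_value, total_costs):
    """Value produced per unit of total cost."""
    return work_results_value / total_costs

# Build 1: heavy rework and after-the-fact inspection inflate total costs.
build_1 = deming_quality(work_results_value=1_000_000,
                         total_costs=400_000 + 250_000 + 150_000)  # labor + rework + inspection

# Build 2: quality built in from the start, so rework and inspection shrink.
build_2 = deming_quality(work_results_value=1_000_000,
                         total_costs=400_000 + 60_000 + 40_000)

print(f"Build 1 quality ratio: {build_1:.2f}")  # 1.25
print(f"Build 2 quality ratio: {build_2:.2f}")  # 2.00
```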

As Dave Gordon at the Practicing IT Project Manager blog stated in his post entitled Received Knowledge, Fanaticism, and Software Consultants, in the end, for all of the attempts at special pleading and at misdirecting the assessment of risk in the system through Agile methods, it still all comes down to coding–and coding quality can be measured.  “But this doesn’t drive out fear!” comes the reply from mediocre Agile coders, cherry-picking Deming.  No, it doesn’t–not if you are not creating quality code.  Human Resources Management (HRM), for good and for ill, is a leadership and management responsibility, and nowhere in a full reading of Deming does he eliminate these decisions, given the denominator of cost in the equation of quality.
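
One conventional, admittedly crude, way to put a number on coding quality is defect density.  The sketch below uses invented figures and is only one of many possible measures:

```python
# Defect density (defects per thousand lines of code) as a simple,
# imperfect measure of coding quality. Figures are invented.

def defect_density(defects_found, lines_of_code):
    """Defects per KLOC (thousand lines of code)."""
    return defects_found / (lines_of_code / 1000)

releases = {
    "release_1.0": {"defects_found": 42, "lines_of_code": 35_000},
    "release_1.1": {"defects_found": 18, "lines_of_code": 41_000},
}

for name, stats in releases.items():
    print(f"{name}: {defect_density(**stats):.2f} defects per KLOC")
```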

This is one aspect of IT coding that is slightly different from our other systems: software design and coding is as much an art as a skill.  This is what we mean by elegance when we see well-written code–it coheres as a simple, systemic solution to the problem at hand that maximizes performance by leveraging the internal logic of the language being used and, as such, avoids defects in its current version.  It is also written in such a way that its internal logic, simplicity, and cohesion allow for incremental improvement, and for the avoidance and elimination of defects in future builds.

My first real-life experience with this concept came during my first assignment as a project manager, when I was a young Navy Lieutenant in San Diego.  I had just learned how to code software and had performed at the top of my class in pursuing a degree in software engineering.  This apparently qualified me in the eyes of my superiors to take over a failing program (read: over cost and behind schedule) tasked with building a standard procurement system for the Navy (later the joint procurement system).  In reviewing the reasons for failure it became apparent that the army of systems analysts, technical consultants, and software developers were pulling the effort in conflicting directions.  My first act was to narrow the field to one lead developer and one very good coder, letting the rest find other employment.  I then built out from there a cohesive team (within a realistic span of control) focused on quality and project success, after we had defined what success would look like.  And, yes, we used estimates.  We recovered the program within six months, at which point it was transferred to D.C. and placed under a more senior officer–the occasional price of success.

Neither Deming’s propositions nor Agile software development methodology are received knowledge or wisdom (see my last post on Carl Sagan), though the latter makes special pleading along those lines.  Both are open to questioning, validation, and testing.  This principle is implicit in Deming’s own System of Profound Knowledge regarding what he described as the Theory of Knowledge–an approach resting firmly within the principles of the scientific method.  Manifestos are beliefs, opinions, and assertions absent proof and, as such, many of the propositions of Agile are worlds away from Deming.

Note:  Minor edits made to correct grammatical errors in the original.

Gimme All Your (Money) — Agile and the Intrinsic Evil of #NoEstimates

Over the years I’ve served as a project, acquisition, and contracts specialist in both public service and private industry.  Most of those assignments involved the introduction of digital technology, from the earliest days of the introduction of what were called mini-computers, through the introduction of the PC, to the various digital devices, robotics, and artificial intelligence that we use today.

A joke I often encountered over the years was that if you asked a software programmer what his solution could do, the response all too often was: “what would you like it to do?”  The point of the joke, which has more than a grain of truth in it, is that programmers do not live in (or would prefer not to live in) a world of finite resources and often fall into the trap of excessive optimism.  That this is backed by empirical evidence has been discussed previously in this blog, where over 90% of software projects in both private industry and public organizations either fail outright or fail to meet expectations.  This pattern of failure is pervasive regardless of the method of development used: waterfall, spiral, or–the latest rage–Agile.

Agile is a break from the principles of scientific management upon which previous methodologies were based.  As such, it learns no lessons from the past, much as a narcissist rejects the contributions of others.  It is not that all of the ideas espoused in the original Agile manifesto in 2001–or those since–are necessarily invalid or may not be good ideas for modifying and improving previous practices; it is that they are based on a declaration without attribution to evidence.  As such, Agile has all of the markings of a cult: an ideology of management that brooks no deviation and is resistant to evidence.  Faced with contrary evidence, the reaction is to double down and push the envelope further.

The latest example of this penchant is by Neil Killick in his post “Beyond #NoEstimates — Why the traditional software contract must die.”  It is worth a read but, in the end, the thrust of the post is to state that contracts enforce accountability and the followers of the Agile Cult don’t want that because, well, there is all of that planning, scheduling, budgeting, and reporting that gets in the way of delivering “value.”  The flaws in the prescriptions of the Cult, particularly its latest #NoEstimates offshoot, have been adequately and thoughtfully documented by many well-respected practitioners of the art of project management, such as Dave Gordon and others, and I will not revisit them here.  Instead, I will focus on Mr. Killick’s article in question.

First, Mr. Killick argues that “value” cannot be derived from the plethora of “traditional” software contracts.  His ignorance of contracting is most clear here, for he doesn’t define his terms.  What is a “traditional” software contract?  As a former contract negotiator and contracting officer, I find nothing called a “traditional” software contract in the contracting lexicon.  There are firm-fixed-price contracts, cost-plus contracts, time-and-materials/labor-hour contracts, and so on, but no “traditional” contracts.  For developmental efforts some variation of the cost-plus contract is usually appropriate, but the contract type and structure must provide sufficient incentives for “value” that exceeds the basic requirements of the customer, while reflecting the type of effort, the risk involved, and the resource and time constraints of the effort.  The scope can be defined by specific line item specifications or by a performance specification.  Thus, contrary to the impression left in the post, quite a bit of freedom is allowed within a contract, and R&D projects under various contract types have been succeeding for quite a long time.  In addition, the term “traditional” seems to have a certain dog-whistle quality for the Cult, with its use going back to the original manifesto.  This, then, at least to those recognizing the whistle, is a loaded word that leads to an argument that assumes its conclusion: that such contracts lead to poor results–a claim that is not true (assuming a firm definition of “traditional” could even be provided) and against which there is sufficient evidence.

Second, another of Mr. Killick’s assumptions is that “traditional” contracts (whatever those are) start from a position of distrust.  In his words: “Working agreements that embrace ‘Here’s what you must deliver or we’ll sue you’” (sic).  Once again Mr. Killick demonstrates his ignorance.  The comments and discussion at the end of his post reinforce a narrative that it is all the lawyers’ doing.  I do have a number of friends who are attorneys, and my contempt for the frequent excesses of the legal profession is well known to them.  But there is a difference between a contract specialist and a lawyer, and it is best summed up in a concept and an anecdote.

The concept is the basic description of a contract, which, at its simplest, is a promise for a promise.  Usually this takes the form of a promise to perform in return for a promise to pay, since the promise must be sufficient and involve consideration of value in order to establish a contract.  It is not a promise to pay based on a contingent lack of a promise to perform unless, of course, the software developer is willing to allow the contingent nature of the promise to work both ways.  That is, we’ll allow you to expend effort to try to satisfy our needs, and we’ll pay you, if the result is of value, at a price that we think your product is worth.  That is not a contract but, at least, both parties know their risks.  The promise for a promise–the rise of the concept of the contract–is in many ways the basis for civilization.  Its rise coincided with and reinforced civil society, and established ground rules for the conduct of human affairs that replaced the contingent nature of relationships between individuals.  Without such ground rules, trust is not possible.

The anecdote explains the aim of a contract and why it is not a lawyer’s game.  This aim was explained to me by one of my closest friends, who is an attorney.  He said: “the difference between you, a contract negotiator, and me, an attorney, is that when I come out of the room I know I have done my job when all of the parties are unhappy.  You know you have done your job when all of the parties come out of the room happy.”  Thus, Mr. Killick gets contracting backwards.  This insightful perspective rests on the different roles of an attorney and a contract negotiator.  An attorney is trained and educated to vehemently defend the interests of his or her client.  The attorney realizes that he or she engages in a zero-sum game.  The clash of attorneys on opposing sides will usually result in an outcome where neither side feels fully satisfied.  The aim of the contract negotiator (at least the most successful and effective ones) is to determine the goals and acceptable terms for both parties and to find the common ground so that the relationship will proceed under an atmosphere of trust and cooperation.

The most common contract in which many parties engage is the marriage contract.  Such an arrangement can be viewed as an unfortunate obligation that hinders creativity and acceptance of change, one established by lawyers to enforce the terms of the agreement, or else.  But many find that it is a basis for trust and stability, where growth and change are fostered rather than hindered.  In real life, of course, this is a false dilemma.  For most people the arrangement runs the gamut between these perspectives, and outside of them to divorce, the ultimate result of a poor or mismatched contract.

For project management in general, and software project management in particular, the core arguments of Agile via #NoEstimates are an implicit evil because they undermine the essential relationships between the parties.  This is done through specialized jargon that is designed to obfuscate, through the contingent nature of the obligation underlying its principles, and through the lack of clear reasoning that forms the basis for its rebellion against planning, estimating, and accountability.  Rather than fostering an atmosphere of trust, it is an attempt by software developers to tip the balance in the project and contract management relationship in their favor, particularly in cases of external customer relationships.  This condition undermines trust and reinforces the most common software project dysfunctions: loss of requirements discipline, shifting scope, rubber baselines, and cost overruns.  In other words, for software projects, just more of the same.

Note: Grammatical corrections were made from the original.