Family Affair — Part III — Private Monopsony, Monopoly, and the Disaccumulation of Capital

It’s always good to be ahead of the power curve.  I see that the eminent Paul Krugman had an editorial in the New York Times about the very issues that I’ve dealt with in this blog, his example in this case being Amazon.  This is just one of many articles that have appeared about monopsony power as a result of the Hachette controversy.  In The New Republic Franklin Foer also addresses this issue at length in the article “Amazon Must Be Stopped.”  In my last post on this topic I discussed public monopsony, an area in which I have a great deal of expertise.  But those of us in the information world who are not Microsoft, Oracle, Google, or one of the other giants also live in the world of private monopsony.

For those of you late to these musings (or who skipped the last ones), this line of inquiry began when my colleague Mark Phillips made the statement at a recent conference that, while economic prospects for the average citizen are bad, the best system that can be devised is one based on free market competition, misquoting Churchill.  The underlying premise of the statement, of course, is that this is the system that we currently inhabit, and that it is the most efficient way to distribute resources.  There is also usually an ideological component involved regarding some variation of free market fundamentalism and the concept that the free market is somehow separate from and superior to the government issuing the currency under which the system operates.

My counter to the assertions found in that compound statement is to demonstrate that the economic system we inhabit is not a perfectly competitive one, and that there are large swaths of the market that are dysfunctional and that have given rise to monopoly, oligopoly, and monopsony power.  In addition, the ideological belief–which is very recent–that private economic activity arose almost spontaneously, with government a separate component that can only be an imposition, is also false, given that nation-states and unions of nation-states (as in the case of the European Union) are the issuers of sovereign currency, and so choose through their institutions the amount of freedom, regulation, and competition that their economies foster.  Thus, the economic system that we inhabit is the result of political action and public policy.

The effects of the distortions of monopoly and oligopoly power in the so-called private sector are all around us.  But when we peel back the onion we can see clearly the interrelationships between the private and public sectors.

For example, patent monopolies in the pharmaceutical industry allow prices to be set, not based on the marginal value of the drug that would be set by a competitive market, but based on the impulse for profit maximization.  A recent example in the press–critiqued by economist Dean Baker–has concerned the hepatitis-C drug Sovaldi, which goes for $84,000 a treatment, compared to markets in which the drug has not been granted a patent monopoly, where the price is about $900 a treatment.  Monopoly power, in the words of Baker, imposes a 10,000 percent tariff on those who must live under that system.  This was one of the defects in a system that I wrote about in my blog posts regarding tournaments and games of failure, though in pharmaceuticals the comparison seems to be more in line with gambling and lotteries.  Investors, who often provide funds based on the slimmest thread of a good idea and talent, are willing to put great sums of money at risk in order to strike it rich and realize many times their initial investment.  The distorting incentives of this system are well documented: companies tend to focus on those medications and drugs with the greatest potential financial rate of return guaranteed by the patent monopoly system*, drug trials downplay the risks and side effects of the medications, and the prices of medications are set at so high a level as to put them out of reach of all but the richest members of society, since few private drug insurance plans will authorize such treatments given the cost–at least not without a Herculean effort on the part of individual patients.
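
A quick back-of-the-envelope check of Baker’s figure, using the two prices quoted above:

\[
\frac{\$84{,}000 - \$900}{\$900} \approx 92.3,
\]

that is, a markup of roughly 9,200 percent over the competitive price, which is the order of magnitude behind the “10,000 percent tariff” characterization.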

We can also see monopoly power at work first hand in the present lawsuits between Apple and Samsung regarding the smartphone market.  For many years (until very recently) the U.S. patent office took a permissive stand in allowing technology firms to essentially patent the look and feel of a technology, as well as features that could be developed and delivered by any number of means.  The legal system, probably more technologically challenged than other areas of society, has been inconsistent in determining how to deal with these claims.  The fact finders in many cases have been juries, who are not familiar with the nuances of the technology.  It does not take much of a stretch to pick out practical analogies for these decisions.  If applied to automobiles, for example, the many cases that have enforced these patent monopolies would have restricted windshield wipers to the first company that delivered the feature.  Oil filters, fuel filters, fuel injection, etc. would all have been restricted to one maker.

The stakes are high not only for these two giant technology companies but also for consumers.  They have already used their respective monopoly power, established by their sovereign governments, to ensure quite effectively that the barriers to entry in the smartphone market are high.  Now they are unleashing these same forces on one another.  In the end, the manufacturing costs of the iPhone 6–which is produced by slave labor under the capitalist variant of Leninist China–are certainly much lower than the $500 and more that they demand (along with the anti-competitive practice of requiring a cellular agreement with one of their approved partners).  The tariff that consumers pay over the actual cost of production and maintenance on smartphones is significant.  This is not remedied by the oft-heard response to “simply not buy a smartphone,” since it shifts responsibility for the public policy that allows this practice to flourish onto individuals, who are comparatively powerless against the organized power of the lobbyists who influenced public representatives to make these laws and institute the policy.

The fights over IP and patents (as well as net neutrality) are important for the future of technological innovation.  Given the monopsony power of companies that also exert monopoly power in particular industries, manufacturers are at risk of being squeezed in cases where prices are artificially reduced through the asymmetrical relationship between large buyers and relatively small sellers.  Central planning, regardless of whether it is exerted by a government or a large corporation, is dysfunctional.  When those same corporations seek not only to exert monopoly and monopsony power, but also to control information and technology, they seek to control all aspects of an economic activity, not unlike the trusts of the time of the Robber Barons.  Amazon and Walmart are but two of the poster children of this situation.

The saving grace of late has been technological “disruption,” but this term has been misused to also apply to rent-seeking behavior.  I am not referring only to the kind of public policy rent-seeking that Amazon achieves when it avoids paying local taxes that apply to its competitors, or that Walmart achieves when it shifts its substandard pay and abusive employee policies to local, state, and federal public assistance agencies.  I am also referring to the latest controversies regarding AirBnB, Lyft, and Uber, which use loopholes in dealing with technology to sidestep health and safety laws in order to gain entry into a market.

Technological disruption, instead, is a specific phenomenon, based on the principle that the organic barriers to entry in a market are significantly reduced by the introduction of technology.  The issue over the control of and access to information and innovation is specifically targeted at this phenomenon.  Large companies aggressively work to keep out new entrants and to hinder innovations except those that they can control, conspiring against the public good.

The reason why these battles are lining up resides in the modern phenomenon known as the disaccumulation of capital, which was first identified by social scientist Martin J. Sklar.  What this means is that the accumulation of capital, measured as the time it takes to reproduce the existing material conditions of civilization, began declining in the 1920s.  As James Livingston points out in the same linked article in The Nation, “economic growth no longer required net additions either to the capital stock or the labor force….for the first time in history, human beings could increase the output of goods without increasing the essential inputs of capital and labor—they were released from the iron grip of economic necessity.”

For most of the history of civilization, the initial struggle of economics has been the ability of social organization to provide sufficient food, clothing, shelter, and medical care to people.  The conflicts between competing systems have centered on their ability to most efficiently achieve these purposes without sacrificing individual liberty, autonomy, and dignity.  The technical solution for these goals has largely been achieved, but the efficient distribution of these essential elements of human existence has not been solved.  With the introduction of more efficient methods of information processing as well as production (digital printing is just the start), we are at the point where the decline in the aggregate capital required to produce the necessities and other artifacts of civilization is accelerating.

Concepts like full employment will increasingly become meaningless, because the relationship of labor input to production that we came to expect in the recent past has changed within our own lifetimes.  Very small companies, particularly in technology, can have and have had a large impact.  In more than one market, even technology companies are re-learning the lesson of the “mythical man-month.”  Thus, the challenge of our time is to rethink the choices we have made and are making about incentives and distribution so that they maximize human flourishing.  But I will leave that larger question to another blog post.

For the purposes of this post, focused on technology and project management, these developments call for a new microeconomics.  The seminal paper that identified this need early on was written by Brad DeLong and Michael Froomkin in 1997 and entitled “The Next Economy.”  While some of the real-life examples they give provide, from our perspective today, a stroll down digital memory lane, their main conclusions about how information differs from physical goods remain relevant.  These are:

a.  Information is non-rivalrous.  That is, one person consuming information does not preclude someone else from consuming that information.  Once produced, information can be economically reproduced to operate in other environments at little to no marginal cost (see the sketch following these three points).  What they are talking about here is application software and the labor involved in producing a version of it.

b.  Information without exterior barriers is non-exclusive.  That is, if information is known, it is almost impossible to exclude others from knowing it.  For example, Einstein was the first to work out the mathematics of relativity, but now every undergraduate physics student is expected to fully understand the theory.

c.  Information is not transparent.  That is, in order to determine whether a piece of software will achieve its intended purpose, effort and resources must often be invested to learn it and to apply it, if initially only in a pilot program.
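
To put the first of these points in simple cost terms (a stylized sketch of my own, not DeLong and Froomkin’s notation): if producing the first copy of a piece of software requires a fixed development cost \(F\) and each additional copy costs \(c \approx 0\), then the average cost per user is

\[
AC(q) = \frac{F}{q} + c \;\longrightarrow\; c \approx 0 \quad \text{as } q \text{ grows,}
\]

so the cost per user falls toward zero as the information is shared, while a competitive price would be driven toward the marginal cost \(c\), leaving the fixed development cost \(F\) to be recovered some other way.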

The attack coming from monopsony power is directed at the first characteristic of information; the attack coming from monopoly power is currently directed at the second.  Both undermine competition and innovation: the first by denying small technology companies the ability to capitalize sufficiently to develop the infrastructure necessary to become sustainable, which oftentimes reduces a market to one dominant supplier; the second by restricting the application of new technologies and of lessons learned from the past.  Information asymmetry is a problem for the third characteristic, since oftentimes bad actors are economically rewarded at the expense of high-quality performers, as first identified in the used-car market in George Akerlof’s paper “The Market for Lemons” (paywall).

The strategy of some entrepreneurs in small companies in reaction to these pressures has been to either sell out and be absorbed by the giants, or to sell out to private equity firms that “add value” by combining companies in lieu of organic growth, loading them down with debt from non-sustainable structuring, and selling off the new entity or its parts.  The track record for the sustainability of the applications involved in these transactions (and the satisfaction of customers) is a poor one.

One of the few places where competition still survives is among small to medium-sized technology companies.  For these companies (and the project managers in them) to survive independently requires not only an understanding of the principles elucidated by DeLong and Froomkin, but also an understanding of how information, like other technological innovations but in ways unique to it, improves efficiency and productivity and reduces the inputs of labor and capital.

The key is in understanding how to articulate value, how to identify opportunities for disruption, and to understand the nature of the markets in which one operates.  One’s behavior will be different if the market is diverse and vibrant, with many prospective buyers and diverse needs, as opposed to one dominated by one or a few buyers.  In the end it comes down to understanding the pain of the customer and having the agility and flexibility to solve that pain in areas where larger companies are weak or complacent.

 

*Where is that Ebola vaccine–which mainly would have benefited the citizens of poor African countries and our own members of the health services and armed forces–that would have averted public panic today?

Desolation Row — The Stagnation of IT Spending

Andrew McAfee has an interesting blog post on David Autor’s Jackson Hole Conference analysis of IT spending.  What it shows, I think, is the downward pressure on software pricing that is now driving the market.  This is a trend that those of us in the business are experiencing across the board.  The only disappointing aspect of the charts is that they measure private investment in IT and don’t address public expenditures.  Given frozen budgets and austerity it would be interesting to see if the same trend holds true on the public end, which I suspect may show a more dramatic level of under-investment in technology.  While I mostly agree with McAfee over Autor’s concerns, I believe that the factors related to technological stagnation that I raised in last week’s post are reinforced by the charts.  There seems to be some technological retrenchment going on that is solely focused on reducing the overhead of existing capabilities to appease financial types.  This is reflected in record profits during a time of stagnant employment and employee incomes.  I think that the only hope for investment in technological innovation is going to have to come from the public sector, but the politics still seem to be aligned against it for some time to come.  If we subjected financial managers, lawyers, doctors, pharmaceutical companies, entertainment, and insurance companies to the same kind of international competition that manufacturing and technology have been exposed to through “free-trade” agreements and the abandonment of patent monopolies, then perhaps we could once again “afford” projects similar to the moon program and ARPANET.

My Generation — Baby Boom Economics, Demographics, and Technological Stagnation

“You promised me Mars colonies, instead I got Facebook.” — MIT Technology Review cover over photo of Buzz Aldrin

“As a boy I was promised flying cars, instead I got 140 characters.”  — attributed to Marc Maron and others

I have been in a series of meetings over the last couple of weeks with colleagues describing the state of the technology industry and the markets it serves.  What seems to be a generally held view is that both the industry and the markets for software and technology are experiencing a hardening of the arteries and a resistance to change not seen since the first waves of digitization in the 1980s.

It is not as if this observation has not been noted by others.  Tyler Cowen at George Mason University noted the trend of technological stagnation in the eBook The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better.  Cowen’s thesis is not only that innovation has slowed since the late 19th century, but that it has slowed a lot, because we have already consumed the “low-hanging fruit.”  I have to say that I am not entirely convinced by some of the data, which is anything but reliable in demonstrating causation in the long-term trends.  Still, his observations of technological stagnation seem to be on the mark.  His concern, of course, is also directed at technology’s effect on employment, pointing out that, while making some individuals very rich, recent technological innovation doesn’t result in much employment.

Cowen published his work in 2011, when the country was still in the early grip of the slow recovery from the Great Recession, and many seized on Cowen’s thesis as an opportunity for excuse-mongering and for looking for deeper causes than the most obvious ones: government shutdowns, wage freezes, reductions in the government R&D that is essential to private sector risk handling, and an austerian fiscal policy (with sequestration) in the face of weak demand created by the loss of $8 trillion in housing wealth, which translated into a consumption gap of $1.2 trillion in 2014 dollars.

Among the excuses that were manufactured is the meme, still making the rounds, about a jobs mismatch due to a skills gap.  But, as economist Dean Baker has pointed out again and again, basic economics dictates that the scarcity of a skill manifests itself in higher wages and salaries–a reality not supported by the data for any major job category.  Unemployment stood at 4.4 percent in May 2007 prior to the Great Recession.  The previous low between recession and expansion was the 3.9 percent rate in December 2000, yet we are to believe that suddenly, in the 4 years since the start of one of the largest bubble crashes and the resulting economic and financial crisis, people no longer have the skills needed to be employed (or suddenly are lazier or more shiftless).  The data do not cohere.

In my own industry and specialty there are niches for skills that are hard to come by, and those people are paid handsomely, but the pressure among government contracting officers across the board has been to drive salaries down–a general trend seen across the country and pushed by a small economic elite–and therein, I think, lies the answer, more than in some long-term trend tying patents to “innovation.”  The effect of this downward push is to deny the federal government–the people’s government–access to the highly skilled personnel needed to make it both more effective and responsive.  Combined with austerity policies, there is a race to the bottom in terms of both skills and compensation.

What lies behind our current technological stagnation, I think, is a reaction to the hits to housing wealth, to real wealth and savings, to employment, and to the downward pressure on compensation.  Absent active government fiscal policy as the backstop of last resort, there are no other places to make up for $1.2 trillion in lost consumption.  Combine this with the excesses of the patent and IP systems that create monopolies and stifle competition, particularly under the Copyright Term Extension Act and the recent Leahy-Smith America Invents Act.  Both of these acts have combined to undermine the position of small inventors and companies, encouraging the need for large budgets to anticipate patent and IP infringement litigation, and raising the barriers to entry for new technological improvements.

No doubt exacerbating this condition is the Baby Boom.  Since university economists don’t seem to mind horning in on my specialty (as noted in a recent post commenting on the unreliability of data mining by econometrics), I don’t mind commenting on theirs–and what has always surprised me is how Baby Boom economics never seems to play a role in understanding trends, nor as a predictor of future developments in macroeconomic modeling.  Wages and salaries, even given Cowen’s low-hanging fruit, have not kept pace with productivity gains (which probably explains a lot of wealth concentration) since the late 1970s–a time that coincides with the Baby Boomers entering the workforce in droves.  A large part of this condition has been a direct consequence of government policies–through so-called “free trade” agreements–that have exposed U.S. workers in industrial and mid-level jobs to international competition from low-wage economies.

The Baby Boomers, given an underperforming economy, saw not only their wages and salaries lag, but also their wealth and savings disappear with the Great Recession–when corporate mergers and acquisitions weren’t stealing the negotiated defined benefit plans they had received in lieu of increases in compensation.  This has created a large contingent of surplus labor.  The number of long-term unemployed, though falling, is still large compared to historical averages and is indicative of this condition.

With attempts to privatize Social Security and Medicare, workers now find themselves squeezed and under a great deal of economic anxiety.  On the ground I see this anxiety even at the senior executive level.  The workforce is increasingly getting older as people hang on for a few more years, perpetuating older ways of doing things. Even when there is a changeover, oftentimes the substitute manager did not receive the amount of mentoring and professional development expected in more functional times.  In both cases people are risk-averse, feeling that there is less room for error than there was in the past.

This does not an innovative economic environment make.

People whom I had known as risk takers in their earlier years now favor the status quo and a quiet glide path to a secure post-employment life.  Politics and voting behavior also follow this culture of lowered expectations, which further perpetuates the race to the bottom.  In high tech this condition favors the perpetuation of older technologies, at least until economics dictates a change.

But it is in this last observation that there is hope for an answer, which does confirm that this is but a temporary condition.  For under the radar there are economies upon economies in computing power and the ability to handle larger amounts of data with exponential improvements in handling complexity.  Collaboration of small inventors and companies in developing synergy between compatible technologies can overcome the tyranny of the large monopolies, though the costs and risks are high.

As the established technologies continue to support the status quo–and postpone needed overhauls of code mostly written 10 to 20 years ago (which is equivalent to 20 to 40 software generations)–their task, despite the immense amount of talent and money, is comparable to a Great Leap Forward–and those of you who are historically literate know how those efforts turned out.  Some will survive but there will be monumental–and surprising–falls from grace.

Thus the technology industry, in many of its more sedentary niches, is due for a great deal of disruption.  The key for small entrepreneurial companies and thought leaders is to be there before the tipping point.  But keep working the politics too.

Ch-ch Changes — Software Implementations and Organizational Process Improvement

Dave Gordon at The Practicing IT Project Manager lists a number of factors that define IT project success.  Among these is “Organizational change management efforts were sufficient to meet adoption goals.”  This is an issue that I am grappling with now on many fronts.

The initial question that comes to mind is which comes first–the need for organizational improvement or the transformation that results from the introduction of new technology?  “Why does this matter?” one may ask.  The answer is that it defines how things are perceived by those who are being affected (or victimized) by the new technology.  This will then translate into various behaviors.  (Note that I did not say that “Perception is reality.”  For the reason why, please consult the Devil’s Phraseology.)

This is important because the groundwork laid (or not laid) for the change that is to come will translate into sub-factors (accepting Dave’s taxonomy of factors for success) that will have a large impact on the project, and on whether it is defined as a success.  In getting something done the overriding priority is not just “Gettin’ ‘Er Done.”  The manner in which our projects, particularly in IT, are executed, and the technology introduced and implemented, will determine a number of the major factors that contribute to overall project success.

Much has been written lately about “disruptive” change, and that can be a useful analogy when applied to new technologies that transform a market by providing something that is cheaper, better, and faster (with more functionality) than the market norm.  I am driving that type of change in my own target markets.  But that is in a competitive environment.  Judgement–and good judgement–requires that we not inflict this cultural approach on the customer.

The key, I think, is bringing back a concept and approach that seems to have been lost in the shuffle: systems analysis and engineering that works hand-in-hand with the deployment of the technological improvement.  There was a reason for asking for the technology in the first place, whether it be improved communications, improved productivity, or qualitative factors.  Going in willy-nilly with a new technology that provides unexpected benefits–even if those benefits are both useful and will improve the work process–can often be greeted with fear, sabotage, and obstruction.

When those of us who work with digital systems encounter someone challenged by the introduction of new technology, or who fears that “robots are taking our jobs,” our reaction is often an eye-roll, treating these individuals as modern Luddites.  But that is a dangerous stereotype.  Our industry is rife with stories of individuals who fall into this category.  Many of them are our most experienced middle managers and specialists who predate the technology being introduced.  How long does it take to develop the expertise to fill these positions?  What is the cost to the organization if their corporate knowledge and expertise is lost?  Given that they have probably experienced multiple reorganizations and technology improvements, their skepticism is probably warranted.

I am not speaking of the exception–the individual who would be opposed to any change.  Dave gives a head nod to the CHAOS report, but we also know that we come upon these reactions often enough that they have been documented by a variety of sources.  So how do we handle them?

There are two approaches.  One is to rely upon the resources and management of the acquiring organization to properly prepare the organization for the change to come, and to handle the job of determining the expected end state of the processes, and the personnel implications that are anticipated.  Another is for the technology provider to offer this service.

From my own direct experience, what I see is a lack of systems analysis expertise designed to work hand-in-hand with the technology being introduced.  For example, systems analysis is a skill that is all but gone in government agencies and large companies, which rely more and more on outsourcing for IT support.  Oftentimes the IT services consultant has its own agenda, which conflicts with the goals of both the manager acquiring the technology and the technology provider.  Few outsourced IT services contracts anticipate that the consultant must act as an enthusiastic partner in these efforts, as opposed to a tepidly willing one at best.  Some agencies lately have tasked the outsourced IT consultant to act as an honest broker in choosing the technology, heedless of the strategic partnering and informal relationships that will result in a conflict of interest.

Thus, technology providers must be mindful of their target markets and design solutions to meet the typical process improvement requirements of the industry.  In order to do this the individuals involved must have a unique set of skills that combines a knowledge of the goals of the market actors, their processes, and how the technology will improve those processes.  Given this expertise, technology providers must then prepare the organizational environment to set expectations and to advance the vision of the end state–and to ensure that the customer accepts that end state.  It is then up to the customer’s management, once the terms of expectations and end-state have been agreed, to effectively communicate them to those personnel affected, and to do so in a way to eliminate fear and to generate enthusiasm that will ensure that the change is embraced and not resisted.

Better Knock-Knock-Knock on Wood — The Essential Need for Better Schedule-Cost Integration

Back in the early to mid-1990s, when NSFNET was making the transition to the modern internet, I was just finishing up my second assignment as an IT project manager and transitioning to a full-blown Program Executive Office (PEO) Business Manager and CIO at a major Naval Systems Command.  The expanded potential of a more open internet was on everyone’s mind, and, on the positive side, so was the question of how barriers to previously stove-piped data could be broken down in order to optimize the use of that data (after processing it into useable intelligence).  The next step was to take that information, now opened to a larger audience that previously had been excluded from it, and to juxtapose and integrate it with other essential data (processed into intelligence) to provide insights not previously realized.

Here we are almost 20 years later and I am disappointed to see in practice that the old barriers to information optimization still exist in many places where technology should have long ago broken this mindset.  Recently I have discussed cases at conferences and among PM professionals where the Performance Measurement Baseline (PMB), that is, the plan that is used to measure the financial value of the work performed, is constructed separately from, and without reference to, the Integrated Master Schedule (IMS) until well after the fact.  This defies common sense.

Project management is based on the translation of a contract specification into a plan to build something.  The basic steps, after many years of professional development, are so tried and true that they should be rote at this point:  Integrated Master Plan (IMP) –> Integrated Master Schedule (IMS) with Schedule Risk Assessment (SRA) –> resource assignments with negotiated rates –> development of work packages, linkage to financials, and roll-up through the WBS –> Performance Measurement Baseline (PMB).  The arrows represent the relationships between the elements.  Feel free to adjust the semantics and add additional items to the process, such as a technical performance baseline, testing and evaluation plans, systems descriptions to ensure traceability, milestone tracking, etc.  But the basic elements of project planning and execution pretty much remain the same–that’s all there is, folks.  The complexity and time spent going through the steps vary based on the complexity of the scope being undertaken.  For a long-term project involving millions or billions of dollars the interrelationships and supporting documentation are quite involved; for short-term efforts the process may live entirely in the head of the person doing the job.  But in the end, regardless of terminology, these are the basic elements of PM.
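
To make this chain concrete, here is a minimal sketch in Python of how the elements hang together (the class and field names are my own illustration, not any particular tool’s schema): work packages are where schedule activities and budgets meet, and the PMB is simply the time-phased roll-up of those budgets through the WBS.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Activity:
    """An IMS activity, traceable back to an IMP event or accomplishment criterion."""
    name: str
    start_month: int
    finish_month: int

@dataclass
class WorkPackage:
    """The point where schedule and budget meet: scoped work with resources applied."""
    wbs_id: str
    activities: List[Activity]
    budget: float  # budget at completion, built from resource assignments at negotiated rates

    def time_phased_budget(self) -> Dict[int, float]:
        """Spread the budget across the months spanned by the linked activities."""
        start = min(a.start_month for a in self.activities)
        finish = max(a.finish_month for a in self.activities)
        months = list(range(start, finish + 1))
        return {m: self.budget / len(months) for m in months}

def roll_up_pmb(work_packages: List[WorkPackage]) -> Dict[int, float]:
    """The PMB as the WBS roll-up of every work package's time-phased budget."""
    pmb: Dict[int, float] = {}
    for wp in work_packages:
        for month, amount in wp.time_phased_budget().items():
            pmb[month] = pmb.get(month, 0.0) + amount
    return pmb
```

The arithmetic is deliberately trivial (a linear spread rather than real earned value techniques), but the dependency is the point: the time-phasing of the PMB is derived from the schedule, so it cannot sensibly be constructed first and reconciled with the IMS later.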

When one breaks this cycle and decides to build each of the elements independently from the others, it is akin to building a bridge in sections without using an overarching plan.  Result:  it’s not going to meet in the center.  One can argue that it is perfectly fine to build the PMB concurrently with the IMS if the former is informed by the latter.  But in practice I find that this is rarely the case.  So what we have, then, is a case where the bridge is imperfectly matched when the two sections meet in the middle, requiring constant readjustment and realignment.  Furthermore, the manner in which the schedule activities are aligned with the budget varies from project to project, even within the same organization.  So not only do we not use a common plan in building our notional bridge, we decide to avoid standardization of bolts and connectors too, just to make it that much more interesting.

The last defense in this sub-optimized environment is: well, if we are adjusting it every month through the project team, what difference does it make?  Isn’t this integration nonetheless?  Response #1:  No.  Response #2:  THIS-IS-THE-CHALLENGE-THAT-DIGITAL-SYSTEMS-ARE-DESIGNED-TO-OVERCOME.  The reason this is not integration is that it simultaneously ignores the lessons learned in the SRA and prevents the insights gained through optimization.  If our planning documents are contingent on a month-to-month basis, then the performance measured against them is of little value and always open to question, and not just on the margins.  Furthermore, the utilization of valuable project management personnel to perform what is essentially clerical work is, in today’s environment, indefensible.  If there are economic incentives for doing this, it is time for project stakeholders and policymakers to end them.
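
This is exactly the kind of check a digital system can run continuously and a monthly manual scrub cannot.  A minimal sketch, continuing the illustrative structures above (again, the names and the simple month-level comparison are my own assumptions, not any tool’s actual API):

```python
from typing import Dict, List, Tuple

def integration_exceptions(
    work_packages: List[WorkPackage],
    pmb_dates: Dict[str, Tuple[int, int]],
) -> List[Tuple[str, str]]:
    """Flag work packages whose budget and schedule are not actually integrated.

    pmb_dates maps a WBS id to the (start_month, finish_month) recorded in the baseline.
    """
    exceptions = []
    for wp in work_packages:
        if not wp.activities:
            # budget exists but no schedule activities are linked to it
            exceptions.append((wp.wbs_id, "budget with no linked schedule activities"))
            continue
        ims_start = min(a.start_month for a in wp.activities)
        ims_finish = max(a.finish_month for a in wp.activities)
        if pmb_dates.get(wp.wbs_id) != (ims_start, ims_finish):
            # the baseline's time-phasing no longer matches the schedule
            exceptions.append((wp.wbs_id, "PMB time-phasing disagrees with the IMS"))
    return exceptions
```

An empty exception list is what real integration looks like; a non-empty one is the bridge failing to meet in the middle, caught before the monthly reporting cycle rather than after it.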

It is time to break down the artificial barriers that divide cost and schedule analysts.  Either you know project and program management or you don’t.  There is no magic wall between the two disciplines, given that one cannot exist without the other.  Furthermore, more standardization, not less, is called for.  For anyone who has tried to decipher a schedule in which smiley faces and multiple non-standard structures are in use–structures that defy reference to a cost control account–it is clear that both the consulting and project management communities are failing to instill professionalism.

Otherwise, as in my title, it’s like knocking on wood.

Take Me Out to the Ballgame — Tournaments and Games of Failure

“Baseball teaches us, or has taught most of us, how to deal with failure. We learn at a very young age that failure is the norm in baseball and, precisely because we have failed, we hold in high regard those who fail less often – those who hit safely in one out of three chances and become star players. I also find it fascinating that baseball, alone in sport, considers errors to be part of the game, part of its rigorous truth.” — Fay Vincent, former Commissioner of Baseball (1989-1992)

“Baseball is a game of inches.”  — Branch Rickey, Quote Magazine, July 31, 1966

I have been a baseball fan just about as long as I have been able to talk.  My father played the game and tried out for both the New York Giants and the Yankees of that era–and was a pretty well known local hero in Weehawken back in the 1930s and 1940s.  I did not have my father’s athletic talents–he was a four-letter man in high school–but I was good at hitting a baseball from the time he put a bat in my hands, and so I played–and was sought after–into my college years.  Still, like many Americans who for one reason or another could not or did not pursue the game, I live vicariously through the players on the field.  We hold those who fail less in the game in high regard.  Some of them succeed for many years and are ensconced in the Hall of Fame.

Others experienced fleeting success.  Anyone who watches ESPN’s or the YES Network’s classic games, particularly those from the various World Series, can see this reality in play.  What if Bill Buckner in 1986 hadn’t missed that ball?  What if Bobby Richardson had not been in perfect position to catch what would have been a game- and series-winning liner by Willie McCovey in 1962?  Would Brooklyn have ever won a Series if Amoros hadn’t caught Berra’s drive down the left field line in 1955?  The Texas Rangers might have their first World Series ring if not for a plethora of errors, both mental and physical, in the sixth game of the 2011 Series.  The list can go on, and it takes watching just a few of these games to realize that luck plays a big part in who is the victor.

There are other games of failure that we deal with in life, though oftentimes we don’t recognize them as such.  In economics these are called “tournaments,” and much like their early Medieval predecessors (as opposed to the stylized late Medieval and Renaissance games), the stakes are high.  In pondering the sorry state of my favorite team–the New York Yankees–as I watched seemingly minor errors and failures cascade into a humiliating loss, I came across a blog post by Brad DeLong, distinguished professor of economics at U.C. Berkeley, entitled “Over at Project Syndicate/Equitable Growth: What Do We Deserve Anyway?”  Dr. DeLong makes the very valid point, verified not only by anecdotal experience but by years of economic research, that most human efforts, particularly economic ones, fail, and that the key determinants do not seem, in most cases, to be lack of talent, hard work, dedication, or any of the other attributes that successful people like to credit for their success.

Instead, much of the economy, which in its present form is largely based on a tournament-like structure, allows only a small percentage of entrants to extract their marginal product from society in the form of extremely high levels of compensation.  The fact that these examples exist is much like a lottery, as the following quote from Dr. DeLong illustrates.

“If you win the lottery–and if the big prize in the lottery that is given to you is there in order to induce others to overestimate their chances and purchase lottery tickets and so enrich the lottery runner–do you “deserve” your winnings? It is not a win-win-win transaction: you are happy being paid, the lottery promoter is happy paying you, but the others who purchase lottery tickets are not happy–or, perhaps, would not be happy in their best selves if they understood what their chances really were and how your winning is finely-tuned to mislead them, for they do voluntarily buy the lottery tickets and you do have a choice.”  — Brad DeLong, Professor of Economics, U.C. Berkeley

So even though participants have a “choice,” it is one that is based on an intricately established system based on self-delusion.  It was about this time that I came across the excellent HBO Series “Silicon Valley.”  The tournament aspect of the software industry is apparent in the conferences and competitions for both customers and investors in which I have participated over the years.  In the end, luck and timing seem to play the biggest role in success (apart from having sufficient capital and reliable business partners).

I hope this parody ends my colleagues’ (and future techies’) claims to “revolutionize” and “make the world a better place” through software.

Doin’ It Right (On Scheduling)

Been reading quite a bit about assessments and analysis lately–the 14-Point Assessment, project health assessments, etc.  Assessment and analysis are a necessary part of oversight, particularly for agencies tasked with that role in contracting efforts involving public funds.  From a project management perspective, however, the only beneficiaries seem to be one-off best-of-breed software tool manufacturers and some consultants who specialize in this sort of thing.

Assessment of the quality of our antecedent project artifacts is a necessary evil only because the defects in those plans undermine our ability to determine what is really happening in the project.  It is but a band-aid–a temporary patch for a more systemic problem, for we must ask ourselves:  how was a schedule that breached several elements of the 14-Point Assessment constructed in the first place?  This is, of course, a rhetorical question and one well known by most, if not all, of my colleagues.

That many of our systems are designed to catch relatively basic defects after the fact and to construct lists to correct them–consuming time and resources that are rarely planned for by project teams–is, in fact, a quantitative indicator of a systemic problem.  There is no doubt, as Glen Alleman said in his most recent post on Herding Cats, that we need to establish intermediate closed systems to assess performance in each of the discrete segments of our plan.  This is Systems Analysis 101.  But these feedback loops are rarely budgeted.  When they are budgeted, as in EVM, they are usually viewed as a regulatory cost requiring documentation to prove that the benefits outweigh it.  Those benefits would be more generally accepted if the indicators were more clearly tied to cause and effect and provided in a timely enough manner for course correction.  But a great deal of effort is still expended on fixing the underlying artifacts on which our analysis depends, well after the fact.  This makes project performance analysis that much harder.
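
To give a sense of how basic these defects are, here is a minimal sketch of two checks in the spirit of the 14-Point Assessment (a simplified task structure of my own, not the assessment’s actual definition): tasks with missing network logic, and tasks carrying excessive total float.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    task_id: str
    predecessors: List[str] = field(default_factory=list)
    successors: List[str] = field(default_factory=list)
    total_float_days: Optional[int] = None

def missing_logic(tasks: List[Task]) -> List[str]:
    """Tasks with no predecessor or no successor: incomplete network logic."""
    return [t.task_id for t in tasks if not t.predecessors or not t.successors]

def high_float(tasks: List[Task], threshold_days: int = 44) -> List[str]:
    """Tasks whose total float exceeds a threshold (44 working days is the commonly cited figure)."""
    return [t.task_id for t in tasks
            if t.total_float_days is not None and t.total_float_days > threshold_days]
```

Checks like these cost essentially nothing to run at the moment the schedule is constructed; the systemic problem is that they are run months later, as an audit, with the corrections unbudgeted.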

Making course corrections based on the fixes you take when entering port is acceptable practice.  Using a wrong or erroneous chart is incompetence.  Thus, there is an alternative way to view this problem, and that is to accept no defects in the construction of the governing project management planning documents.

In discussing this issue, I have been reminded by colleagues that doing this very thing was the stated purpose of the Integrated Baseline Review (IBR) when it was first deployed by the Navy in the 1990s.  I served as a member of that first IBR Steering Group when the process was being tested and deployed.  In fact, the initial recommendations of the group were that the IBR was not to be treated or imposed as an external inspection–which is much how it is being applied today–but rather as an opportunity to identify risks and opportunities within the program team, to ensure that the essential project artifacts–the integrated master plan (IMP), integrated master schedule (IMS), performance measurement baseline (PMB), risk analysis systems, technical performance plan and milestones, etc., which would eventually inform both the Systems Description (SD) and the CAM notebooks–were properly constructed.  In addition, the IBR was intended to be reconvened as necessary over the life of a project or program when changes necessitated adjustments to the processes that affected program performance and expectations.

So what is the solution?  I would posit that it involves several changes.

First, the artificial dichotomy between the cost and schedule analyst disciplines needs to end, both across the industry and within the professional organizations that support them.  That there is both a College of Performance Management and a multiplicity of schedule-focused organizations–separate and, in many cases, in competition with one another–is a symptom of this division.  It made a great deal of sense to create specialties when these disciplines were still evolving and involved specialized knowledge that created very high barriers to entry.  But the advancement of information systems has not only broken down these barriers to understanding and utilizing the methods of these specialties, the cross-fertilization of disciplines has provided us insights into the systems we are tasked to manage in ways that seemed impossible just five or six years ago: two to three full software generations over that time.

Second, we have allowed well-entrenched technology to constrain our view of the possible for too long.  We obviously know that we have come a long way from physically posting time-phased plans on a magnetic board.  But we have also come a long way from being constrained by software technology that limits us to hard-coded applications that do only one thing–whether that one thing be EVM, schedule analysis, technical performance, or (to finish the analogy) fixing errors well after we have decided to sail.  All too often that last condition puts us in the shoals.