The Monster Mash — Zombie Ideas in Project and Information Management

I just completed a number of meetings and discussions among thought leaders in complex project management this week, and I was struck by several zombie ideas in project management, especially related to information, that just won’t die.  The term zombie idea is usually attributed to the Nobel economist Paul Krugman, from his excellent and highly engaging (as well as brutally honest) posts at the New York Times.  For those not familiar, a zombie idea is “a proposition that has been thoroughly refuted by analysis and evidence, and should be dead — but won’t stay dead because it serves a political purpose, appeals to prejudices, or both.”

The point, to a techie–or anyone committed to intellectual honesty–is that these ideas are often posed in the form of question begging; that is, they advance invalid assumptions in the asking or the telling.  Most often they take the form of the assertive half of the coin minted by “when did you stop beating your wife?”-type questions.  I’ve compiled a few of these for this post, and it is important to understand the purpose for doing so.  It is not to take individuals to task or to bash non-techies–who have a valid reason to ask basic questions based on what they’ve heard–but to challenge propositions put forth by people who should know better, given their technical expertise or experience.  Furthermore, knowing and understanding technology and its economics is essential today for anyone operating in the project management domain.

So here are a few zombies that seem to be most common:

a.  More data equals greater expense.  I dealt with this issue in more depth in a previous post, but it’s worth repeating here:  “When we inform Moore’s Law by Landauer’s Principle, that is, that the energy expended in each additional bit of computation becomes vanishingly small, it becomes clear that the difference in cost in transferring a MB of data as opposed to a KB of data is virtually TSTM (“too small to measure”).”  The real reason why we continue to deal with this assertion is both political in nature and rooted in social human interaction.  People hate oversight and they hate to be micromanaged, especially to the point of disrupting the work at hand.  We see behavior, especially in regulatory and contractual relationships, where the reporting entity plays the game of “hiding the button.”  This behavior is usually justified by pointing to examples of dysfunction, particularly on the part of the checker, where information submissions lead to the abuse of discretion in oversight and management.  Needless to say, while such abuse does occur, no one has yet pointed quantitatively to data (as opposed to anecdotes) showing how often this happens.
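For a sense of just how small these computational costs are, here is a back-of-envelope sketch in Python using the Landauer limit (kT ln 2 joules per bit at room temperature).  The figures are theoretical lower bounds offered purely for illustration, not a model of any real network or data center.

```python
import math

# Landauer limit: theoretical minimum energy to process (erase) one bit at temperature T.
k_B = 1.380649e-23              # Boltzmann constant, joules per kelvin
T = 300.0                       # approximate room temperature, kelvin
E_BIT = k_B * T * math.log(2)   # ~2.9e-21 joules per bit

def min_energy_joules(n_bytes: int) -> float:
    """Theoretical lower bound on the energy needed to process n_bytes of data."""
    return n_bytes * 8 * E_BIT

for label, size in [("1 KB", 1_000), ("1 MB", 1_000_000)]:
    print(f"{label}: ~{min_energy_joules(size):.1e} J")
# 1 KB: ~2.3e-17 J
# 1 MB: ~2.3e-14 J -- the difference is vanishingly small next to any management cost
```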

I would hazard a guess that virtually anyone with some experience has had to work for a bad boss, where every detail and nuance is microscopically interrogated to the point where it becomes hard to make progress on the task at hand.  Such individuals, who have been advanced under the Peter principle, must, no doubt, be removed from such a position.  But this happens in any organization, whether private enterprise–especially in places where there is no oversight, checks and balances, means of appeal, or accountability–or government, and it is irrelevant to the assertion.  The expense item being described is bad management, not excess data.  Thus, such assertions are based on the antecedent assumption of bad management, which goes hand-in-hand with…

b. More information is the enemy of efficiency.  This is the other half of the economic argument that more data equals greater expense.  I should add that where the conflict has been engaged over these issues, some unjustifiable figure is usually given for the cost of the additional data–a figure certainly not supported by the high-tech economics cited above.  Another aspect of both of these perspectives comes from the non-techie conception that more data and information requires effort equivalent to the pre-digital era, especially in conceptualizing the work that once went into human-readable reports.  This is really an argument for shifting the focus of software away from fixed report formatting based on limited data and toward complete data, which can be formatted and processed as necessary.  If the right and sufficient information is provided up-front, then additional questions and interrogatories that demand supplemental data and information–with the attendant multiplication of data streams and data islands that truly do add cost and drive inefficiency–are at least significantly reduced, if not eliminated.

c.  Data size adds unmanageable complexity.  This was actually put forth by another software professional–and no doubt the non-techies in the room would have nodded their heads in agreement (particularly given a and b above) if opposing expert opinion hadn’t been offered.  Without putting too fine a point on it, a techie saying this in an open forum is equivalent to whining that your job is too hard.  This will get you ridiculed at development forums, where you will be viewed as an insufferable dilettante.  Digitized technology has been operating under the phenomenon of Moore’s Law for well over 40 years.  Under the original formulation, computational and media storage capability doubles roughly every two years, with the observed doubling period generally falling somewhere between 12 and 24 months.  Thus, what was considered big data in, say, 1997, when NASA researchers first coined the term, is not considered big data today.  No doubt, what is considered big data this year will not be considered big data two years from now.  The term itself is relative and may very well become archaic.  The manner in which data is managed–its rationalization and normalization–is important in successfully translating disparate data sources, but the assertion that big is scary is simply fear mongering because you don’t have the goods.
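To put the doubling arithmetic in perspective, here is a toy calculation.  The 18-month doubling period is an assumption for this sketch, not a measured figure, but it shows why yesterday’s “big” is today’s routine.

```python
# Illustrative only: how capacity compounds under a Moore's-Law-style doubling.
# The 18-month doubling period is an assumption for this sketch, not a measured figure.
def capacity_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """Return the growth factor after `years` of repeated doubling."""
    return 2 ** (years / doubling_period_years)

# "Big data" of 1997 versus the hardware available when this post was written (2014):
print(f"{capacity_multiple(2014 - 1997):,.0f}x")   # about 2,580x under these assumptions
```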

d.  Big data requires more expensive and sophisticated approaches.  This flows from item c above and is often self-serving.  Scare stories abound, often using big numbers that sound scary.  All data that has a common use across domains has to be rationalized at some point if it comes from disparate sources, and there are a number of efficient software techniques for accomplishing this.  Furthermore, support for agnostic APIs and common industry standards, such as the UN/CEFACT XML schemas, takes much of the rationalization and normalization work out of a manual process.  Yet I have consistently seen suboptimized methods put forth that require an army of data scientists and coders to engage in brute-force data mining–a methodology that has been around for almost 30 years, except that now it carries the moniker of big data.  Needless to say, this approach is probably the most expensive and slowest out there.  But then, the motivation for its use by IT shops is usually based in rice bowl and resource politics.  This is flimflam–an attempt to revive an old zombie under a new name.  When faced with such assertions, see Moore’s Law and keep on looking for the right answer.  It’s out there.
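As a minimal sketch of what that rationalization looks like in practice, the Python below maps two hypothetical exports–one CSV, one XML–into a single common schema.  The column names, tags, and schema fields are invented for illustration; they are not actual UN/CEFACT elements.

```python
# A minimal sketch of rationalizing two disparate exports into one common schema.
# The column names, XML tags, and schema fields are invented for illustration;
# they are not actual UN/CEFACT elements.
import csv
import io
import xml.etree.ElementTree as ET

COMMON_SCHEMA = ("task_id", "period", "planned_cost", "actual_cost")

def from_csv(text):
    """Source A: a CSV export with its own (hypothetical) column names."""
    for row in csv.DictReader(io.StringIO(text)):
        yield (row["WBS"], row["Month"], float(row["Planned"]), float(row["Actual"]))

def from_xml(text):
    """Source B: an XML export with a different (hypothetical) structure."""
    for item in ET.fromstring(text).iter("CostItem"):
        yield (item.get("taskId"), item.get("period"),
               float(item.findtext("Planned")), float(item.findtext("Actual")))

def rationalize(*sources):
    """Merge every source into one normalized list of records keyed to the common schema."""
    return [dict(zip(COMMON_SCHEMA, record)) for source in sources for record in source]

# Example: both feeds end up in the same shape, ready to be formatted as needed.
csv_feed = "WBS,Month,Planned,Actual\n1.1,2014-10,100,110\n"
xml_feed = ('<Report><CostItem taskId="1.2" period="2014-10">'
            '<Planned>50</Planned><Actual>45</Actual></CostItem></Report>')
print(rationalize(from_csv(csv_feed), from_xml(xml_feed)))
```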

e.  Performance management and assessment is an unnecessary “regulatory” expense.  This one keeps coming up as part of a broader political agenda beyond just project management.  I’ve discussed in detail the issues of materiality and prescriptiveness in regulatory regimes here and here, and have addressed the obvious legitimacy of organizations establishing such regimes in fiduciary, contractual, and governmental environments.

My usual response to the assertion of expense is simply to point to the unregulated derivatives market largely responsible for the financial collapse, and to the deep economic recession that followed once the housing bubble burst (and, aside from the cost of human suffering and joblessness, the expenses related to TARP).  So much for how well the deregulation of banking went.  Even after the Band-Aid of Dodd-Frank the situation probably requires a bit more vigor, and should include the ratings agencies as well as the real estate market.  But here is the fact of the matter: such expenses cannot be monetized as additive, because “regulatory” expenses usually represent an assessment of the day-to-day documentation, systems, and procedures required when performing normal business operations and due diligence in management.  I attended an excellent presentation last week where the speaker, tasked with finding unnecessary regulatory expenses, admitted as much.

Thus, what we are really talking about is an expense that is an essential prerequisite to entry in a particular vertical, especially where monopsony exists as a result of government action.  Moral hazard, then, is defined by the inherent risk assumed by contract type, and should be assessed on those terms.  Given that the current trend is to raise thresholds, the question–in the government sphere–is whether public opinion will be as forgiving in a situation where moral hazard assumes $100M in risk when things head south, as they do with regularity in project management.  The way to reduce that moral hazard is through sufficiency of submitted data.  Thus, we return to my points in a and b above.

f.  Effective project assessment can be performed using high level data.  It appears that this view has its origins in both self-interest and a type of anti-intellectualism/anti-empiricism.

In the former case, the bias is usually based on the limitations of either individuals or the selected technology in providing sufficient information.  In the latter case, the argument results in a tautology that reinforces the fallacy that absence of evidence proves evidence of absence.  Here is how I have heard the justification for this assertion: identifying emerging trends in a project does not require that either trending or lower level data be assessed.  The projects in question are very high dollar value, complex projects.

Yes, I have represented this view correctly.  Aside from questions of competency, I think the fallacy here is self-evident.  Study after study (sadly not all online, but performed within OSD at PARCA and IDA over the last three years) has demonstrated that high level data averages out and masks indicators of risk manifestation that could have been detected by looking at data at the appropriate level–the intersection of work and assigned resources.  In plain language, this requires integration of the cost and schedule systems, with risk first being noted through consecutive schedule performance slips.  When combined with technical performance measures, and effective identification of qualitative and quantitative risk tied to schedule activities, the early warning comes two to three months (and sometimes more) before the risk is reflected in the cost measurement systems.  You’re not going to do this with an Excel spreadsheet.  But, for reference, see my post Excel is not a Project Management Solution.
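To make the idea of consecutive schedule performance slips concrete, here is a simplified illustration in Python using the standard schedule performance index (SPI = earned value / planned value).  The 1.0 threshold and three-period window are assumptions for this sketch, not parameters taken from the studies cited above.

```python
# A simplified illustration of the early-warning idea: flag a task when its schedule
# performance index (SPI = earned value / planned value) stays below 1.0 for several
# consecutive reporting periods. The 1.0 threshold and three-period window are
# assumptions for this sketch, not parameters taken from the studies cited above.
def consecutive_slips(earned, planned, threshold=1.0, window=3):
    """Return True if SPI falls below the threshold for `window` straight periods."""
    streak = 0
    for ev, pv in zip(earned, planned):
        spi = ev / pv if pv else 1.0
        streak = streak + 1 if spi < threshold else 0
        if streak >= window:
            return True
    return False

# Example: three straight months of slippage trip the flag well before the slip
# shows up as a cost variance.
print(consecutive_slips(earned=[95, 90, 88, 85], planned=[100, 100, 100, 100]))  # True
```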

It’s time to kill the zombies with facts–and to behead them once and for all.

Highway to the (Neutral) Zone — Net Neutrality and More on Information Economics

Net Neutrality was very much in the news this week.  First, the President came out in favor of Net Neutrality on Monday.  Then later in the week the chair of the FCC, Tom Wheeler, who looked like someone caught with his hand in the cookie jar, vacillated on how the agency sees the concept of Net Neutrality.  Some members of Congress have taken exception.

For those of us in the software business, the decision of the FCC will determine whether the internet, which was created by public investment, will be taken over and dominated by a few large corporations.  The issue isn’t a hard one to understand.  Internet service providers–an area dominated by large telecommunications and cable oligopolies–would like to lay claim to the internet’s bandwidth and charge for levels of access and internet speed.  A small business, a startup, any small enterprise would be stuck in a slower internet, while those with the financial resources would be able to push their products and services into internet “fast lanes” by paying fees for the privilege, gaining an advantage in visibility, raising the barriers to entry for would-be competitors, and defending market share.  Conceivably, since these companies often provide their own products or are aligned with other large companies both vertically and horizontally, there would be little to stop a provider from controlling all aspects of the information that is available to consumers, teachers, citizens, researchers–virtually anyone who accesses the internet–which is virtually everyone today.  Those who claim that such use of power is unlikely because Comcast et al have committed themselves to the now defunct 2010 rules apparently haven’t read the fine print, are unfamiliar with recent economic history (such as Comcast’s throttling of BitTorrent in the late 2000s, Cox Cable’s blocking of some downloading, and other similar examples), or haven’t heard of Lord Acton.

When combined with attacks on public investments for community broadband (also known as public high speed internet) in cities and communities, we are seeing an orchestrated campaign by a few corporations not only to dictate the terms of the market, but also to control the market itself.  This is the classic definition of a corporate trust and monopoly.  It is interesting that those who constantly advocate for free, competitive markets are the first to move against them where they do exist.

Jeffrey Dorfman at Forbes–to pick just one example–falls into this category, seemingly twisting logic into pretzels to make his argument.  He argues by analogy when we only have to point out the conditions in the real world.  For example, I love the following statement:  “The key point that President Obama has missed along with all the rabid supporters of net neutrality is that ISPs and the companies that control the Internet backbone infrastructure that knits everything together do not have the power to pick winners and losers either. Consumers decide what products and services are successful because we adopt them. If an ISP blocks [Netflix] because of the bandwidth it requires, consumers who want Netflix will take their business elsewhere. If enough people do so, the ISP will have to change policies or go out of business.”  Hmmm.  So in large swaths of the United States where there is only one ISP, how will consumers choose Netflix or drive the ISP out of business?  What market mechanism or model applies to this scenario?  I cannot find in Samuelson or Friedman (or Smith, Ricardo, Keynes, Classical or neo-Classical economics, etc.)–or in a historical example, for that matter–a case where a company exerting monopoly power has been driven out of business due to consumer preference for a product.  More to the point, if an ISP prevents a company like Netflix from providing its service over the internet backbone, how would consumers know about it in the first place, especially if the monopoly substitutes its own equivalent service instead?

But Mr. Dorfman’s non sequiturs get better.  He follows up with the following statement:  “As the former chief economist for the FCC, Thomas Hazlett, pointed out this week in Time, Facebook, Instagram, Twitter, LinkedIn, (and many, many more success stories of innovation) all emerged without the benefit of net neutrality.”  Aside from committing the fallacy of argument from authority, he can’t get his facts right.

The internet as we know it didn’t really take its current shape and open up to commercial traffic until the mid-to-late 1990s.  The FCC created the first voluntary net neutrality rules in 2004, but the internet was still largely open, with many competing ISPs, well into the new century; thus net neutrality was largely a de facto condition.  In 2008 the FCC auctioned wireless spectrum with tight rules ensuring net neutrality, and followed this up with a broader set of requirements in 2010.  These 2010 rules did not apply to all ISPs because of restrictions imposed by the courts, but they functioned pretty well.  It wasn’t until 2014 that the 2010 rules were once again overturned by the courts.  Mr. Hazlett’s cited “point,” then, is factually inaccurate, since the companies he references did come into existence in an environment of de facto–partly voluntary and partly enforced–net neutrality.  What has changed is the use of the courts by corporations and revolving-door lawyers like Mr. Hazlett to undermine that condition.  What Mr. Hazlett would like to do is shut the door to new companies succeeding under the same set of rules as those earlier ones.

So what “net neutrality” is really about is addressing a problem supported by concrete examples, in which both the public interest and open market principles were violated when the fences came down.  In the scenario that Mr. Dorfman proposes in defense of corporate power, consumers don’t get a vote–to use the canard of ideologues that consumers “vote” to begin with.  The market sets price.  Consumer preferences are shaped by factors outside of the market, information being one of them.

As I noted in a previous blog post, research into the economics of information has revealed that it is a discipline with several unique characteristics, among these being that information is easily transferrable but requires some knowledge and investment of time to determine its utility.  Combined with the insight of social scientist Martin Sklar that the capital investment required to replace the existing material conditions of civilization has been falling steadily, what we see is another explosion of technological innovation disrupting capital-intensive markets as information technologies substitute for labor and processing.  And this is only the beginning.  A company need not even be complacent to be overtaken–it need only be a little less agile, a bit more inflexibly structured.

All of that can be undermined, however, if a group or organization is able to control the means of obtaining and disseminating information.  This is why non-democratic regimes in the Middle East and China go to great lengths to control the internet backbone.  Here in the U.S., Comcast has argued that it doesn’t want to undermine neutrality (with some important exceptions and contrary history, by the way); it simply intends to take a percentage of what runs through the plumbing.  But, ignoring the contradictory facts of its history, its stated intent is rent-seeking behavior.  All arguments to the contrary, Comcast–and the other ISPs and telecom giants–haven’t hesitated to use both the courts and government power to increase their market power, and then to leverage the financial power that comes with that new advantage to greater advantage.  The historical comparison to the 19th century railroad Robber Barons is both accurate and instructive.  It’s an old playbook.

It will be interesting to see what the FCC does, given that Mr. Obama appointed a telecommunications lobbyist to run an agency formed to rein in those very industries.  The proponents of undermining net neutrality have co-opted the term “innovation” so that it is meaningless unless you are a cable company or ISP that can find another fee-for-service scheme.  Apparently innovation is only important for those private companies that have the bucks.  Rarely, however, do those with the bucks want to see the next Buck Rogers pass them by–and that, my friends, is the crux of the issue.

Family Affair — Part III — Private Monopsony, Monopoly, and the Disaccumulation of Capital

It’s always good to be ahead of the power curve.  I see that the eminent Paul Krugman had an editorial in the New York Times about the very issues that I’ve dealt with in this blog, his example in this case being Amazon.  It is just one of many articles about monopsony power prompted by the Hachette controversy.  In The New Republic, Franklin Foer also addresses this issue at length in the article “Amazon Must Be Stopped.”  In my last post on this topic I discussed public monopsony, an area in which I have a great deal of expertise.  But those of us in the information world who are not Microsoft, Oracle, Google, or one of the other giants also live in the world of private monopsony.

For those of you late to these musings (or who skipped the last ones), this line of inquiry began when my colleague Mark Phillips made the statement at a recent conference that, while economic prospects for the average citizen are bad, the best system that can be devised is one based on free market competition, misquoting Churchill.  The underlying premise of the statement, of course, is that this is the system that we currently inhabit, and that it is the most efficient way to distribute resources.  There is also usually an ideological component involved: some variation of free market fundamentalism and the concept that the free market is somehow separate from and superior to the government issuing the currency under which the system operates.

My counter to the assertions found in that compound statement is to show that the economic system we inhabit is not a perfectly competitive one, and that there are large swaths of the market that are dysfunctional and have given rise to monopoly, oligopoly, and monopsony power.  In addition, the ideological belief–which is very recent–that private economic activity arose almost spontaneously, with government being a separate component that can only be an imposition, is also false, given that nation-states and unions of nation-states (as in the case of the European Union) are the issuers of sovereign currency, and so choose through their institutions the amount of freedom, regulation, and competition that their economies foster.  Thus, the economic system that we inhabit is the result of political action and public policy.

The effects of the distortions of monopoly and oligopoly power in the so-called private sector are all around us.  But when one peels back the onion we can see clearly the interrelationships between the private and public sectors.

For example, patent monopolies in the pharmaceutical industry allow prices to be set not at the marginal value a competitive market would determine, but according to the impulse for profit maximization.  A recent example in the press–critiqued by economist Dean Baker–concerns the hepatitis-C drug Sovaldi, which goes for $84,000 a treatment, compared to markets in which the drug has not been granted a patent monopoly, where the price is about $900 a treatment.  Monopoly power, in the words of Baker, imposes the equivalent of a 10,000 percent tariff on those who must live under that system.  This was one of the defects in the system that I wrote about in my blog posts regarding tournaments and games of failure, though in pharmaceuticals the comparison seems to be more in line with gambling and lotteries.  Investors, who often provide funds based on the slimmest thread of a good idea and talent, are willing to put great sums of money at risk in order to strike it rich and realize many times their initial investment.  The distorting incentives of this system are well documented: companies tend to focus on those medications and drugs with the greatest potential financial rate of return guaranteed by the patent monopoly system*; drug trials downplay the risks and side-effects of the medications; and the price of medications is set so high as to place them out of reach of all but the richest members of society, since few private drug insurance plans will authorize such treatments given the cost–at least not without a Herculean effort on the part of individual patients.

We can also see monopoly power at work firsthand in the ongoing lawsuits between Apple and Samsung regarding the smartphone market.  For many years (until very recently) the U.S. patent office took a permissive stand in allowing technology firms to essentially patent the look and feel of a technology, as well as features that could be developed and delivered by any number of means.  The legal system, probably more technologically challenged than other areas of society, has been inconsistent in determining how to deal with these claims.  The fact finders in many cases have been juries, who are not familiar with the nuances of the technology.  One need not make a stretch to pick out practical analogies of these decisions.  If applied to automobiles, for example, the many cases that have enforced these patent monopolies would have restricted windshield wipers to the first company that delivered the feature.  Oil filters, fuel filters, fuel injection, and the like would all have been restricted to one maker.

The stakes are high not only for these two giant technology companies but also for consumers.  They have already used their respective monopoly power, established by their sovereign governments, to ensure pretty effectively that the barriers to entry in the smartphone market are quite high.  Now they are unleashing these same forces on one another.  In the end, the manufacturing costs of the iPhone 6–which is produced by slave labor under the capitalist variant of Leninist China–are certainly much lower than the $500 and more that they demand (along with the anti-competitive practice of requiring a cellular agreement with one of their approved partners).  The tariff that consumers pay over the actual cost of production and maintenance on smartphones is significant.  This is not remedied by the oft-heard response to “simply not buy a smartphone,” since that shifts responsibility for the public policy that allows the practice to flourish onto individuals, who are comparatively powerless against the organized power of the lobbyists who influenced public representatives to make these laws and institute the policy.

The fights over IP and patents (as well as net neutrality) are important for the future of technological innovation.  Given the monopsony power of companies that also exert monopoly power in particular industries, manufacturers are at risk of being squeezed where prices are artificially reduced through the asymmetrical relationship between large buyers and relatively small sellers.  Central planning, regardless of whether it is exerted by a government or a large corporation, is dysfunctional.  When those same corporations seek not only to exert monopoly and monopsony power, but also to control information and technology, they seek to control all aspects of an economic activity, not unlike the trusts of the time of the Robber Barons.  Amazon and Walmart are but two of the poster children of this situation.

The saving grace of late has been technological “disruption,” but this term has been misused to also apply to rent-seeking behavior.  I am not referring only to the kind of public policy rent-seeking that Amazon achieves when it avoids paying local taxes that apply to its competitors, or that Walmart achieves when it shifts its substandard pay and abusive employee policies to local, state, and federal public assistance agencies.  I am also referring to the latest controversies regarding AirBnB, Lyft, and Uber, which use loopholes in dealing with technology to sidestep health and safety laws in order to gain entry into a market.

Technological disruption, instead, is a specific phenomenon, based on the principle that the organic barriers to entry in a market are significantly reduced by the introduction of technology.  The issue over the control of and access to information and innovation is specifically targeted at this phenomenon.  Large companies aggressively work to keep out new entrants and to hinder innovations except those that they can control, conspiring against the public good.

The reason why these battles are lining up resides in the modern phenomenon known as the disaccumulation of capital, first identified by social scientist Martin J. Sklar.  What this means is that accumulation–measured by the time it takes to reproduce the existing material conditions of civilization–began declining in the 1920s.  As James Livingston points out in the same linked article in The Nation, “economic growth no longer required net additions either to the capital stock or the labor force….for the first time in history, human beings could increase the output of goods without increasing the essential inputs of capital and labor—they were released from the iron grip of economic necessity.”

For most of the history of civilization, the initial struggle of economics has been the ability of social organization to provide sufficient food, clothing, shelter, and medical care to people.  The conflict between competing systems has centered on their ability to achieve these purposes most efficiently without sacrificing individual liberty, autonomy, and dignity.  The technical solution for these goals has largely been achieved, but the efficient distribution of these essential elements of human existence has not been solved.  With the introduction of more efficient methods of information processing as well as production (digital printing is just the start), the process by which less capital in the aggregate is required to produce the necessities and other artifacts of civilization is accelerating.

Concepts like full employment will increasingly become meaningless, because the relationship of labor input to production that we came to expect in the recent past has changed within our own lifetimes.  Very small companies, particularly in technology, can have and have had a large impact.  In more than one market, even technology companies are re-learning the lesson of the “mythical man-month.”  Thus, the challenge of our time is to rethink the choices we have made and are making in terms of incentives and distribution, so that they maximize human flourishing.  But I will leave that larger question to another blog post.

For the purposes of this post, focused on technology and project management, these developments call for a new microeconomics.  The seminal paper that identified this need early on was by Brad DeLong and Michael Froomkin in 1997, entitled “The Next Economy.”  While some of the real-life examples they give provide, from our perspective today, a stroll down digital memory lane, their main conclusions about how information differs from physical goods remain relevant.  These are:

a.  Information is non-rivalrous.  That is, one person consuming information does not preclude someone else from consuming that information.  Information that is produced can be economically reproduced to operate in other environments at little to no marginal cost.  What they are talking about here is application software and the labor involved in producing a version of it.

b.  Information without exterior barriers is non-excludable.  That is, once information is known it is almost impossible to prevent others from knowing it.  For example, Einstein was the first to work out the mathematics of relativity, but now every undergraduate physics student is expected to fully understand the theory.

c.  Information is not transparent.  That is, oftentimes in order to determine whether a piece of software will achieve its intended purpose, effort and resources must be invested to learn it and, often, to apply it, even if initially only in a pilot program.

The attack coming from monopsony power is directed at the first characteristic of information; the attack coming from monopoly power is currently directed at the second.  Both undermine competition and innovation.  The first does so by denying small technology companies the ability to capitalize sufficiently to develop the infrastructure necessary to become sustainable, which oftentimes reduces a market to one dominant supplier.  The second does so by restricting the application of new technologies and of lessons learned from the past.  Information asymmetry is a problem for the third characteristic, since bad actors are oftentimes economically rewarded at the expense of high quality performers, as first identified in the used-car market in George Akerlof’s paper “The Market for Lemons” (paywall).

The strategy of some entrepreneurs in small companies in reaction to these pressures has been to either sell out and be absorbed by the giants, or to sell out to private equity firms that “add value” by combining companies in lieu of organic growth, loading them down with debt from non-sustainable structuring, and selling off the new entity or its parts.  The track record for the sustainability of the applications involved in these transactions (and the satisfaction of customers) is a poor one.

One of the few places where competition still survives is among small to medium sized technology companies.  For these companies (and the project managers in them) to survive independently requires an understanding of the principles elucidated by DeLong and Froomkin.  It also requires recognizing that information shares several tendencies with other technological innovation–improving efficiency and productivity, and reducing the input of labor and capital–but in ways that are unique to it.

The key is in understanding how to articulate value, how to identify opportunities for disruption, and how to understand the nature of the markets in which one operates.  One’s behavior will be different if the market is diverse and vibrant, with many prospective buyers and diverse needs, than if it is dominated by one or a few buyers.  In the end it comes down to understanding the pain of the customer and having the agility and flexibility to solve that pain in areas where larger companies are weak or complacent.

 

*Where is that Ebola vaccine–which mainly would have benefited the citizens of poor African countries and our own members of the health services and armed forces–that would have averted public panic today?