New York Times Says Research and Development Is Hard…but maybe not

At least that is what a reader is led to believe by this article that appeared over the weekend.  For those of you who didn't catch it, Alphabet, whose pure R&D shop was known as Google X under the old Google moniker, does pure R&D.  According to the reporter, one Conor Dougherty, the problem, you see, is that R&D doesn't always translate into a direct short-term profit.  He then makes this absurd statement:  "Building a research division is an old and often unsuccessful concept."  He knows this because some professor at Arizona State University–that world-leading hotbed of innovation and high tech–told him so.  (Yes, there is sarcasm in that sentence.)

Had Mr. Dougherty understood new technology, he would know that all technology companies are, at core, research organizations that sometimes make money in the form of net profits, just as someone once accurately described Tesla to me as a battery company that also makes cars (and lately it's showing).  But let's return to the howler of a statement about research divisions being unsuccessful, apply some, you know, facts and empirical thought, and go from there.

The most obvious example of a research division is Bell Labs.  From the article one would think that Bell Labs is a dinosaur of the past, but no, it still exists as Nokia Bell Labs.  Bell Labs was created in 1925 out of antecedents in both Western Electric and AT&T, but its true roots go back to 1880, when Alexander Graham Bell, after being awarded the Volta Prize for the invention of the telephone, opened Volta Labs in Washington, D.C.  It was in the 1920s, though, that Bell Labs, "the Idea Factory," really hit its stride.  Its researchers improved telephone switching and sound transmission, and invented radio astronomy, the transistor, the laser, information theory (about which I've written extensively, and which bears directly on computing and software), Unix, and the programming languages C and C++.  Bell established the precedent that researchers retained, and were compensated for, the use of their inventions and IP.  This goes well beyond the assertion in the article that Bell Labs largely made "contributions to basic, university-style research."  I guess New York Times reporters, fact checkers, and editors don't have access to the Google search engine or Wikipedia.

Between 1937 and 2014, seventeen of its researchers were awarded the Nobel Prize or the Turing Award.  Even those who never garnered such an award, like Claude Shannon of the aforementioned information theory, form a Who's Who of high-tech research.  What they didn't invent directly they augmented and brought to practical use, with a good deal of their output feeding public R&D through consulting and other contracts with the Department of Defense and the federal government.

The reason why Bell Labs didn't continue as a research division of AT&T wasn't due to some dictate of the market or investor dissatisfaction.  On the contrary, AT&T (Ma Bell) dominated its market, and Bell Labs ensured that it stayed far ahead of any possible entrant.  This is why in 1984 the U.S. Justice Department's divestiture agreement with AT&T under the antitrust laws split Bell Labs off from the local carriers in order to promote competition.  Whether the divestiture was a good deal for the American people and had positive economic effects is still a cause for debate, but it is likely that the plethora of choices in cell phone and other technologies that have emerged since that time would not have gone to market without that antitrust action.

Since 1984, Bell Labs has continued its significant contributions to the high tech industry, first through AT&T Technologies, which was spun off in 1996 as Lucent Technologies–which is probably why Mr. Dougherty didn't recognize it.  A merger with Alcatel and then acquisition by Nokia gave it its current moniker.  Over that period Bell Labs continued to innovate, contributing significantly to pushing the boundaries of broadband speed and the use of imaging technology in the medical field.

So what this shows is that, while not every bit of R&D leads directly to profit, especially in the short term, a mix of types of R&D does yield practical results.  Anyone who has worked in project management understands that R&D, by definition, represents the handling of risk.  Furthermore, the lessons learned and spin-offs are hard to estimate in advance, though they may result in practical technologies in the short and medium term.

When one reads past the lede and the "research division is an old and often unsuccessful concept" gaffe, among others, what you find is that Google specifically wants this portion of the research division to come up with a series of what it calls "moon shots".  In techie lingo this is often called a unicorn, and from personal experience I am part of a company that recently was characterized as delivering a unicorn.  This is simply a shorthand term for producing a solution that is practical, groundbreaking, and shifts the dialogue of what is possible.  (Note that I'm avoiding the tech hipster term "disruption".)

Another significant fact that we find out about Google X is the following:

X employees avoid talking about money, but it is not a subject they can ignore. They face financial barriers that can shut down a project if it does not pan out as quickly as planned. And they have to meet various milestones before they can hire more people for their teams.

This sounds a lot like project and risk management.  But Google X goes a bit further.

Failure bonuses are also an example of how X, which was set up independent of Google from the outset, is a leading indicator of sorts for how the autonomous Alphabet could work. In Alphabet, employees who do not work for Mother Google are supposed to have their financial futures tied to their own company instead of Google’s search ads. At X, that means killing things before they become too expensive.

Note that the incentive here–a real financial incentive to the team members–is to manage risk.  No doubt there are no #NoEstimates cultists at Google.  Psychologically, providing an incentive to find failure defeats groupthink and optimism selection bias.  Much of this, particularly the expectation of non-existential failure, sounds amazingly like an article recently published on AITS.org by yours truly.

The delayed profitability of software and technology companies is commonplace.  The reason, at least to my thinking, is that any technology type worth their salt will continue to push the technology once the first version has been brought to market.  If you're resting on your laurels then you're no longer in the software technology business; you're in the retail business and might as well be selling candy bars or any other consumer product.  What you're not doing is providing a solution that is essential to the target domain.  Practically, this means that in garnering value, net profitability is not necessarily the measure of success, especially in the first years.

For example, market leaders such as Box, Workday, and Salesforce have gone years without a net profit, though their revenues and market share are significant.  Facebook did not turn a profit for five years; Amazon took six, and even those figures were questionable.  The competing need for any executive running a company is between value (the intrinsic value of IP, the existing customer base, and the potential customer base) and profit.  The job of the CEO is not owed just to stockholders, yet the article's lede is clearly biased in that way.  The fiduciary and legal responsibility of the CEO is to the customers, the employees, the entity, and the stockholders–and not necessarily in that order.  There is thus a natural conflict in balancing these competing interests.

Overall, if one ignores the contributions of the reporter, the case of Google X is a fascinating one for its expectations and handling of risk in R&D-focused project management.  It takes value where it can and cuts its losses through incentives to find risk that can't be handled.  An investor who lives in the real world should find this reassuring.  Perhaps these lessons on incentives can be applied elsewhere.

 

We Gotta Get Out of This Place — Are Our Contracting Systems Agile Enough?

The question in the title refers to agile in the “traditional” sense and not the big “A” appropriated sense.  But I’ll talk about big “A” Agile also.

It also refers to a number of discussions I have been engaged in recently among some of the leading practitioners in the program and project management community.  Here are a few data points:

a.  GAO and other oversight agencies have been critical of requirements changes over the life cycle of a project–particularly in DoD and other federal agencies–that contribute to cost growth.  The defense of these changes has been that many of them were necessary in order to meet new circumstances.  Okay, sounds fair enough.

But to my way of thinking, if the changes were necessary to keep the system from being obsolete upon deployment, or to correct an emergent threat that would have undermined project success and its rationale, then by all means we need to course correct.  But if the changes address neither of those scenarios and simply improve the system at more than marginal cost, then they are unnecessary.

How can I make such a broad statement, and what is the alternative, you may ask.  My rationale is that a change, if it represents new development involving significant funding, should stand on its own merits, since it is essentially a new project.

All of us who have been involved in complex projects have seen cases where, as a result of development (and quite often failure), we discover new methods and technologies within the present scope that garner an advantage not previously anticipated.  This doesn't happen as often as we'd like, but it does happen.  In my own survey and project work developing a methodology for incorporating technical performance into project cost, schedule, and risk assessments, we found that failing a test, for example, had value, since it allowed engineers to determine pathways for not only achieving the technical objective but, oftentimes, exceeding the parameter.  We find that for x% more in investment as a result of the development, test, milestone review, etc., we can improve the performance of some aspect of the system.  In that case, if the cost or effort is marginal, the improvement is part of the core development process within the original scope, as in the sketch below.  Limited internal replanning may be necessary to incorporate the change, but the remainder of the project can largely go along as planned.
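To make that distinction concrete, here is a minimal sketch in Python of the decision rule just described.  The 5% threshold and the dollar figures are hypothetical assumptions of mine, not values from any actual program.

```python
# Hypothetical decision rule: absorb a discovered improvement into the
# existing scope only when its cost is marginal relative to the remaining
# budget; otherwise it must stand on its own merits as a separate project.

MARGINAL_THRESHOLD = 0.05  # assumption: "marginal" means <= 5% of remaining budget

def disposition(improvement_cost: float, remaining_budget: float) -> str:
    """Classify an improvement discovered during development or test."""
    if improvement_cost <= MARGINAL_THRESHOLD * remaining_budget:
        return "absorb: limited internal replanning within the original scope"
    return "separate project: stands on its own merits"

# Notional example: a failed test reveals a path to exceeding a technical
# parameter, on a project with $30M of budget remaining.
print(disposition(400_000, 30_000_000))    # ~1.3% of remaining budget -> absorb
print(disposition(4_000_000, 30_000_000))  # ~13% -> separate project
```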

Alternatively, however, inserting new effort in the form of changes to major subsystems involves major restructuring of the project.  This disrupts the business rhythm of the project, forcing a cultural shift within the project team to socialize the change and incorporate the new work.  Change of this type not only causes what is essentially a reboot of the project, but also tends to add risk to the project and program.  This new risk will manifest itself as cost risk initially but, through risk handling, will also manifest itself as technical and schedule risk.

The result of this decision, driven solely by what may seem to be urgent operational considerations, is to undermine project and program timeliness, since there is a financial impact to these decisions.  When you increase risk to a program, the reaction of the budget holder is to provide an incentive to the program manager to manage risk more closely.  This oftentimes invites what in D.C. parlance is called a budget mark, but to the rest of us is called a budget cut.  When socialized within the project, such cuts are usually taken out of management reserve or the non-mandatory activities that were put in place as contingencies to handle overall program risk at inception.  The mark is usually equal to the amount of internal risk caused by the requirements change.  Thus, adding risk is punished, not rewarded, because money is finite and must be applied to projects and programs that demonstrate that they can execute the scope against the plan and expend the funds provided to them.  So the total scope (and thus cost) of the project will increase, but the flexibility within the budget base will decrease, since all of that money is now committed to handling risk.  Unanticipated risk, therefore, may not be effectively handled in the future.  The notional arithmetic below illustrates the squeeze.
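As a notional illustration (every figure below is invented, not drawn from any real program), the arithmetic of a mark equal to the risk induced by a requirements change looks like this:

```python
# Notional budget-mark arithmetic: the mark equals the internal risk
# introduced by the requirements change and comes out of management
# reserve (MR), the contingency held for overall program risk.

budget_base = 100_000_000        # performance measurement baseline (hypothetical)
management_reserve = 8_000_000   # contingency for overall program risk (hypothetical)

change_scope_cost = 12_000_000   # new work added by the requirements change
induced_risk = 3_000_000         # estimated internal risk caused by the change

total_scope = budget_base + change_scope_cost       # total scope (and cost) grows
mr_after_mark = management_reserve - induced_risk   # the mark is taken from MR

print(f"Total scope grows to ${total_scope:,}")               # $112,000,000
print(f"MR left for unanticipated risk: ${mr_after_mark:,}")  # $5,000,000
# Scope is up 12%, but the reserve protecting the rest of the project
# is down 37.5% -- adding risk is punished, not rewarded.
```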

At first the application of a budget mark in this case may seem counterintuitive, and when I first went through the budget hearing process it certainly did to me.  That is, until one realizes that at each level the budget holder must demonstrate that the funds are being used for their intended purpose.  There can be no "banking" of money, since each project and program must compete for the dollars available at any one time–it's not the PM's money; he or she has use of that money to provide the intended system.  Unfortunately, piggybacking significant changes (and constructive changes) onto the original scope is common in project management.  Customers want what they want, and business wants that business.  (More on this below.)  As a result, the quid pro quo is: you want this new thing?  Okay, but you will now have to manage risk based on the introduction of new requirements.  Risk handling, then, will most often lead to increased duration.  This can and often does result in a non-virtuous spiral in which requirements changes lead to cost growth and project risk, which lead to budget marks that restrict overall project flexibility, which in turn tend to add duration.  A project under these circumstances finds itself either pushed to the point of not being deployed, or deployed many years after the system needed to be in place, at much greater overall cost than originally anticipated.

As an alternative, by making improvements stand on their own merits, a proper cost-benefit analysis can be completed to determine whether the improvement is timely and how it measures up against the latest alternative technologies available.  It becomes its own project and not a parasite feeding off of the main effort.  This is known as the iterative approach, and those in software development know it very well: you determine the problem that needs to be solved, figure out the features and approach that provide the 80% solution, and work to get it done.  Improvements can come after version 1.0–coding is not a welfare program for developers, as the Agile Cult would have it.  The ramifications for project and program managers are apparent: they must not only be aware of the operational and technical aspects of their efforts, but also know the financial impacts of their decisions and take those into account.  Failure to do so is a recipe for self-inflicted disaster.  A sketch of such an on-its-own-merits comparison follows.
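Here is a minimal sketch of what "standing on its own merits" might look like in practice: a simple benefit-cost screen of the proposed improvement against the alternatives.  The candidates and dollar figures are invented for illustration.

```python
# Hypothetical benefit-cost screen: the proposed improvement competes on
# equal footing with alternative technologies instead of riding along on
# the main effort's budget.  All figures are notional.

candidates = {
    "proposed improvement":   {"benefit": 9_000_000, "cost": 4_000_000},
    "commercial alternative": {"benefit": 7_500_000, "cost": 2_500_000},
    "defer to next version":  {"benefit": 5_000_000, "cost": 1_000_000},
}

# Rank by benefit-cost ratio; the improvement proceeds only if it wins.
ranked = sorted(candidates.items(),
                key=lambda kv: kv[1]["benefit"] / kv[1]["cost"],
                reverse=True)

for name, c in ranked:
    print(f"{name}: benefit-cost ratio {c['benefit'] / c['cost']:.2f}")
```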

This leads us to the next point.

b.  In the last 20+ years, major projects have found that the time from initial development to production has increased several times over.  The poster child for this phenomenon in the military services is the F-35 Lightning II jet fighter, also known as the Joint Strike Fighter (JSF), which will continue in development at least through 2019 and perhaps into 2021.  From program inception in 2001 to Initial Operational Capability (IOC), it will be 15 years, at least, before the program is ready to deploy and go to production.  This scenario is being played out across the board in both government and industry for large projects of all types, with few exceptions.  Software projects in particular either fail outright or fall short of their operational goals in the overwhelming majority of cases.  This would suggest that, aside from the typical issues of configuration control, project stability, and rubber baselining (and aside from the self-reinforcing cost-growth culture of the Agile Cult), there are larger underlying causes involved than simply contracting systems, though those are probably a contributing factor.

From a hardware perspective, in terms of military strategy, there may be a very good reason why it doesn't matter that certain systems are not deployed immediately: once deployed, they are expensive to maintain logistically.  The logistics of deployed systems compete for dollars that could be better spent in developing–but not deploying–new technologies.  The answer, of course, is somewhere in between.  You can't use that notional jet fighter when you needed it half a world away yesterday.

c.  Where we can see the effects on behavior from an acquisition systems perspective is in the comparison of independent estimates to what is eventually negotiated.  For example, one military service recently gave the example of a program in which the confidential independent estimate was $2.1 billion.  The successful commercial contractor team–let's call them Team A–whose proposal was deemed technically acceptable, made an offer at $1.2 billion, while the unsuccessful contractor team, Team B, offered near the independent estimate.  Months later, thanks to constructive changes, the eventual cost of the contract will be at or slightly above the independent estimate, based on an apples-to-apples comparison of the scope.  Thus it is apparent that Team A bought into the contract.  Apparently, honesty in proposal pricing isn't always the best policy.

I have often been asked what the rationale could be for a contractor to "buy in," particularly on such large programs involving so much money.  The answer, of course, is "it depends."  Team A could have had the technological lead in the systems being procured and been defending its territory, so buying in, even without constructive changes, was deemed worth the tradeoff.  Perhaps Team A was behind in the technologies involved and would use the contract as a means of financing the gap.  Team A could have had an excess of personnel with technical skills complementary to those needed for the effort but otherwise not employed within their core competency, so rather than lose them it was worth bidding at or near cost for the perceived effort.  These are, of course, the most charitable assumed rationales, though also the ones I have most often encountered.

The real question in this case is how, even given the judgment of the technical assessment team, the contracting officer could keep a proposal so far below the independent estimate within the competitive range.  If the government's requirements are so vague that two experienced contracting teams can fall so far apart, it should be apparent that either the solicitation is defective or the scope is not completely understood.  A notional screen for catching this is sketched below.
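As a purely notional sketch–actual competitive-range and cost-realism determinations follow the FAR and agency practice, and the 75% floor below is an assumption of mine–a simple screen against the independent estimate would have flagged the low offer:

```python
# Notional cost-realism screen: flag offers far below the independent
# government estimate as possible buy-ins.  The 75% floor is an assumed
# threshold for illustration only.

independent_estimate = 2.1e9   # confidential independent estimate ($)
REALISM_FLOOR = 0.75           # assumption: flag offers below 75% of the estimate

offers = {"Team A": 1.2e9, "Team B": 2.0e9}

for team, price in offers.items():
    if price < REALISM_FLOOR * independent_estimate:
        print(f"{team}: ${price / 1e9:.1f}B -- flag for cost realism (possible buy-in)")
    else:
        print(f"{team}: ${price / 1e9:.1f}B -- within a realistic range")
```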

I think it is this question that leads us to the more interesting aspects of acquisition, program, and project management.  For one, I am certain that a large acquisition like the one described is highly visible and of import to the political system and elected officials.  In the face of such scrutiny it would take a procuring contracting officer (PCO) of great experience and internal fortitude, confident in their judgment, to reset the process after proposals had been received.

There is also pressure in contracting from influencers within the requiring organizations, which are under pressure to deploy systems to meet their needs as expeditiously as possible–especially after the fairly lengthy set of activities that must occur prior to the issuance of a solicitation.  The development of a good set of requirements involves multiple stakeholders on highly technical issues and requires a great deal of coordination and development by a centralized authority.  Absent such guidance, the approach to requirements can be defective from the start.  For example, does the requiring organization write a Statement of Work, a Performance Work Statement, or a Statement of Objectives?  Which contract type is most appropriate for the work being performed and the risk involved?  Should there be one overriding approach, or a combination of approaches based on the subsystems that make up the entire system?

But even given all of these internal factors, there are others unique to our own time.  I think it would be interesting to see how these factors have affected the conditions that everyone in our discipline deems problematic.  They include the reduced diversity of the industrial and information verticals upon which the acquisition and logistics systems rely; the erosion of domestic sources of expertise, manufactured materials, and commodities; the underinvestment in training, personnel development, and retention within government, which undermines necessary expertise; specialization within the contracting profession that separates the stages of acquisition into stovepipes, undermining continuity and cohesiveness; the issuance of patent monopolies that stifle and restrict competition and innovation; and unproductive rent-seeking behavior on the part of economic elites that undermines the effectiveness of R&D- and production-centric companies.  Finally, they also include those government policies instituted since the early 1980s that support these developments.

The importance of any of these cannot be overstated, but let's take the issue of rent seeking, which has caused the "financialization" of almost all aspects of economic life, as it relates to what a contracting officer must face when acquiring systems.  Private sector R&D–which in the past mostly fell in response to economic dislocations, but which has been in a downward trend since the late 1960s overall and especially since the mid 1980s–has fallen precipitously since the bursting of the housing bubble and the resultant financial crisis in 2007, with no signs of recovery.  Sequestration and other austerity measures in FY 2015 will at the same time negatively impact public R&D, continuing the overall trend with no offset.  This fall in R&D has a direct impact on productivity and undercuts the effectiveness of using all of the tools at hand to find existing technologies to offset the ones that require full R&D.  It appears to have raised the intrinsic risk in the economy as a whole for efforts of this type, and it is this underlying risk that we see at the micro and project management level.