Don’t Be Cruel — Contract Withholds and the Failure of Digital Systems

Recent news over at Breaking Defense headlined a $25.7M withhold against Pratt & Whitney on the F135 engine, which powers the F-35 Lightning II aircraft, also known as the Joint Strike Fighter (JSF). The withhold in this particular case was imposed because the cost and schedule business system the company has in place to support project management was judged insufficient.

The enforcement of withholds for deficiencies in contractor business systems was instituted in August 2011. These business systems cover six areas:

  • Accounting
  • Estimating
  • Purchasing
  • EVMS (Earned Value Management System)
  • MMAS (Material Management and Accounting System)
  • Government Property

As of November 30, 2013, $19 million had been held back from BAE Systems Plc, $5.2 million from Boeing Co., and $1.4 million from Northrop Corporation. These came on the heels of a massive $221 million held back from Lockheed Martin’s aeronautics unit for its deficient earned value management system. In total, fourteen companies were affected by withholds last year.

For those unfamiliar with the issue, these withholds may seem a reasonable enforcement mechanism for ensuring that adequate business systems are in place so that contractors’ expenditure of government funds can be traced. After all, given the disastrous state of affairs in Iraq and Afghanistan, where there was a massive loss of accountability by contractors, many senior personnel in DoD felt that contracting officers needed to be given teeth, and what better way to do this than through financial withholds? The rationale is that if the systems are not adequate, then the information originating from those systems is not credible.

This is probably a good approach for the acquisition of wartime goods and services, but it doesn’t seem to fit the reality of the project management environment in which government contracting operates. The strongest objections to the rule, I think, came from the legal community, most notably from the American Bar Association’s Section of Public Contract Law. Among these objections was that the amount of the withhold is based on an arbitrary percentage set within the DFARS rule. Another was that the defects in the systems are in most cases disconnected from actual performance, and so the withholds redirect attention and resources away from the contractual obligation at hand.

These objections were made prior to the rule’s adoption. But now that the rule is being enforced, the more important question is the effect of the withholds on project management. My own anecdotal experience as a business manager on a program management staff is that project success is oftentimes determined by cash flow. While factors internal to the project, such as the effective construction of the integrated master schedule (IMS), the performance measurement baseline (PMB), risk identification and handling, and performance tracking against these plans, are the primary focus of project integrity, all too often the underlying financial constraints within which the project must operate are treated as a contingent factor. If financial constraints severely limit our capabilities, then the best plan in the world will not achieve the desired results because it fails to be realistic.

The principles that apply to any entrepreneurial enterprise also apply to complex projects. It is true that large companies have significant cash reserves and that these reserves have grown significantly since the 2007-2010 depression. But a major program requires an infusion of resources large enough to constitute a significant barrier to entry, and such reserves contribute to the financial stability necessary to undertake these efforts. Profit is not realized on every project. This may sound surprising to those unfamiliar with public administration, but it is sometimes worth breaking even or taking a slight loss so as not to lose essential organizational knowledge. It takes years to develop an engineer who understands the interrelationships of the key factors in a jet fighter: the tradeoffs between speed, maneuverability, weight, and stress from the operational environment, such as taking off from and landing on a large metal aircraft carrier that travels on salt water. This cannot be taught in college, nor can it be replaced if the knowledge is lost to budget cuts, pay freezes, and sequestration. Oftentimes, because of their size and complexity, project start-up costs must be financed using short-term loans, adding risk when payments are delayed and work is interrupted. The withhold rule adds an additional, if not unnecessary, dimension of risk to project success.
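To make the cash-flow point concrete, here is a minimal sketch of how a payment withhold that must be bridged with short-term borrowing translates into a carrying cost for the contractor. All of the figures in it (the withhold percentage, the monthly billings, the borrowing rate, and the correction period) are hypothetical assumptions for illustration, not values taken from the DFARS rule or from any actual contract.

```python
# Hypothetical illustration of the carrying cost of a payment withhold.
# All figures (withhold percentage, monthly billings, borrowing rate,
# correction period) are assumptions for this sketch, not values from
# DFARS or from any actual contract.

def withhold_carrying_cost(monthly_billings, withhold_rate,
                           annual_borrow_rate, months_withheld):
    """Cost of bridging withheld payments with short-term borrowing."""
    monthly_rate = annual_borrow_rate / 12.0
    financed_balance = 0.0
    interest_paid = 0.0
    for _ in range(months_withheld):
        financed_balance += monthly_billings * withhold_rate  # cash not received this month
        interest_paid += financed_balance * monthly_rate      # interest on the cumulative gap
    return financed_balance, interest_paid

if __name__ == "__main__":
    withheld, interest = withhold_carrying_cost(
        monthly_billings=10_000_000,   # hypothetical monthly interim billings
        withhold_rate=0.05,            # assumed withhold percentage
        annual_borrow_rate=0.06,       # assumed short-term borrowing rate
        months_withheld=12,            # assumed time to correct the deficiency
    )
    print(f"Cash withheld over the period: ${withheld:,.0f}")
    print(f"Interest paid to bridge the gap: ${interest:,.0f}")
```

Even at modest rates, that interest is money spent on the financing gap rather than on the work itself, which is the risk the paragraph above describes.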

Given that most of the artifacts deemed necessary to handle and reduce risk are produced collaboratively by the contractor-government project team through the Integrated Baseline Review (IBR) process and system validation, as well as through pre-award certifications, there seems to be no clear line of demarcation for placing the onus of inadequate business systems on the contractor. The reality of the situation, given that development work is performed under cost-plus contracts, is that industry is, in fact, a private extension of the defense infrastructure.

It is true that a line must be drawn in the contractual relationship to provide those necessary checks and balances to guard against fraud, waste, or a race to the lowest common denominator that undermines accountability and execution of the contractual obligation.  But this does not mean that the work of oversight requires another post-hoc layer of surveillance.  If we are not getting quality results from pre-award and post-award processes, then we must identify which ones are failing us and reform them.  Interrupting cash flow for inadequately assessed business systems may simply be counter-productive.

As Deming would argue, quality must be built into the process. What defines quality must also be consistent. That our systems are failing in this regard is indicative, I believe, of a failure of imagination in our digital systems, on which most business systems rely. It was fine in the first wave of microcomputer digitization in the 1980s and 1990s to simply design systems that mimicked the structure of pre-digital information specialization. Cost performance systems were built to serve the needs of cost analysts, scheduling systems were designed for schedulers, risk systems for a sub-culture of risk specialists, and so on.

To break these stovepipes, the IT industry’s response was twofold, and it constitutes the second wave of digitization of project management business processes.

One response was in many ways a step back. The mainframe culture in IT had been on the defensive since the introduction of the PC and “distributed” processing. Here was an opening to reclaim the high ground, and so expensive, hard-coded ERP, PPM, and BI systems were introduced. The lack of security in systems deployed quickly during the first wave also provided this community with a convenient stalking horse, though even the “new” systems, as we have seen, lack adequate security in the digital arms race. The ERP and BI systems are expensive and require specialized knowledge and expertise. Solutions are hard-coded, require detailed modeling, and take a significant amount of time to deploy, supporting a new generation of coders. The significant financial and personnel resources required to acquire and implement these systems, and the management reputations on the line for the decision to acquire them in the first place, have become a rationale for their continued use, even when they fail at the same high rate as all IT development projects. Thus, the tradeoff analysis between sunk costs and prospective costs is rarely performed in determining their sustainability.

The other response was to knit together the first wave’s specialized systems in “best-of-breed” configurations. In this case data is imported and reconciled between specialized systems to achieve the integration needed to serve the cross-functional nature of project management. Oftentimes the estimating, IMS, PMB, and qualitative and quantitative risk artifacts are constructed by separate specialists with little or no coordination or fidelity. These environments are characterized by workarounds, resource-heavy reconciliation teams dedicated to verifying data between systems, the expenditure of resources on fixing errors after the fact, and the development of Access- and Excel-heavy one-off solutions designed to address deficiencies in the underlying systems.
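As an illustration of the reconciliation burden these configurations create, here is a minimal sketch of the kind of one-off script such environments accumulate: it compares budget values by WBS element between a cost-system export and a schedule-system export and flags disagreements. The file names, column headings, and tolerance are hypothetical assumptions, not references to any particular tool’s format.

```python
# Hypothetical reconciliation script of the sort "best-of-breed" environments
# accumulate: compare budget by WBS element between a cost-system export and a
# schedule-system export. File names, column headings, and the tolerance are
# assumptions for illustration only.
import csv

def load_by_wbs(path, wbs_col, value_col):
    """Read a CSV export and return a {wbs_element: total_value} map."""
    totals = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            wbs = row[wbs_col].strip()
            totals[wbs] = totals.get(wbs, 0.0) + float(row[value_col])
    return totals

def reconcile(cost_file, sched_file, tolerance=1.0):
    """Flag WBS elements where the two systems disagree or one is missing."""
    cost = load_by_wbs(cost_file, "WBS", "BCWS")              # assumed column names
    sched = load_by_wbs(sched_file, "WBS_Element", "Budget")  # assumed column names
    mismatches = []
    for wbs in sorted(set(cost) | set(sched)):
        c, s = cost.get(wbs), sched.get(wbs)
        if c is None or s is None or abs(c - s) > tolerance:
            mismatches.append((wbs, c, s))
    return mismatches

if __name__ == "__main__":
    for wbs, c, s in reconcile("cost_export.csv", "schedule_export.csv"):
        print(f"{wbs}: cost system = {c}, schedule system = {s}")
```

Multiply this kind of patchwork by every pair of systems and every reporting cycle, and the overhead described above becomes plain.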

That the deficiencies cited in business-system findings mirror the shortcomings of the solutions described above suggests that the underlying information systems are largely the culprit. The solution, I think, is going to come from those portions of the digital community where the barriers to entry are low. The current crop of first- and second-wave software now in place is reaching the end of its productive life. Hoping to protect market share and stave off the inevitable, entrenched software companies, which have little incentive to drive the industry to the next phase, are deploying new delivery and business models. Instead of advancing the state of the practice, they have been marketing SaaS and cloud computing as the panacea, though the nature of the work tends to militate against acceptance of external hosting. In the end, I believe the answer is to leverage new technologies that eliminate the specialized, hard-coded nature of the first approach while still achieving integration, and that draw on the abundant historical data already accumulated under the second.

Note: The title and some portions of this post were modified from the original for clarity.