Shake it Out – Embracing the Future in Program Management – Part One: Program and Project Management in the Public Interest

I heard Florence and the Machine perform the song from which I derived the title of this post and was inspired to sit down and write about what I see as the future of program management.

Thus, my blogging radio silence has ended as I begin to process and share my observations and essential achievements over the last couple of years.

Some of my reticence in writing has been due to the continual drumbeat of both outrageous and polarizing speech that dominated our lives for four years. That drumbeat, combined with the resulting societal polarization, left me overwhelmed by a hyper-politicized environment that has fostered disinformation and dysfunction. Those who wish to read my first and current word on this subject need only visit my blog post, “In Defense of Empiricism,” at the AITS Blogging Alliance here.

It is hard to believe that I published that post four years ago. I stand by it today and believe that it remains as valid as, if not more valid than, when I wrote and shared it.

Finally, the last and most important reason for my relative silence has been that I have been hard at work putting my money and reputation where my blogging fingers have been—in the face of a pandemic that has transformed and transfigured our social and economic lives.

My company—the conduit that provides the insights I share here—is SNA Software LLC. We are a small, veteran-owned company specializing in data capture, transformation, contextualization, and visualization. We do it in a way that removes significant effort from these processes, ensures reliability and trust, incorporates off-the-shelf functionality that provides insight, and empowers the user by leveraging the power of open systems, especially in program and project management.

Program and Project Management in the Public Interest

There are two aspects to the business world that we inhabit: commercial and government; both, however, usually relate to some aspect of the public interest, which is our forte.

There are also two concepts about this subject to unpack.

The first is distinguishing between program and project management. In this concept, a program is an overarching effort that may consist of individual efforts that, together, will result in the production or completion of a system, whether that is a weapons system, a satellite, a spacecraft, or an engine. It could even be a dam or some other aspect of public works.

A project under this concept is a self-contained effort separated organizationally from the larger entity, which possesses a clearly defined start and finish, a defined and allocated budget, and a set of plans, a performance management feedback system, and overarching goals or “framing assumptions” that define what constitutes the state of being “done.”

Oftentimes the terms “program” and “project” are used interchangeably, but the difference for these types of efforts is important and goes beyond a shallow understanding of the semantics. A program also takes into account the lifecycle of the effort: the follow-on logistics, the interrelationship of the end item to other components that will constitute the deployed system or systems, and any iterative efforts relating to improvement, revision, and modernization.

The term “portfolio” is also worth a mention in the context of our theme. A portfolio is simply a summary of the projects or programs under an organizational entity that has both reporting and oversight responsibility for them. They may be interrelated or independent in their efforts, but all must report in some way, whether due to fiduciary, resource, or oversight concerns, to that overarching entity.

The second concept relates to the term “public interest.” Programs and projects under this concept are those that must address the following characteristics: legality, governance, complexity, integrity, leadership, oversight, and subject matter expertise. I placed these in no particular order.

What we call in modern times “public interest” was originally called “public virtue” by the founders of the United States, a concept that embodies the ideals of the American Revolution and upon which our experiment in democratic republicanism is built. It consists of conducting oneself in a manner in which the good of the whole—the public—outweighs personal interests and pursuits. Self-dealing need not apply.

This is no idealistic form of self-delusion: I understand, as do my colleagues, that we are, at heart, a commercial profit-making enterprise. But the manner in which we engage with government requires a different set of rules, and many of these rules are codified in law and ethical practice. While others do not always feel obliged to live by these rules, we govern ourselves and so choose to apply these virtues—and to seek to support and change our system to encourage such behavior so as to be the norm—even in direct interactions with government personnel where we feel these virtues have been violated.

Characteristics of Public Interest Programs

Thus, the characteristics outlined above apply to program and project management in the public interest in the following manner:

Legality: That Public Interest Programs are an artifact of law and statute and are specifically designed to benefit the public as a whole.

At heart, program and project management are based on contractual obligations, whether those instruments apply internally or externally. As a result, everyone involved in the program and project management discipline is, by default, part of the acquisition community and the acquisition process. The law that applies to all government acquisition systems is based on the Federal Acquisition Regulation (FAR). There are also oversight and fiduciary responsibilities that apply as a result of the need for accountability under the Congressional appropriations process, as well as ethical standards such as those under the Truth in Negotiations Act (TINA). While these statutes allow broad management flexibility, violations come with serious consequences; they thus serve as a basis for establishing hard and fast guardrails in the management of programs and projects. Individual government agencies and military services also publish additional standards that supplement the legal requirements. An example is the Department of Defense FAR Supplement (DFARS). Commercial entities that hold government contracts in relation to Program Management Offices (PMOs) must sign on to both FAR and agency contractual clauses, which then flow down to their subcontractors. Thus, the enforcement of these norms is both structured and consistent.

Governance: That the Organizational Structure and Disciplines deriving from Public Interest Programs are a result of both Contract and Regulatory Practice under the concept of Government Sovereignty.

The government and supplier PMOs are formed as a result of a contractual obligation for a particular purpose. Government contracting is unique since government entities are the sovereign. In the case of the United States, the sovereign is the elected government of the United States, which derives its legitimacy from the people of the United States as a whole. Constitutionally, the Executive Branch is tasked with the acquisition responsibility, but the manner and method of this responsibility is defined by statute.

Thus, during negotiations and unlike in commercial practice, the commercial entity is always the offeror and the United States always the party that either accepts or rejects the offer (the acceptor). This relationship has ramifications in contract enforcement and governance of the effort after award. It also allows the government to dictate the terms of the award through its solicitations. Furthermore, provisions from law establish cases where the burden for performance is on the entity (the supplier) providing the supplies and services.

Thus, the establishment of the PMO and oversight organizations has a legal basis, aside from considerations of best business practice. The details of governance within the bounds of legal guidance are those that apply through agency administrative law and regulation, oftentimes based on best business practice. These detailed practices of governance are usually established as a result of hard-learned experience: the establishment of disciplines (systems engineering and technical performance, planning, performance management, cost control, financial execution, schedule, and progress assessment), the periodicity of reporting, the manner of oversight, the manner of liaison between the supplier and government PMOs, and alignment to the organization’s goals.

Complexity: That Public Interest Programs possess a level of both technical and organizational complexity unequaled in the private sector.

Program and project management in government involves a level of complexity rarely found in similar non-governmental commercial efforts. Aligning the contractual requirements, as an example, to an assessment of the future characteristics of a fighter aircraft needed to support the U.S. National Defense Strategy, built on the assessments by the intelligence agencies regarding future threats, is a unique aspect of government acquisition.

Furthermore, while the government relies on the expertise of private industry for the systems that support national defense, as well as those that support space exploration, energy, and a host of other needs, the items being acquired, which require cost-type R&D contracts that involve program management, are by definition those where the necessary solutions are not readily available as commercial end items.

Oftentimes these requirements are built onto and extend existing off-the-shelf capabilities. But given that government investment in R&D represents the majority of this type of spending in the economy, absent it, the defense, economic, societal, climate, and space exploration challenges of the future would most likely not be met—or would be met in ways that benefit only a portion of the populace. The federal government uniquely possesses the legal legitimacy, resources, and expertise to undertake such R&D which, in pushing the envelope on capabilities, involves both epistemic and aleatory risk that can be managed through the processes of program management.

Integrity: The conduct of Public Interest Programs demands the highest level of commitment to a culture of accountability, impartiality, ethical conduct, fiduciary responsibility, democratic virtues, and honesty.

The first level of accountability resides in the conduct of the program manager, who is the locus of integrity within the program management office. This requires a focus on the duties the position demands as a representative of the Government of the United States. Furthermore, the program manager must ensure that the program team operates within the constraints established by the program’s or project’s contractual commitments, and that it continues to work toward meeting the program goals that align with the stated interests and goals of the organization. That these duties are exercised regardless of self-interest is the basis of integrity.

This is not an easy discipline, and individuals oftentimes cannot separate their own interests from those of their duties. Yet, without this level of commitment, the legitimacy of the program office and the governmental enterprise itself is threatened.

In prior years, as an active-duty Supply Corps officer, I came across cases where individuals in civil service or among the commissioned officer community confused their own interests—for promotion, for self-aggrandizement, for ego—with those duties demanded of their rank or position. Such confusions of interests are serious transgressions. With contracted-out positions within program offices adding consulting and staffing firms into the mix, with their oftentimes diversified interests and portfolios, an additional layer of challenges is presented. Self-promotion, competition, and self-dealing have all too often become blatant, and program managers would do well to enforce strict rules regarding such behavior.

The pressures of exigency are oftentimes the main cause of the loss of integrity of the program or project. Personal interrelationships and human resource management issues can also undermine good order and discipline necessary for the program or project to organize itself into a cohesive, working team that is focused on a common vision.

Key elements mentioned in our opening thesis include ethical conduct and adherence to democratic virtues, among them the acceptance of all members of the team regardless of color, ethnicity, race, sexual identity, religion, or place of national origin. People deserve the respect and decency deriving from their basic human right to human dignity, as well as the respect due their position. Added to these elements are honesty and the willingness to accept and report bad news, which are essential to integrity.

An organization committed to the principle of accountability will seek to measure and ensure that the goals of the program or project are being met, and that ameliorative measures are taken to correct any deficiencies. Since these efforts oftentimes involve years of effort involving significant sums of public monies, fiduciary integrity is essential to this characteristic.

All of these elements can and should exist in private, commercial practices. The difference that makes this a unique characteristic of program management in the public interest is the level of scrutiny, reporting, and review that is conducted: from oversight agencies within the Executive Branch of the government, to Congressional oversight, hearing, and review processes, agency review, auditing and reporting, and inquiries and critiques by the press and the public. Public interest program management is life in a fishbowl, except in the most secret efforts, and even those will eventually be subject to scrutiny.

As with a U.S. Navy ship that makes a port of call in a foreign country, the conduct of the crew reflects not only on themselves and their ship, but on the United States; so it is also with our program offices. Thus, systems of programmatic governance and business management must anticipate in their structure the level of adherence required. Given the inherent level of risk involved in these efforts, and given the normal amount of error human systems create even with good intentions and expertise, establishing a system committed to the elements of integrity creates a self-correcting one better prepared to meet the program’s or project’s challenges.

Leadership: Programs in the Public Interest differ from equivalent commercial efforts in that management systems and incentives based on profit- and shareholder-orientations do not exist. Instead, a special kind of skillset is required that includes good business management principles and skills combined with highly developed leadership traits.

Management skills tend to be a subset of leadership, though in business schools and professional courses they tend to be addressed as co-equal. This is understandable in commercial enterprises that focus on the capitalistic pressures regarding profit and market share.

Given the unique pressures imposed by the elements of integrity, the program manager and the program team are thrown into a situation that requires a focus on the achievement of organizational goals. In the case of program and project management, this will be expressed in the form of a set of “framing assumptions” that roll into an overarching vision.

A program office, of course, is more than a set of systems, practices, and processes. It is, first and foremost, a collection of individuals consisting of subject matter experts and professionals who must be developed into a team committed to the vision. The effort to achieve this team commitment is one of the more emotional and compelling elements that comprise leadership.

Human systems are complex, adaptive ones that react to, and are shaped by, both incentives and sanctions. Every group, especially one involving creative and talented people, starts out as a collection of individuals with the interrelations among the members in an immature state. Underlying the expression of various forms of ambition and self-identification among mature individuals is the basic human need for social acceptance, born from the individual personal need for love. This motivation exists psychologically in all individuals except for sociopaths. It is also the basis for empathy and the acceptance of the autonomy of others, which form the foundation for team building.

The goal of the leader is to encourage maturity among the members of the group. The result is to create that overused term “synergy.” This is accomplished by doing those things as a leader that develop the members of the group and foster trust, acceptance, and mutual respect. Admiral James L. Holloway, Jr., in his missive on Naval Leadership, instructed his young officers to eschew any concept of perfectionism in people. People make mistakes. We know this if we are to be brutally honest about our own experiences and actions.

Thus, intellectual honesty and an understanding of what motivates people within their cultural mores are, above all else, essential to good leadership. Americans, by nature, tend to be skeptical and independently minded. They require the level of explanation and due diligence necessary to win their commitment to a goal or vision. When it comes to professionals operating within public service in government—who take an oath to the Constitution and our system of laws—the ability to lead tends to be more essential than good management skills alone, though the latter are by no means unimportant. Management in private enterprise assumes a contentious workplace of competing values and interests, and oftentimes fosters it.

Program and project management in the public interest cannot succeed in such an environment. It requires a level of commitment to the goals of the effort regardless of personal values or interests among the individual members of the team. That they must be convinced to this level of commitment ensures that the values of leadership not only operate at the top of the management chain, but also at each of the levels and lateral relationships that comprise the team.

The shorthand for leadership in this culture is that the leader is “working their way out of their job,” and “that in order to be a good leader one must be a good follower,” meaning that all members of the team are well informed; that their contributions, expertise, and knowledge are acknowledged and respected; that individual points of failure through the irreplaceable-person syndrome are minimized; and that each member of a team or sub-team can step in or step up to keep the operation functioning. The motivating concept in these situations is the interest of the United States, in lieu of a set of stockholders or some fiduciary reward.

Finally, there is the concept of the burden of leadership. Responsibility can be delegated, but accountability cannot. Leadership in this context entails an obligation to take responsibility for both the mission of the organization and the ethical atmosphere established in its governance.

Oversight: While the necessity for integrity anticipates the level of accountability, scrutiny, oversight, and reporting for Programs in the Public Interest, the environment this encompasses is unique compared to commercial entities.

The basis for acquisition at the federal level resides in the Article Two powers of the president as the nation’s Chief Executive. Congress, however, under its Article One powers, controls appropriations and passes laws related to the processes, procedures and management of the Executive Branch.

Flowing from these authorities, the agencies within the federal government have created offices for the oversight of the public’s money, the methods of acquisition of supplies and services, and the management of contracts. Contracting Officers are given authority through a warrant to exercise their acquisition authority under the guidance and management of a senior acquisition authority.

Unlike in private business, the government operates under the concept of Actual Authority. That is, no one may commit the government except those possessing a warrant. Program Managers are appointed to provide control and administration of cost type efforts, especially those containing R&D, to shepherd these efforts over the course of what usually constitutes a multi-year effort. The Contracting Officer and/or the senior acquisition authority in these cases will delegate contract administration authority to the Program Manager. As such, it is a very powerful position.

The inherent powers of the Executive and Legislative Branches of government create a tension that is resolved through a separation of powers and the ability of one branch to—at least in most cases—check the excesses and abuses of the other: the concept of checks and balances, especially through the operation of oversight.

When these tensions cannot be resolved within the processes established for separation of powers, the third branch of government becomes involved: this is the Judicial Branch. The federal judiciary has the ability to review all laws of the United States, their constitutionality, and their adherence to the letter of the law in the case of statute.

Wherever power exists within the federal government there exist systems of checks and balances. The reason for this is clear, and Lord Acton’s warning about power corrupting and absolute power corrupting absolutely is the operational concept.

Congress passes statutes and the Judiciary interprets the law, but it is up to the Executive Branch through the appointed heads of the various departments of government down through the civil service and, in the case of the Department of Defense, the military chain of command under civilian authority, to carry out the day-to-day activities in executing the laws and business of the government. This creates a large base of administrative law and procedure.

Administrative Law and the resulting procedures in their implementation come about due to the complexities in the statutes themselves, the tests of certain provisions of the statutes in the interplay between the various branches of government, and the practicalities of execution. This body of law and procedure is oftentimes confused with “regulation” in political discussions, but it is actually the means of ensuring that the laws are faithfully executed without undue political influence. It is usually supplemented by ethical codes and regulations as well.

As a part of this ecosystem, the Program in the Public Interest must establish a discipline related to self-regulation, due diligence, good business practice, fiduciary control, ethical and professional conduct, responsibility, and accountability. Just as the branches of the federal government are constructed to ensure oversight and checks-and-balances, this also exists with normative public administration within the Executive Branch agencies.

This is often referred to, both positively and, mostly among political polemicists, negatively, as the bureaucracy. The development of bureaucracies in government is noted by historians and political scientists as an indication of political stability, maturity, and expertise. Without bureaucracies, governments tend to be capricious and their policies uncertain. The practice of stare decisis—the importance of precedent in legal decisions—is also part and parcel of stability. Government power can be beneficial or coercive. Resting action on laws, and not on the whims or desires of the individual person, is essential to the good order and discipline of the federal government.

As such, program and project managers, given the extensive latitude and inherent powers of their position, are subject to rigorous reporting, oversight, and accountability regimes in the performance of their duties. In R&D cost-type program and project management efforts, the risk is shared between the supplier and the government. And the government flows down this same regime to the contractor to ensure the integrity of the effort in the expenditure of public monies and in the performance and delivery of public contracts.

This leads us to the last important aspect of oversight: public scrutiny, which also includes the press as the Fourth Estate. When I was a young Lieutenant in the Navy working in contracts, the senior officer to whom I was assigned often remarked: “Never do anything that would cause you to be ashamed were it to end up being read by your grandmother in the Washington Post.”

Unlike private business, where law, contractual obligation, and fiduciary responsibility are the main pressures on tolerated behavior, the government and its actions are—and must be—under constant public scrutiny. It is expected. Senior managers who chafe at this check on official conduct misunderstand their role. Even the appearance of malfeasance or abuse can cause one to steer into the rocks and shoals.

Subject Matter Expertise: Given the interrelated characteristics of legality, governance, complexity, integrity, leadership, and oversight—linked to the development of a professional, permanent bureaucracy acting through a non-partisan civil service—the practices necessary to successfully shepherd such efforts have produced areas of expertise and specialization. These areas provide a basis for leveraging technology in gaining insight into meeting all of the requirements necessary to the good administration and control of Program Management in the Public Interest.

The structures and practices of program and project management are reflected in the private economy. Some of this is contractually prescribed and some of it is based on best business practice learned through hard experience. In the interplay of government and industry, most often an innovation in one has been refined and improved in the other, only to find its way back to practice on the originating “side” of the transaction.

Initially in our history this cross-fertilization occurred through extraordinary wartime measures: the standardization of rifled weaponry advanced by Thomas Jefferson and Eli Whitney, and the railroad track gauge standards issued by the Union government during the Civil War, are just two examples that turned out to provide a decisive advantage over laissez-faire and libertarian approaches.

As the complexity of private business concerns, particularly in the international sphere, began to mimic—and in many cases surpass—the size and technical complexity of many individual government efforts, partnerships with civil authorities and private businesses saw the need for industry standardization for both electrical and non-electrical components and processes. The former was particularly important in the “Current Wars” between Edison and Westinghouse.

These simple, early examples highlight the great conundrum of standardization of supply, practice, and procedure in acquisition: the need for economy through competition among many sources for any particular commodity or item, weighed against the efficiency and interoperability needed to continue operations. Buying multiple individual items with the same function but produced using differing standards creates a nightmare of suboptimization. Overly restrictive standards can and have had the effect of reducing competition and stifling innovation, especially if the standard is proprietary.

In standards setting there are several interests involved that must be taken into account: the technical expertise (technical, qualitative, etc.) that underlies the standard, the public interest in ensuring a healthy marketplace that rewards innovation, diversity, and price competitiveness, the need for business-to-business cooperation and synergy in the marketplace, and the preponderance of practice, among others. In the Defense industry this also includes national security concerns.

This last consideration provides an additional level of tension between private industry and government interests. In the competition for market share and market niches, businesses are playing a zero-sum game that shifts between allies and competitors. Still, the interest of individual actors is focused on making a proprietary product or service dominant in the target market.

Government, on the other hand, particularly one that operates as a republic based on democratic processes and virtues and a commitment to equal rights, has a different set of interests that are, in many cases, diametrically opposed to those of individual players in the marketplace. Government needs and desires a broad choice of sources for what it needs, while ensuring that qualitative standards are met at a fair and reasonable price. When it does find innovation, it seeks to reward it, but only for the limited terms, conditions, and period of the contractual instrument.

The greater the risk in these cases—especially when cost risk is shared—the greater the need for standards, especially qualitative ones. The longer the term of the effort, the greater the need for checks and balances through evaluation, review, and oversight. The greater the dollar value, the greater importance for fiduciary and contractual accountability.

Thus, subject matter expertise has evolved over time, aligned with the functions and end items being developed and delivered. These areas include:

Estimating – A critical part of program and project management, this is a discipline with highly specialized quantitative methods for estimating and projecting project costs, resources, and duration. It is part of the planning phase prior to program or project inception. It can be used to support budget planning prior to program approval, during negotiations and, after award, to inform the project plan.
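As a minimal sketch of one such quantitative method, the fragment below (written in Python, with hypothetical task names and durations of my own invention, not drawn from any real program) applies a simple three-point, PERT-style weighting to roll up an overall duration estimate and a rough spread:

# Three-point (PERT) estimating sketch; tasks and values are hypothetical and illustrative only.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Return the PERT weighted mean and an approximate standard deviation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

tasks = {
    "requirements analysis": (3, 5, 10),   # durations in weeks: (optimistic, most likely, pessimistic)
    "prototype build":       (8, 12, 20),
    "integration and test":  (6, 9, 16),
}

total_mean = 0.0
total_variance = 0.0
for name, (o, m, p) in tasks.items():
    mean, sd = pert_estimate(o, m, p)
    total_mean += mean
    total_variance += sd ** 2              # variances add for independent tasks
    print(f"{name}: {mean:.1f} weeks (sigma ~ {sd:.1f})")

print(f"Total: {total_mean:.1f} weeks, sigma ~ {total_variance ** 0.5:.1f}")

The same pattern extends naturally to cost and resource estimates; the point is that the discipline rests on explicit, reviewable quantitative assumptions rather than gut feel.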

Systems Engineering – as described by the International Council of Systems Engineering, “a transdisciplinary and integrative approach to enable the successful realization, use, and retirement of engineered systems, using systems principles and concepts, and scientific, technological, and management methods.”

As it relates to program and project management, systems engineering provides the technical documents that form the basis and structure of the lifecycle management of the end item application, including the application of technical standards, measures of effectiveness, measures of performance, key performance parameters, and technical performance measures. In simplistic terms, systems engineering defines when the item under R&D reaches the state of “done.”

Financial Management – at the program and project management level, the planning, organizing, directing, and controlling of financial activities, such as the procurement and utilization of funds, to adhere to the limitations of law and remain consistent with the terms and conditions of the contract and its ancillary planning and execution documents.

At its core, financial management within this discipline includes the planning, programming, budgeting, and execution process for the financial requirements of successful program execution. As with any individual enterprise, ensuring cashflow for required activities with the right type of money, as determined by Congressional appropriation, demands a unique and specialized skillset under program management in the public interest. Oftentimes the lack of funds necessary to address a particular programmatic risk or challenge can be just as decisive to program execution and success as any technical challenge.

Risk and Uncertainty – the concepts of risk and uncertainty have evolved over time. Under classical economics (both Keynes and Knight), risk is where all of the future events and consequences of an action are known, but where specific outcomes are unknown. As such, probability calculus is applied to inform risk management: mitigation and handling. Uncertainty, under this definition, concerns unknowable events that will result from our actions and is implicit in human action. There is no probability calculus or risk buy-down that can address areas of uncertainty. These definitions are also accepted under the concept of complexity economics.

My good colleague Glen Alleman (2013) at his blog, Herding Cats, casts risk as a product of uncertainty. This is a reordering of definitions, but not unuseful. Under Glen’s approach, uncertainty is broken into aleatory and epistemic uncertainty. The first—aleatory—comes from a random process, what Keynes, Knight, et al. would define as classical uncertainty. The second—epistemic—comes from lack of knowledge. The first is irreducible, which is consistent with classical economics and complexity economics; the second is subject to probability analysis and risk handling methodologies.

Both risk and uncertainty—aleatory and epistemic—occur within all phases and under each discipline within the project management environment. Any human action involves these forces of cause-and-effect and uncertainty, and they limit our actions under the concept of “free will.”
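As a minimal sketch of how this distinction plays out in practice, the Monte Carlo fragment below (Python, with hypothetical distributions and probabilities of my own choosing) models aleatory variability with a probability distribution and an epistemic risk as a discrete event whose likelihood could, in principle, be reduced by buying knowledge:

# Monte Carlo sketch of aleatory vs. epistemic uncertainty; all parameters are illustrative.
import random

random.seed(1)

def simulate_task_duration():
    # Aleatory uncertainty: irreducible variability, modeled here with a triangular distribution.
    aleatory = random.triangular(low=9, high=18, mode=12)      # weeks
    # Epistemic uncertainty: lack of knowledge (e.g., an unproven interface), modeled as a
    # discrete risk that either materializes or not; prototyping or testing could reduce it.
    epistemic_penalty = 4 if random.random() < 0.3 else 0
    return aleatory + epistemic_penalty

samples = sorted(simulate_task_duration() for _ in range(10_000))
print("P50 duration:", round(samples[len(samples) // 2], 1), "weeks")
print("P80 duration:", round(samples[int(len(samples) * 0.8)], 1), "weeks")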

Planning and Scheduling – usually these have been viewed as separate entities, but they are, in fact, part of a continuum, as are all of the disciplines mentioned, but more on that later in these blogs.

Planning involves the ability to derive a plan from the products of both the contract terms and conditions and the systems engineering process. The purpose is to develop a high-level, time-phased plan that captures program events, deliverables, requirements, significant accomplishment criteria, and basic technical performance management achievement that will be the basis for a more detailed integrated master schedule.

The scheduling discipline is tasked with further delineating the summary tasks into schedule activities based on critical path methodology. A common refrain when I worked on the government side of program management was that you cannot eat an elephant in one gulp: you have to eat it one piece at a time.

As it relates to this portion of project methodology, I have, over the years, heard people say that planning and scheduling is more of an art than a science. Yet the artifacts upon which our planning documents rest exist as part of the acquisition process, and our systems and procedures are mature and largely standardized. The methods of systems engineering are precise and consistent.

The lexicon of planning and scheduling, regardless of the software applications or manual methods used, describe the same phenomenon and concepts, despite slightly different—and oftentimes proprietary—terminology. The concept of critical path analysis is well documented in the literature with slight, though largely insignificant, differences in application.

What appears as art is, in reality, a process that involves a great deal of complexity because these are the documents upon which all of the moving parts of the program are documented. Rather than art, it is a discipline that requires attention to detail and collaboration, aside from the power of computing.
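To make the point concrete, here is a minimal critical path sketch (Python, over a hypothetical four-activity network of my own invention) showing the forward pass that computes early finishes and total project duration; real integrated master schedules apply the same logic across thousands of activities:

# Forward pass of critical path methodology over a hypothetical activity network.
activities = {
    "A": (4, []),          # detailed design: duration in weeks, no predecessors
    "B": (6, ["A"]),       # fabrication
    "C": (3, ["A"]),       # test planning
    "D": (5, ["B", "C"]),  # integration and test
}

early_finish = {}

def finish(act):
    """Early finish = maximum predecessor early finish plus the activity's own duration."""
    if act not in early_finish:
        duration, preds = activities[act]
        early_finish[act] = duration + max((finish(p) for p in preds), default=0)
    return early_finish[act]

project_duration = max(finish(a) for a in activities)
print("Early finish per activity:", early_finish)
print("Project duration (weeks):", project_duration)

A backward pass would then yield late dates and float, from which the critical path falls out; none of it is art, merely bookkeeping applied consistently.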

Resource Management – as with planning and scheduling, resource management consists of a detailed accounting of the people, equipment, monies, and suppliers that are required to achieve the activities detailed in the program schedule.

In the detailed and specialized planning of projects and programs in the public interest, these efforts are cross-referenced and further delineated to the actual work that needs to be completed. A Work Breakdown Structure (WBS) is the method of time-phasing the work using detailed tasks that integrate scope, cost, and schedule at the lowest level of achievement.
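As a minimal sketch of that integration, the fragment below (Python, with entirely hypothetical WBS elements, budgets, and dates) attaches scope, cost, and schedule at the work package level and rolls the budget up the hierarchy:

# Hypothetical WBS fragment: scope, cost, and schedule integrated at the work package level.
wbs = {
    "1.0 Air Vehicle": {
        "1.1 Airframe": {
            "work_packages": [
                {"id": "1.1.1", "scope": "wing assembly", "budget": 2_500_000,
                 "start": "2024-01", "finish": "2024-06"},
                {"id": "1.1.2", "scope": "fuselage assembly", "budget": 3_100_000,
                 "start": "2024-02", "finish": "2024-08"},
            ]
        }
    }
}

def rollup(node):
    """Sum work package budgets up through the WBS hierarchy."""
    if isinstance(node, list):
        return sum(wp["budget"] for wp in node)
    return sum(rollup(child) for child in node.values())

print("Rolled-up budget:", rollup(wbs))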

Baselining and Performance Management – are essential for project control in this environment. In this case, project and program schedule, cost, and resources are (ideally) risk adjusted and a performance management baseline is established: the basis for the assessment and control of the project.

This leads us to the methodology that is always on the cusp of being the Ozymandias of program management: earned value management or EVM. The discipline of EVM arose out of the Space Age era of the 1960s. The premise is simple: when undertaking any complex effort there is a finite amount of money and resources, and a target date for the needed end item. We need a method to determine whether the actual work performed in terms of budgeted resources and time is tracking to the plan to produce the desired end item application.

When looking at the utility of EVM, one must ask: while each of the disciplines noted above also track achievement over the lifecycle of the project or program, do any combine an analysis against budgeted time and resources? The answer is no, and so EVM is essential to management of these efforts.
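As a minimal illustration of that combination, the sketch below (Python, using the standard earned value formulas with hypothetical status-period figures) computes the familiar variances and indices that tie the work accomplished to the time and money budgeted for it:

# Earned value sketch: BCWS/BCWP/ACWP naming follows common EVM usage; figures are illustrative.
bcws = 1_200_000   # planned value: budgeted cost of work scheduled, in dollars
bcwp = 1_050_000   # earned value: budgeted cost of work performed
acwp = 1_300_000   # actual cost of work performed
bac  = 8_000_000   # budget at completion

schedule_variance = bcwp - bcws          # negative means behind schedule
cost_variance     = bcwp - acwp          # negative means over cost
spi = bcwp / bcws                        # schedule performance index
cpi = bcwp / acwp                        # cost performance index
eac = bac / cpi                          # one common independent estimate at completion

print(f"SV = {schedule_variance:,.0f}  CV = {cost_variance:,.0f}")
print(f"SPI = {spi:.2f}  CPI = {cpi:.2f}  EAC = {eac:,.0f}")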

Still, our other disciplines also track important information that is not captured by EVM. Thus, the entire corpus of our disciplines represents the project and program ecosystem. These processes, procedures, and the measures derived from them are interconnected. It is this salient fact that points us in the direction regarding the future of program management.

Conclusions from Part One

Given that we have outlined the unique and distinctive characteristics of public interest program management, the environment and basis upon which such program management rests, and the highly developed disciplines that have evolved as a result of the experience in system development, deployment, and lifecycle management, our inquiry must next explore the evolutionary nature of the program organization itself. Once identified and delineated, we must then determine the place of program organization within the context of developments in systems and information theory which will give us insight into the future of program management.

Back to School Daze Blogging–DCMA Investigation on POGO, DDSTOP, $600 Ashtrays, and Epistemic Sunk Costs

Family summer visits and trips are in the rear view–as well as the simultaneous demands of balancing the responsibilities of a, you know, day job–and so it is time to take up blogging once again.

I will return to my running topic of Integrated Program and Project Management in short order, but a topic of more immediate interest concerns the article that appeared on the website for pogo.org last week entitled “Pentagon’s Contracting Gurus Mismanaged Their Own Contracts.” Such provocative headlines are part and parcel of organizations like POGO, which have an agenda that seems to cross the line between reasonable concern and unhinged outrage with a tinge of conspiracy mongering. But the content of the article itself is accurate and well written, if also somewhat rife with overstatement, so I think it useful to unpack what it says and what it means.

POGO and Its Sources

The article draws on three sources regarding an internal Defense Contract Management Agency (DCMA) IT project known as the Integrated Workflow Management System (IWMS): a September 2017 preliminary investigative report, an April 2018 internal memo, and a draft of the final report.

POGO begins the article by stating that DCMA administers over $5 trillion in contracts for the Department of Defense. The article erroneously asserts that it also negotiates these contracts, apparently not understanding the process of contract oversight and administration. The cost of IWMS was apparently $46.6M and the investigation into the management and administration of the program was initiated by the then-Commander of DCMA, Lieutenant General Wendy Masiello, shortly before she retired from the government in May 2017.

The implication here, given the headline, seems to be that if there is a problem in internal management within the agency, then that would translate into questioning its administration of the $5 trillion in contract value. I view it differently, given that I understand that there are separate lines of responsibility in the agency that do not overlap, particularly in IT. Of the $46.6M there is a question of whether $17M in value was properly funded. More on this below, but note that, to put things in perspective, $46.6M is 0.000932% of DCMA’s oversight responsibility. This is aside from the fact that the comparison is not quite correct, given that the CIO had his own budget, which was somewhat smaller and unrelated to the $5 trillion figure. But I think it important to note that POGO’s headline and the introduction of figures, while sounding authoritative, are irrelevant to the findings of the internal investigation and draft report. This is a scare story using scare numbers, particularly given the lack of context. I had some direct experience in my military career with issues inspired by POGO’s founders’ agenda that I will cover below.

In addition to the internal investigation on IWMS, there was also an inspector general (IG) investigation of thirteen IT services contracts that resulted in what can only be described as pedestrian procedural discrepancies that are easily correctable, despite the typically overblown language found in most IG reports. Thus, I will concentrate in this post on the more serious findings of the internal investigation.

My Own Experience with DCMA

A note at this point on full disclosure: I have done business with and continue to do business with DCMA, both as a paid supplier of software solutions and through interactions with DCMA personnel at publicly attended professional forums and workshops. I have no direct connection, as far as I am aware, to the IWMS program, though given that the assessment concerns the IT organization, it is possible that there was an indirect relationship. I have met Lieutenant General Masiello and dealt with some of her subordinates not only during her time at DCMA, but also in some of her previous assignments in the Air Force. I always found her to be an honest and diligent officer and respect her judgment. Her distinguished career speaks for itself. I have talked on the telephone to some of the individuals mentioned in the article on unrelated matters, and was aware of their oversight of some of my own efforts. My familiarity with all of them was both businesslike and brief.

As a supplier to DCMA, my own contracts and the personnel that administer them were, from time to time, affected by the fallout from what I now know to have occurred. Rumors have swirled in our industry regarding the alleged mismanagement of an IT program in DCMA, but until the POGO article, the reasons for things such as a temporary freeze and review of existing IT programs and other actions were viewed as part and parcel of managing a large organization. I guess the explanation is now clear.

The Findings of the Investigation

The issue at hand largely concerns the method of source selection, which may have constituted a conflict of interest, and the type of money that was used to fund the program. In reading the report I was reminded of what Glen Alleman recently wrote in his blog entitled “DDSTOP: The Saga Continues.” The acronym DDSTOP means: Don’t Do Stupid Things On Purpose.

There is actually an economic behavioral principle for DDSTOP that explains why people make and double down on bad decisions and irrational beliefs. It is called epistemic sunk cost. It is what causes people to double down in gambling (to the great benefit of the house), to persist in mistaken beliefs, and, as stated in the link above, to “persist with the option which they have already invested in and resist changing to another option that might be more suitable regarding the future requirements of the situation.” The findings seem to document a situation that fits this last description.

In going over the findings of the report, it appears that the IWMS program violated the following:

a. The use of O&M (Operations and Maintenance) funds for contractual efforts in the program that were appropriate for Research, Development, Test and Evaluation (RDT&E) funds. This is what the U.S. Department of Defense calls a “color of money” issue.

b. Amounts that were expended on contract that exceeded the authorized funding documents, which is largely based on the findings regarding the appropriate color of money. This would constitute a serious violation known as an Anti-Deficiency Act violation which, in layman’s terms, is directed to punish public employees for the misappropriation of government funds.

c. Expended amounts of O&M that exceeded the authorized levels.

d. Poor or non-existent program management and cost performance management.

e. Inappropriate contracting vehicles that, taken together, sidestepped more stringent oversight, aside from the award of a software solutions contract to the same company that defined the agency’s requirements.

Some of these findings are procedural, and some, particularly the Anti-Deficiency Act (ADA) violations, are serious. In the Contracting Officer’s rulebook, you can withstand pedestrian procedural and administrative findings that are part and parcel of running an intensive contracting organization that acquires a multitude of supplies and services under deadline. But an ADA violation is the deadly one, since it is a violation of statute.

As a result of these findings, the recommendation is for DCMA to lose acquisition authority above the DoD micro-purchase level ($10,000). Organizationally and procedurally, this is a significant and mission-disruptive recommendation.

The Role and Importance of DCMA

DCMA performs an important role in contract compliance and oversight to ensure that public monies are spent properly and for the intended purpose. They perform this role mostly on contracts that are negotiated and entered into by other agencies and the military services within the Department of Defense, where they are assigned contract administration duties. Thus, the fact that DCMA’s internal IT acquisition systems and procedures were problematic is embarrassing.

But some perspective is necessary because there is a drive by some more extreme elements in Congress and elsewhere that would like to see the elimination of the agency. I believe that this would be a grave mistake. As John F. Kennedy is quoted as having said: “You don’t tear your fences down unless you know why they were put up.”

For those of you who were not around prior to the formation of DCMA or its predecessor organization, the Defense Contract Management Command (DCMC), it is important to note that the formation of the agency is a result of acquisition reform. Prior to 1989 the contract administration services (CAS) capabilities of the military services and various DoD offices varied greatly in capability, experience, and oversight effectiveness. Some of these duties had been assigned to what is now the Defense Logistics Agency (DLA), but major acquisition contracts remained with the Services.

For example, when I was on active duty as a young Navy Supply Corps Officer as part of the first class that was to be the Navy Acquisition Corps, I was taught cradle-to-grave contracting. That is, I learned to perform customer requirements development, economic analysis, contract planning, development of a negotiating position, contract negotiation, and contract administration–soup to nuts. The expense involved in developing and maintaining the skill set required of personnel to maintain such a broad-based expertise is unsustainable. For analogy, it is as if every member of a baseball club must be able to play all nine positions at the same level of expertise; it is impossible.

Furthermore, for contract administration a defense contractor would have contractual obligations for oversight in San Diego, where I was stationed, that were different from contracts awarded in Long Beach or Norfolk or any of the other locations where a contracting office was located. In addition, the military services, having their own organizational cultures, provided further variations that created a plethora of unique requirements, adding cost, duplication, inconsistency, and inter-organizational conflict.

This assertion is more than anecdotal. A series of studies were commissioned in the 1980s (the findings of which were subsequently affirmed) to eliminate duplication and inconsistency in the administration of contracts, particularly major acquisition programs. Thus, DCMC was first established under DLA and subsequently became its own agency. Having inherited many of the contracting field offices, the agency has struggled to consolidate operations so that CAS is administered in a consistent manner across contracts. Because contract negotiation and program management still reside in the military services, there is a natural point of conflict between the services and the agency.

In my view, this conflict is a healthy one, as all power in the hands of a single individual, such as a program manager, would lead to more fraud, waste, and abuse, not less. Internal checks and balances are necessary in proper public administration, where some efficiency is sacrificed to accountability. It is not just the goal of government to “make the trains run on time”, but to perform oversight of the public’s money so that there is accountability in its expenditure, and integrity in systems and procedures. In the case of CAS, it is to ensure that what is being procured actually gets delivered in conformance to the contract terms and conditions designed to reduce the inherent risk in complex acquisition programs.

In order to do its job effectively, DCMA requires innovative digital systems to allow it to perform its CAS function. As a result, the agency must also possess an acquisition capability. Given the size of the task at hand in performing CAS on over $5 trillion of contract effort, the data involved is quite large and the personnel geographically distributed. The inevitable comparisons to private industry will arise, but few companies in the world have to perform this level of oversight on such a large economic scale, which includes contracts comprising every major supplier to the U.S. Department of Defense and involves detailed knowledge of the management control systems of those companies that receive the taxpayer’s money. Thus, this is a uniquely difficult job. When one understands that in private industry the standard failure rate of IT projects is more than 70 percent, then one cannot help but be unimpressed by these findings, given the challenge.

Assessing the Findings and Recommendations

There is a reason why internal oversight documents of this sort stay confidential–they are preliminary, draft findings, and there are two sides to every story, which may lead to revisions. In addition, reading these findings without the appropriate supporting documentation can lead one to the wrong impressions and conclusions. But it is important to note that this was an internally generated investigation. The checks and balances of management oversight that should occur did occur. But let’s take a close look at what the reports indicate so that we can draw some lessons. I also need to mention here that POGO’s use of the specific issues in this program as a “poster child” for cost overruns and schedule slippage displays a vast ignorance of DoD procurement systems on the part of the article’s author.

Money, Money, Money

The core issue in the findings revolves around the proper color of money, which seems to hinge on the definition of Commercial-Off-The-Shelf (COTS) software and the effort that was expended using the two main types of money that apply to the core contract: RDT&E and O&M.

Let’s take the last point first. It appears that the IWMS effort consisted of a combination of COTS and custom software. This would require acquisition, software familiarization, and development work. It appears that the CIO was essentially running a proof-of-concept to see what would work, and then incrementally transitioned to developing the solution.

What is interesting is that there is currently an initiative in the Department of Defense to do exactly what the DCMA CIO did as part of his own initiative in introducing a new technological approach to create IWMS. It is called Other Transactional Authority (OTA). The concept didn’t exist and was not authorized until the 2016 NDAA and is given specific statutory authority under 10 U.S.C. 2371b. This doesn’t excuse the actions that led to the findings, but it is interesting that the CIO, in taking an incremental approach to finding a solution, also did exactly what was recommended in the 2016 GAO report that POGO references in their article.

Furthermore, as a career Navy Supply Corps Officer, I have often gotten into esoteric discussions in contracts regarding the proper color of money. Despite the assertion of the investigation, there is a lot of room for interpretation in the DoD guidance, not to mention a stark contrast in interpreting the proper role of RDT&E and O&M in the procurement of business software solutions.

When I was on the NAVAIR staff and at OSD, I ran into the difference in military service culture where what Air Force financial managers often specified as RDT&E would never be approved by Navy financial managers, who specified that only O&M dollars applied, regardless of whether development took place. Given that there was an Air Force flavor to the internal investigation, I would be interested to know whether the opinion of the investigators in making an ADA determination would withstand objective scrutiny among a panel of government comptrollers.

I am certain that, given the differing mix of military and civil service cultures at DCMA–and the mixed colors of money that applied to the effort–legal review was sought to resolve the issue. One of the principles of law is that when you rely upon legal advice in taking an action, you have a defense, unless your state of mind and the corollary actions that you took indicate that you manipulated the system to obtain a result showing that you intended to violate the law. I just do not see that here, based on what has been presented in the materials.

It is very well possible that an inadvertent ADA violation occurred by default because of an improper interpretation of the use of the monies involved. This does not rise to the level of a scandal. But going back to the confusion that I have faced from my own experiences on active duty, I certainly hope that this investigation is not used as a precedent to review all contracts under the approach of accepting a post-hoc alternative interpretation by another individual who just happens to be an inspector long after a reasonable legal determination was made, regardless of how erroneous the new expert finds the opinion. This is not an argument against accountability, but absent corruption or criminal intent, a legal finding is a valid defense and should stand as the final determination for that case.

In addition, this interpretation of RDT&E vs. O&M relies upon an interpretation of COTS. I daresay that even those who throw that term around, and who are familiar with the FAR, do not fully understand what constitutes COTS when the line between adaptability and point solutions is being blurred by new technology.

Where the criticism is very much warranted are those areas where the budget authority would have been exceeded in any event–and it is here that the ADA determination is most damning. It is one thing to disagree on the color of money that applies to different contract line items, but it is another to completely lack financial control.

Part of the reason for the lack of financial control was the absence of good contracting practices and the failure to impose program management.

Contracts 101

While I note that the CIO took an incremental approach to IWMS–what a prudent manager would seem to do–what was lacking was a cohesive vision and a well-informed culture of compliance to acquisition policy that would avoid even the appearance of impropriety and favoritism. Under the OTA authority that I reference above as a new aspect of acquisition reform, the successful implementation of a proof-of-concept does not guarantee the incumbent provider continued business–salient characteristics for the solution are publicized and the opportunity advertised under free and open competition.

After all, everyone has their favorite applications and, even inadvertently, an individual can act improperly because of selection bias. The procurement procedures are established to prevent abuse and favoritism. As a solution provider I have fumed quite often where a selection was made without competition based on market surveys or use of a non-mandatory GSA contract, which usually turn out to be a smokescreen for pre-selection.

There are two areas of fault on IWMS from the perspective of acquisition practice, and another in relation to program management.

These are the initial selection of Apprio, which had laid out the initial requirements and subsequently failed to provide the required integration functionality, and then the selection of Discover Technologies under a non-mandatory GSA Blanket Purchase Agreement (BPA) contract in a sole source action. Furthermore, the contract type was not appropriate to the task at hand, and the arbitrary selection of Discover precluded the agency from finding a better solution more fit to its needs.

The use of the GSA BPA allowed managers, however, to essentially split the requirements to stay below more stringent management guidelines–an obvious violation of acquisition regulation that will get you removed from your position. This leads us to what I think is the root cause of all of these clearly avoidable errors in judgment.

Program Management 101

Personnel in the agency familiar with the requirements to replace the aging procurement management system understood from the outset that the total cost would probably fall somewhere between $20M and $40M. Yet every effort was made to avoid that reality by splitting requirements and by failing to apply a programmatic approach to a clearly complex undertaking.

This would have required the agency to take the steps to establish an acquisition strategy, open the requirement based on a clear performance work statement to free and open competition, and then to establish a program management office to manage the effort and to allow oversight of progress and assessment of risks in a formalized environment.

The establishment of a program management organization would have prevented the lack of financial control, and would have put in place sufficient oversight by senior management to ensure progress and achievement of organizational goals. In a word, a good deal of the decision-making was based on doing stupid things on purpose.

The Recommendations

In reviewing the recommendations of the internal investigation, I think my own personal involvement in a very similar issue from 1985 will establish a baseline for comparison.

As I indicated earlier, in the early 1980s, as a young Navy commissioned officer, I was part of the first class of what was to be the Navy Acquisition Corps, stationed at the Supply Center in San Diego, California. I had served as a contracting intern and, after extensive education through the University of Virginia Darden School of Business, the extended Federal Acquisition Regulation (FAR) courses that were given at the time at Fort Lee, Virginia, and coursework provided by other federal acquisition organizations and colleges, I attained my warrant as a contracting officer. I also worked on acquisition reform issues, some of which were eventually adopted by the Navy and DoD.

During this time NAS Miramar was the home of Top Gun. In 1984 Congressman Duncan Hunter (the elder, not the currently indicted junior of the same name, though from the same San Diego district), inspired by news of a $7,600 coffee maker and a $435 hammer publicized by the founders of POGO, was given documents by a disgruntled employee at the base regarding the acquisition of replacement E-2C ashtrays at a cost of $300 each. He presented them to the Base Commander, which launched an investigation.

I served on the JAG investigation under the authority of the Wing Commander regarding the acquisitions and then, upon the firing of virtually the entire chain of command at NAS Miramar, which included the Wing Commander himself, became the Officer-in-Charge of Supply Center San Diego Detachment NAS Miramar. Under Navy Secretary Lehman’s direction I was charged with determining the root cause of the acquisition abuses and given 60-90 days to take immediate corrective action and clear all possible discrepancies.

I am not certain who initiated the firings of the chain of command. From talking with senior personnel at the time, it appeared to have been instigated in a fit of pique by the sometimes volcanic Secretary of Defense Caspar Weinberger. While I am sure that Secretary Weinberger experienced some emotional release through that action, placed in perspective his blanket firing of the chain of command was, in my opinion, poorly advised and counterproductive. It was also grossly unfair, given what my team and I found as the root cause.

First of all, the ashtray was misrepresented in the press as a $600 ashtray because, during the JAG investigation, I had sent a sample ashtray to the Navy industrial activity at North Island with a request to tell me what the fabrication of one ashtray would cost and to provide the industrial production curve that would reduce the unit price to a reasonable level. The $600 figure was the cost to fabricate just one. A “whistleblower” at North Island took this slice of information out of context and leaked it to the press. So the $300 ashtray, which was bad enough, became the $600 ashtray.

Second, the disgruntled employee who gave the files to Congressman Hunter had been laterally assigned out of her position as a contracting officer by the Supply Officer for the very reason that her pricing of the ashtray was not reasonable, among other unsatisfactory performance measures indicating that she was not fit to perform those duties.

Third, there was a systemic issue in the acquisition of odd parts. For some reason there was an ashtray in the cockpit of the E-2C. These aircraft were able to stay in the air for an extended period of time. A pilot had actually decided to light up during a local mission and, his attention diverted, lost control of the aircraft and crashed. Secretary Lehman ordered corrective action. The corrective action taken by the squadron at NAS Miramar was to remove the ashtrays from the cockpits and store them in a hangar locker.

Fourth, there was an issue of fraud. During an inspection the spare ashtrays were removed and deposited in the scrap metal dumpster on base. The tech rep for the DoD supplier on base retrieved the ashtrays and sold them back to the government at the price to fabricate one, given that the supply system had not experienced enough demand to keep them in stock.

Fifth, back to the systemic issue. When an aircraft is to be readied for deployment there can be no holes representing missing items in the cockpit. A deploying aircraft in this condition is grounded and a high priority “casualty report,” or CASREP, is generated. The CASREP was referred to purchasing, which then paid $300 for each ashtray. The contracting officer, however, feeling pressured by the high priority requisition, did not do due diligence in questioning the supplier on the cost of the ashtray. In addition, given that several aircraft deploy, there were a number of these requisitions that should have led the contracting officer to look into the matter more closely to determine price reasonableness.

Furthermore, I found that buying personnel were not properly trained, that systems and procedures were not established or enforced, that the knowledge of the FAR was spotty, and that procurements did not go through multiple stages of review to ensure compliance with acquisition law, proper documentation, and administrative procedure.

Note that in the end this “scandal” was born of a combination of systemic issues, poor decision-making, lack of training, employee discontent, and incompetence.

I successfully corrected the issues at NAS Miramar during the prescribed time set by the Secretary of the Navy, worked with the media to instill public confidence in the system, built up morale, established better customer service, reduced procurement acquisition lead times (PALT), recommended necessary disciplinary action where it seemed appropriate, particularly in relation to the problematic employee, recovered monies from the supplier, referred the fraud issues to Navy legal, and turned over duties to a new chain of command.

NAS Miramar procurement continued to do its necessary job and is still there.

What the higher chain of command did not do was to take away the procurement authority of NAS Miramar. It did not eliminate or reduce the organization. It did not close NAS Miramar.

It requires leadership and focus to take effective corrective action to not only fix a broken system, but to make it better while the corrective actions are being taken. As I outlined above, DCMA performs an essential mission. As it transitions to a data-driven approach and works to reduce redundancy and inefficiency in its systems, it will require more powerful technologies to support its CAS function, and the ability to acquire those technologies to support that function.

Takin’ Care of Business — Information Economics in Project Management

Neoclassical economics abhors inefficiency, and yet inefficiencies exist.  Among the core issues that create inefficiencies is the asymmetrical nature of information.  Asymmetry is an accepted cornerstone of economics that leads to inefficiency.  We can see in our daily lives and employment the effects of one party in a transaction having more information than the other:  knowing whether the used car you are buying is a lemon, measuring risk in the purchase of an investment and, apropos to this post, identifying how our information systems allow us to manage complex projects.

Regarding this last proposition we can peel this onion down through its various levels: the asymmetry in the information between the customer and the supplier, the asymmetry in information between the board and stockholders, the asymmetry in information between management and labor, the asymmetry in information between individual SMEs and the project team, etc.–it’s elephants all the way down.

This asymmetry, which drives inefficiency, is exacerbated in markets that are dominated by monopoly, monopsony, and oligopoly power.  When informed by the work of Hart and Holmström regarding contract theory, which recently garnered the Nobel in economics, we have a basis for understanding the internal dynamics of projects in seeking efficiency and productivity.  What is interesting about contract theory is that it incorporates the concept of asymmetrical information (labeled as adverse selection), but expands this concept in human transactions at the microeconomic level to include considerations of moral hazard and the utility of signalling.

The state of asymmetry and inefficiency is exacerbated by the patchwork quilt of “tools”–software applications that are designed to address only a very restricted portion of the total contract and project management system–that are currently deployed as the state of the art.  These tend to require the insertion of a new class of SME to manage data by essentially reversing the efficiencies in automation, involving direct effort to reconcile differences in data from differing tools. This is a sub-optimized system.  It discourages optimization of information across the project, reinforces asymmetry, and is economically and practically unsustainable.

The key in all of this is ensuring that sub-optimal behavior is discouraged, and that those activities and behaviors that are supportive of more transparent sharing of information and, therefore, contribute to greater efficiency and productivity are rewarded.  It should be noted that more transparent organizations tend to be more sustainable, healthier, and with a higher degree of employee commitment.

The path forward where there is monopsony power–where there is a dominant buyer–is to impose the conditions for normative behavior that would otherwise be leveraged through practice in a more open market.  For open markets not dominated by one player as either buyer or seller, the key is to institute practices that reward behavior that reduces the effects of asymmetrical information, and to build disincentives against opaque behavior into business transactions on the open market.

In the information management market as a whole, the trends that are working against asymmetry and inefficiency involve the reduction of data streams, the construction of cross-domain data repositories (or reservoirs) that allow for the satisfaction of multiple business stakeholders, and the introduction of systems that are more open and adaptable to the needs of the project system in lieu of a limited portion of the project team.  These solutions exist, yet their adoption is hindered because of the long-term infrastructure that is put in place in complex project management.  This infrastructure is supported by incumbents that reinforce the status quo.  Because of this, the interval from the time a market innovation is introduced to the time it is adopted in project-focused organizations is usually several years.

This argues for establishing an environment that is more nimble.  This involves the adoption of a series of approaches to achieve the goals of broader information symmetry and efficiency in the project organization.  These are:

a. Institute contractual relationships, both internally and externally, that encourage project personnel to identify risk.  This would include incentives to kill efforts that have breached their framing assumptions, or to consolidate progress that the project has achieved to date–sending it as it is to production–while killing further effort that would breach framing assumptions.

b. Institute policy and incentives on the data supply end to reduce the number of data streams.  Toward this end both acquisition and contracting practices should move to discourage proprietary data dead ends by encouraging normalized and rationalized data schemas that describe the environment using a common or, at least, compatible lexicon.  This reduces the inefficiency derived from opaqueness as it relates to software and data.

c.  Institute policy and incentives on the data consumer end to leverage the economies derived from the increased computing power of Moore’s Law by scaling data to construct interrelated datasets across multiple domains that will provide a more cohesive and expansive view of project performance.  This involves the warehousing of data in a common repository or a reduced set of repositories.  The goal is to satisfy multiple project stakeholders from multiple domains using as few streams as necessary and to encourage KDD (Knowledge Discovery in Databases).  This reduces the inefficiency derived from data opaqueness, but also from the traditional line-and-staff organization that has tended to stovepipe expertise and information.  (A minimal sketch of this idea follows the list.)

d.  Institute acquisition and market incentives that encourage software manufacturers to engage in positive signalling behavior that reduces the opaqueness of the solutions being offered to the marketplace.
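
As an illustration of points (b) and (c), here is a minimal sketch of my own, assuming hypothetical field names and mappings rather than describing any particular product.  It normalizes two proprietary data streams (a cost extract and a schedule extract) into a common lexicon and joins them on the WBS key, so that multiple stakeholders can be served from one repository instead of reconciling differing tools by hand.

```python
# Minimal sketch (hypothetical field names): normalize a cost extract and a
# schedule extract into a common lexicon, then join them on the WBS key so
# that multiple stakeholders can be served from one repository.

COST_FIELD_MAP = {"ChargeNo": "wbs_id", "ACWP_CUM": "acwp", "BCWP_CUM": "bcwp"}
SCHED_FIELD_MAP = {"TaskWBS": "wbs_id", "PctComplete": "pct_complete"}

def normalize(record, field_map):
    """Rename proprietary field names into the common schema."""
    return {field_map[k]: v for k, v in record.items() if k in field_map}

def merge_by_wbs(cost_rows, sched_rows):
    """Join normalized cost and schedule rows on the common WBS key."""
    merged = {}
    for row in cost_rows + sched_rows:
        merged.setdefault(row["wbs_id"], {}).update(row)
    return merged

if __name__ == "__main__":
    cost = [normalize({"ChargeNo": "1.2.3", "ACWP_CUM": 120.0, "BCWP_CUM": 110.0},
                      COST_FIELD_MAP)]
    sched = [normalize({"TaskWBS": "1.2.3", "PctComplete": 0.55}, SCHED_FIELD_MAP)]
    print(merge_by_wbs(cost, sched))
```

The point is not the code but the principle: once the streams share a lexicon, the repository, not the individual tool, becomes the source of record.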

In summary, the current state of project data is one characterized by “best-of-breed” patchwork quilt solutions that tend to increase direct labor, reduce and limit productivity, and drive up cost.  At the end of the day the ability of the project to handle risk and adapt to technical challenges rests on the reliability and efficiency of its information systems.  A patchwork system fails to meet the needs of the organization as a whole and, in the end, is not “takin’ care of business.”

Technical Foul — It’s Time for TPI in EVM

For more than 40 years the discipline of earned value management (EVM) has gone through a number of changes in its descriptions, governance, and procedures.  During that same time its community has been resistant to improvements in its methodology or to changes that extend its value when taking into account other methods that either augment its usefulness, or that potentially provide more utility in the area of performance management.  This has been especially the case where it is suggested that EVM is just one of many methodologies that contribute to this assessment under a more holistic approach.

Instead, it has been asserted that EVM is the basis for integrated project management.  (I disagree–and solely on the evidence that if that were so, then project managers would more fully participate in its organizations and conferences.  This would then pose the problem that PMs might then propose changes to EVM that, well…default to the second sentence in this post.)  As evidence, one need only note the resistance to such recent developments as earned schedule, technical performance, and risk–most especially risk based on Bayesian analysis.

Some of this resistance is understandable.  First, it took quite a long time just to get to a consensus on the application of EVM, though its principles and methods are based on simple and well-proven statistical methods.  Second, the industries in which EVM has been accepted are sensitive to risk, and so a bureaucracy of practitioners has grown to ensure both consensus and compliance with accepted methods.  Third, the community of EVM practitioners consists mostly of cost analysts, trained in simple accounting, arithmetic, and statistical methodology.  It is thus a normal human bias to assume that the path of one’s previous success is the way to future success, even though our understanding of the design space (reality) that we inhabit has been enhanced through new knowledge.  Fourth, there is a lot of data that applies to project management, and the EVM community is only now learning of the ways that this other data impacts our understanding of measuring project performance and the probability of reaching project goals in rolling out a product.  Finally, there is the less defensible reason that a lot of people and firms have built careers that depend on maintaining the status quo.

Our ability to integrate disparate datasets is accelerating on a yearly basis thanks to digital technology, and the day when all relevant factors in project and enterprise performance are integrated is inevitable.  To be frank, I am personally engaged in such projects and am assisting organizations in moving in this direction today.  Regardless, we can make an advance in the discipline of performance management by pulling down low-hanging fruit.  The most reachable one, in my opinion, is technical performance measurement.

The literature of technical performance has come quite a long way, thanks largely to the work of the Institute for Defense Analyses (IDA) and others, particularly the National Defense Industrial Association through the publication of their predictive measures guide.  This has been a topic of interest to me since its study was part of my duties back when I was still wearing a uniform.  The early results of these studies resulted in a paper that proposed a method of integrating technical performance, earned value, and risk.  A pretty comprehensive overview of the literature and guidance for technical performance can be found at this presentation by Glen Alleman and Tom Coonce given at EVM World in 2015.  It must be mentioned that Rick Price of Lockheed Martin also contributed greatly to this literature.

Keep in mind what is meant when we decide to assess technical performance within the context of R&D.  It is an assessment against expected or specified:

a.  Measures of Effectiveness (MoE)

b.  Measures of Performance (MoP), and

c.  Key Performance Parameters (KPP)

The opposition from the project management community to widespread application of this methodology took three forms.  First, it was argued, the method used to adjust the value of work earned (and thus the CPI) seemed always to have a negative impact.  Second, there are technical performance factors that transcend the WBS, and so it is hard to properly adjust the individual control accounts based on the contribution of technical performance.  Third, some performance measures defy an assessment of value in a time-phased manner.  The most common example has been tracking the weight of an aircraft, which has contributors from virtually all of the components that go into it.

Let’s take these in order.  But lest one think that this perspective is an artifact of 1997: just a short while ago, in the A&D community, the EVM policy office at DoD attempted to apply the somewhat modest proposal of ensuring that technical performance be included as an element in EVM reporting.  Note that the EIA 748 standard states this clearly and has done so for quite some time.  Regardless, the same three core objections were raised in comments from the industry.  This caused me to ask some further in-depth questions, and my revised perspective follows below.

The first condition occurred, in many cases, due to optimism bias in registering earned value, which often occurs when using a single point estimate of percent complete provided by a limited population of experts contributing to an assessment of the element.  Fair enough, but as you can imagine, it’s not a message that a PM wants to hear or will necessarily accept or admit, regardless of the merits.  There are more than enough pathways to second-guessing and testing selection bias at other levels of reporting.  Glen Alleman, in his Herding Cats post of 12 August, provides a very good list of the systemic reasons for program failure.

Another factor is that the initial methodology did possess a skewing toward more pessimistic results.  This was not entirely apparent at the time because the statistical methods applied did not make that clear.  But, to critique that first proposal, which was the result of contributions from IDA and other systems engineering technical experts, the 10-50-90 method in assessing probability along the bandwidth of the technical performance baseline was too inflexible.  The graphic that we proposed is as follows and one can see that, while it was “good enough”, if rolled up there could be some bias that required adjustment.

[TPM Graphic]

Note that this range around 50% can be interpreted to be equivalent to the bandwidth found in the presentation given by Alleman and Coonce (as well as the Predictive Measures Guide), though the intent here was to perform an assessment based on a simplified means of handicapping the handicappers–or more accurately, performing a probabilistic assessment on expert opinion.  The method of performing Bayesian analysis to achieve this had not yet matured for such applications, and so we proposed a simple method that our practitioners could understand and that still met the criteria of a valid approach.  The reason for the difference in the graphic resides in the fact that the original assessment did not view this time-phasing as a continuous process, but rather as an assessment at critical points along the technical baseline.
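
For readers unfamiliar with the mechanics, here is a minimal sketch of how a 10-50-90 assessment can be reduced to a single expected value.  This is my own illustration, not the exact computation from the 1997 paper: experts provide 10th, 50th, and 90th percentile estimates of the parameter that will actually be demonstrated at the milestone, and a PERT-style weighting (one common convention for three-point estimates) collapses them into an expected achievement expressed as a fraction of the planned value.

```python
# Minimal sketch (my own illustration): collapse 10th/50th/90th percentile
# expert estimates of the value to be demonstrated at a milestone into an
# expected achievement, expressed as a fraction of the planned value.
# The PERT-style weights are one common convention for three-point
# estimates, not necessarily the weighting used in the original method.

def expected_value(p10, p50, p90):
    """Three-point estimate weighting the median most heavily."""
    return (p10 + 4.0 * p50 + p90) / 6.0

def achievement_fraction(p10, p50, p90, planned):
    """Expected achievement relative to the planned (baseline) value."""
    return expected_value(p10, p50, p90) / planned

if __name__ == "__main__":
    # Hypothetical TPM: planned thrust-to-weight of 1.10 at this milestone.
    print(round(achievement_fraction(p10=0.98, p50=1.05, p90=1.12, planned=1.10), 3))
```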

From a practical perspective, however, the banding proposed by Alleman and Coonce takes into account the noise that will be experienced during the life cycle of development, and so solves the slight skewing toward pessimism.  We’ll leave aside for the moment how we determine the bands and, thus, acceptable noise as we track along our technical baseline.

The second objection is valid only insofar as the alignment of work-related indicators varies from project to project.  For example, some legs of the WBS tree go down nine levels and others go down five levels, based on the complexity of the work and the organizational breakdown structure (OBS).  Thus, where we peg the control account (CA) and work package (WP) level within each leg of the tree becomes relative.  Do the schedule activities have a one-to-one or many-to-one relationship with the WP level in all legs?  Or is the CA level the lowest at which the alignment can be made in certain legs?

Given that planning begins with the contract spec and (ideally) proceeds from IMP –> IMS –> WBS –> PMB in continuity, we will be able to determine the contributions of TPM to each WBS element at the appropriate level.

This then leads us to another objection, which is that not all organizations bother with developing an IMP.  That is a topic for another day, but whether such an artifact is created formally or not, one must achieve in practice the purpose of the IMP in order to get from contract spec to IMS under a sufficiently complex effort to warrant CPM scheduling and EVM.

The third objection is really a child of the second objection.  There very well may be TPMs, such as weight, with so many contributors that distributing the impact would both dilute the visibility of the TPM and present a level of arbitrariness in distribution that would render its tracking useless.  (Note that I am not saying that the impact cannot be distributed because, given modern software applications, this can easily be done in an automated fashion after configuration.  My concern is in regard to visibility on a TPM that could render the program a failure).  In these cases, as with other indicators that must be tracked, there will be high level programmatic or contract level TPMs.

So where do we go from here?  Alleman and Coonce suggest adjusting the formula for BCWP, where P is informed by technical risk.  The predictive measures guide takes a similar approach and emphasizes the systems engineering (SE) domain in getting to an assessment of the impact of reported EVM element performance.  The recommendation of the 1997 project that I headed in assignments across Navy and OSD was to inform performance based on a risk assessment of probable achievement at each discrete performance milestone.  What all of these studies have in common–and share with standard industry practice using SE principles–is an intermediate assessment, informed by risk, of a technical performance index against a technical performance baseline.

So let’s explore this part of the equation more fully.

Given that the MoEs, MoPs, and KPPs are identified for the project, different methods of determining progress apply.  There can be a very simplistic set of TPMs that, through the acquisition or fabrication of compliant materials, meet contractual requirements.  These are contract-level TPMs.  Depending on contract type, achievement of these KPPs may result in either financial penalties or financial reward.  Then there are the R&D-dependent MoEs, MoPs, and KPPs that require more discrete time-phasing and ties to the physical completion of work documented through the WBS structure.  As with EVM in measuring the value of work, our index of physical technical achievement can be determined through various methods: current EVM methods, simulated Monte Carlo technical risk, 10-50-90 risk assessment, Bayesian analysis, etc.  All of these methods are designed to militate against selection bias and the inherent limitations of small sample size and, hence, extreme subjectivity.  Still, expert opinion is a valid method of assessment and (in cases where it works) better than a WAG or coin flip.
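
As a concrete illustration of the Monte Carlo option listed above (a sketch of my own with hypothetical parameters, not a prescribed method), technical risk on a weight-type TPM can be simulated to estimate the probability of staying within the milestone’s not-to-exceed threshold, and that probability can then inform the index of physical technical achievement.

```python
# Minimal sketch (hypothetical parameters): Monte Carlo simulation of a
# weight-type TPM to estimate the probability of staying within the
# not-to-exceed threshold at a milestone.  The resulting probability can
# then inform the index of physical technical achievement.
import random

def prob_within_threshold(mean, sigma, threshold, trials=100_000, seed=1):
    """Fraction of simulated outcomes at or below the not-to-exceed value."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.gauss(mean, sigma) <= threshold)
    return hits / trials

if __name__ == "__main__":
    # Hypothetical: current weight estimate 10,250 lbs (sigma 180),
    # not-to-exceed threshold at this milestone of 10,400 lbs.
    print(round(prob_within_threshold(10_250.0, 180.0, 10_400.0), 3))
```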

Taken together these TPMs can be used to determine the technical achievement of the project or program over time, with a financial assessment of the future work needed to bring it in line.  These elements can be weighted, as suggested by Coonce, Alleman, and Price, through an assessment of relative risk to project success.  Some of these TPIs will apply to particular WBS elements at various levels (since their efforts are tied to specific activities and schedules via the IMS), and the most important project and program-level TPMs are reflected at that level.
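
The weighting itself is straightforward to sketch.  The following is my own illustration of the general idea rather than the specific formulation by Coonce, Alleman, and Price: each TPI is weighted by its assessed relative risk to project success and rolled up into a program-level technical index that can be set alongside CPI and SPI.  All names and values are hypothetical.

```python
# Minimal sketch (hypothetical names and values): weight individual TPIs by
# their assessed relative risk to project success and roll them up into a
# program-level technical index for comparison against CPI and SPI.

def weighted_technical_index(tpis, weights):
    """Risk-weighted average of TPIs; weights are normalized to sum to one."""
    total = sum(weights[name] for name in tpis)
    return sum(tpis[name] * (weights[name] / total) for name in tpis)

if __name__ == "__main__":
    tpis = {"weight": 0.92, "range": 1.01, "sensor_detection": 0.88}
    weights = {"weight": 0.5, "range": 0.2, "sensor_detection": 0.3}
    program_tpi = weighted_technical_index(tpis, weights)
    cpi, spi = 1.02, 0.97  # hypothetical EVM indices for the comparison
    print(round(program_tpi, 3), cpi, spi)
```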

What about double counting?  A comparison of the aggregate TPIs and the aggregate CPI and SPI will determine the fidelity of the WBS to technical achievement.  Furthermore, a proper baseline review will ensure that double counting doesn’t occur.  If the element can be accounted for within the reported EVM elements, then it need not be tracked separately by a TPI.  Only those TPMs that cannot be distributed or that represent such overarching risk to project success need be tracked separately, with an overall project assessment made against MR or any reprogramming budget available that can bring the project back into spec.

My last post on project management concerned the practices at what was called Google X.  There, incentives are given to teams that identify an unacceptably high level of technical risk that will fail to pay off within the anticipated planning horizon.  If the A&D and DoD community is to become more nimble in R&D, it needs the necessary tools to apply such long-established concepts as Cost-As-An-Independent-Variable (CAIV) and Agile methods (without falling into the bottomless pit of unsupported assertions by the cult, such as the elimination of estimating and performance tracking).

Even with EVM, the project and program management community needs a feel for where their major programmatic efforts are in terms of delivery and deployment, looking at the entire logistics and life cycle system.  The TPI can be the logic check for whether to push ahead, to finish the low-risk items remaining in R&D and move to first item delivery, or to take the lessons learned from the effort, terminate the project, and incorporate those elements into the next generation project or related components or systems.  This aligns with the concept of project alignment with framing assumptions as an early indicator of continued project investment at the corporate level.

No doubt, existing information systems, many built using 1990s technology and limited to line-and-staff functionality, do not provide the ability to do this today.  Of course, these same systems do not take into account a whole plethora of essential information regarding contract and financial management: from the tracking of CLINs/SLINs, to work authorization and change order processing, to the flow of funding from TAB to PMB/MR and from PMB to CA/UB/PP, contract incentive threshold planning, and the list can go on.  What this argues for is innovation and rewarding those technology solutions that take a more holistic approach to project management within its domain as a subset of program, contract, and corporate management–and such solutions that do so without some esoteric promise of results at some point in the future after millions of dollars of consulting, design, and coding.  The first company or organization that does this will reap the rewards of doing so.

Furthermore, visibility equals action.  Diluting essential TPMs within an overarching set of performance metrics may have the effect of hiding them and failing to properly identify, classify, and handle risk.  Including TPI as an element at the appropriate level will provide necessary visibility to get to the meat of those elements that directly impact programmatic framing assumptions.

The Revolution Will Not Be Televised — The Sustainability Manifesto for Projects

While doing stuff and living life (which seems to take me away from writing) there were a good many interesting things written on project management.  The very insightful Dave Gordon at his blog, The Practicing IT Project Manager, provides a useful weekly list of the latest contributions to the literature that are of note.  If you haven’t checked it out please do so–I recommend it highly.

While I was away Dave posted a link to an interesting piece on the concept of sustainability in project management.  Along those lines, three PM professionals have proposed a Sustainability Manifesto for Projects.  As Dave points out in his own post on the topic, it rests on three basic principles:

  • Benefits realization over metrics limited to time, scope, and cost
  • Value for many over value of money
  • The long-term impact of our projects over their immediate results

These are worthy goals and no one needs to have me rain on their parade.  I would like to see these ethical principles, which is what they really are, incorporated into how we all conduct ourselves in business.  But then there is reality–the “is” over the “ought.”

For example, Dave and I have had some correspondence regarding the nature of the marketplace in which we operate through this blog.  Some time ago I wrote a series of posts here, here, and here providing an analysis of the markets in which we operate both in macroeconomic and microeconomic terms.

This came in response to one of my colleagues making the counterfactual assertion that we operate in a “free market” based on the concept of “private enterprise.”  Apparently, such just-so stories are lies we have to tell ourselves to make the hypocrisy of daily life bearable.  But, to bring the point home, in talking about the concept of sustainability, what concrete measures will the authors of the manifesto bring to the table to counter the financialization of American business that has occurred over the past 35 years?

For example, the news lately has been replete with stories of companies moving plants from the United States to Mexico.  This despite rising and record corporate profits during a period of stagnating median working class incomes.  Free trade and globalization have been cited as the cause, but this involves more hand waving and the invocation of mantras rather than analysis.  There have also been the predictable invocations of the Ayn Randian cult and the pseudoscience* of Social Darwinism.  Those on the opposite side of the debate characterize things as a morality play, with the public good versus greed being the main issue.  All of these explanations miss their mark, some more than others.

An article setting aside a few myths was recently published by Jonathan Rothwell at Brookings, which came to me via Mark Thoma’s blog: “Make elites compete: Why the 1% earn so much and what to do about it”.  Rothwell looks at the relative gains of the market over the last 40 years and finds that corporate profits, while doing well, have not been the driver of inequality that Robert Reich and other economists would have it be.  In looking at another myth that has been promulgated by Greg Mankiw, he finds that the rewards of one’s labors are not related to any special intelligence or skill.  On the contrary, one’s entry into the 1% is actually related to what industry one chooses to enter, regardless of all other factors.  This disparity is known as a “pay premium”.  As expected, petroleum and coal products, financial instruments, financial institutions, and lawyers are at the top of the pay premium.  What is not, against all expectations of popular culture and popular economic writing, is the IT industry–hardware, software, etc.  Though they are the poster children of new technology, Bill Gates, Mark Zuckerberg, and others are the exception to the rule in an industry that is marked by a 90% failure rate.  Our most educated and talented people–those in science, engineering, the arts, and academia–are poorly paid, with negative pay premiums associated with their vocations.

The financialization of the economy is not a new or unnoticed phenomenon.  Kevin Phillips, in Wealth and Democracy, which was written in 2003, noted this trend.  There have been others.  What has not happened as a result is a national discussion on what to do about it, particularly in defining the term “sustainability”.

For those of us who have worked in the acquisition community, the practical impact of financialization and de-industrialization has made logistics challenging, to say the least.  As a young contract negotiator and Navy Contracting Officer, I was challenged to support the fleet when any kind of fabrication or production was involved, especially for non-stocked machined spares of any significant complexity or size.  Oftentimes my search would find that the company that manufactured the items was out of business, its pieces sold off during Chapter 11, and most of the production work for those items that remained available done seasonally out of country.  My “out” at the time–during the height of the Cold War–was to take the technical specs, which were paid for and therefore owned by the government, to one of the Navy industrial activities for fabrication and production.  The skillset for such work was still fairly widespread, supported by the quality control provided by a fairly well-unionized and trade-based workforce–especially among machinists and other skilled workers.

Given the new and unique ways judges and lawyers have applied privatized IP law to items financed by the public, such opportunities to support our public institutions and infrastructure, as I once was able to do, have been largely closed out.  Furthermore, the places to send such work, where it is possible at all, have become vanishingly few.  Perhaps digital printing will be the savior for manufacturing that it is touted to be.  What it will not do is stitch back the social fabric that has been ripped apart in communities hollowed out by the loss of their economic base, which, when replaced, comes with lowered expectations and quality of life–and often shortened lives.

In the end, though, such “fixes” benefit a shrinkingly few individuals at the expense of the democratic enterprise.  Capitalism did not exist when the country was formed, despite the assertion of polemicists to link the economic system to our democratic government.  Smith did not write his pre-modern scientific tract until 1776, and much of what it meant was years off into the future, and its relevance given what we’ve learned over the last 240 years about human nature and our world is up for debate.  What was not part of such a discussion back then–and would not have been understood–was the concept of sustainability.  Sustainability in the study of healthy ecosystems usually involves the maintenance of great diversity and the flourishing of life that denotes health.  This is science.  Economics, despite Keynes and others, is still largely rooted in 18th and 19th century pseudoscience.

I know of no fix or commitment to a sustainability manifesto that includes global, environmental, and social sustainability that makes this possible short of a major intellectual, social or political movement willing to make a long-term commitment to incremental, achievable goals toward that ultimate end.  Otherwise it’s just the mental equivalent to camping out in Zuccotti Park.  The anger we note around us during this election year of 2016 (our year of discontent) is a natural human reaction to the end of an idea, which has outlived its explanatory power and, therefore, its usefulness.  Which way shall we lurch?

The Sustainability Manifesto for Projects, then, is a modest proposal.  It may also simply be a sign of the times, albeit a rational one.  As such, it leaves open a lot of questions, and most of these questions cannot be addressed or determined by the people to which it is targeted: project managers, who are usually simply employees of a larger enterprise.  People behave as they are treated–to the incentives and disincentives presented to them, oftentimes not completely apparent on the conscious level.  Thus, I’m not sure if this manifesto hits its mark or even the right one.

*This term is often misunderstood by non-scientists.  Pseudoscience means non-science, just as alternative medicine means non-medicine.  If any of the various hypotheses of pseudoscience are found true, given proper vetting and methodology, that proposition would simply be called science.  Just as alternative methods of treatment, if found effective and consistent, given proper controls, would simply be called medicine.

Walk This Way — DoD IG Reviews DCMA Contracting Officer Business Systems Deficiencies

The sufficiency and effectiveness of business systems is an essential element in the project management ecosystem.  Far beyond performance measurement of the actual effort, the sufficiency of the business systems to support the effort is essential to its success.  If the systems in place do not properly track and record the transactions behind the work being performed, the credibility of the data is called into question.  Furthermore, support and logistical systems, such as procurement, supply, and material management, contribute in a very real way to work accomplishment.  If that spare part isn’t in-house on time, the work stops.

In catching up on reading this month, I found that the DoD Inspector General issued a report on October 1 showing that in all 21 audits demonstrating business system deficiencies, there were problems with contracting officer timeliness in meeting DFARS deadlines at various milestones.  For example, in 17 of those cases Contracting Officers did not issue final determination letters within 30 days of the report as required by the DFARS.  In eight cases required withholds were not assessed.
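
To make the timeline concrete, here is a minimal sketch (my own illustration, not the agency’s tracking tool or any prescribed system) of the kind of automated check that flags a final determination as overdue once 30 days have elapsed from receipt of the audit report.  The data model and dates are hypothetical.

```python
# Minimal sketch (hypothetical data model): flag business system cases in
# which the final determination letter was not issued within 30 days of
# receipt of the audit report, per the DFARS timeline cited above.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

FINAL_DETERMINATION_WINDOW = timedelta(days=30)

@dataclass
class Case:
    contractor: str
    report_received: date
    final_determination_issued: Optional[date] = None

def is_overdue(case, as_of):
    """True if no timely final determination was issued within 30 days."""
    deadline = case.report_received + FINAL_DETERMINATION_WINDOW
    if case.final_determination_issued is not None:
        return case.final_determination_issued > deadline
    return as_of > deadline

if __name__ == "__main__":
    cases = [
        Case("Contractor A", date(2015, 8, 1), date(2015, 8, 25)),
        Case("Contractor B", date(2015, 8, 1)),  # determination still open
    ]
    for c in cases:
        print(c.contractor, "overdue" if is_overdue(c, date(2015, 9, 15)) else "on time")
```

Even this trivial logic, wired to the actual case data and to notifications, would do more than a manually maintained spreadsheet.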

For those of you who are unfamiliar with the six business systems assessed under DoD contractor project management, they consist of accounting, estimating, material management, purchasing, earned value management, and government property.  The greater the credibility and fidelity of these systems, the greater the level of confidence that the government can have in the data received in reporting on the execution of public funds under these contracts.

To a certain extent the deadlines under the DFARS are so tightly scheduled that they fail to take into account normal delays in operations.  Heaven forbid that the Contracting Officer be on leave when the audit is received, or be engaged in other detailed negotiations.  In recent years the contracting specialty within the government, like government in general, has been seriously understaffed, underfunded, and unsupported.  Given that oftentimes the best and the brightest soon leave government service for greener pastures in the private sector, what is often left are inexperienced and overworked (though mostly dedicated) personnel who do not have the skills or the time to engage in systems thinking in approaching noted deficiencies in these systems.

This pressure for staff reduction, even in areas that have been decimated by austerity politics, is significant.  In the report I could not help but shake my head when an Excel spreadsheet was identified as the “Contractor Business System Determination Timeline Tracking Tool.”  This reminds me of my initial assignment as a young Navy officer, my first as a contract negotiator, where I also performed collateral duties building simple automated tools.  (This led to my being assigned later as the program manager of the first Navy contract and purchase order management system.)  That very first system I built, however, tracked contract milestone deadlines.  It was done in VisiCalc and the year was 1984.

That a major procurement agency of the U.S. Department of Defense is still using a simple and ineffective spreadsheet tracking “tool” more than 30 years after my own experience is both depressing and alarming.  There is a long and winding history on why they would find themselves in this condition, but some additional training, which was the agency’s response to the IG, is not going to solve the problem.  In fact, such an approach is so ineffective it’s not even a Band-Aid.  It’s a bureaucratic function of answering the mail.

The reason why it won’t solve the problem is because there is no magic wand to get those additional contract negotiators and contracting officers in place.  The large intern program of recruiting young people from colleges to grow talent and provide people with a promising career track is long gone.  Interdisciplinary and cross-domain expertise required in today’s world to reflect the new realities when procuring products and services are not in the works.  In places where they are being attempted, outmoded personnel classification systems based on older concepts of division of labor stand in the way.

The list of systemic causes could go on, but in the end it’s not in the DCMA response because no one cares, and if they do care, they can’t do anything about it.  It’s not as if “BEST TALENT LEAVES DUE TO PUBLIC HOSTILITY TO PUBLIC SERVICE”  was a headline of any significance.  The Post under Bezos is not going to run that one anytime soon, though we’ve been living under it since 1981.  The old “thank you for your service” line for veterans has become a joke.  Those who use this line might as well say what that really means, which is: “I’m glad it was you and not me.”

The only realistic way to augment an organization in this state, in order to break the cycle, is to automate the system–and to do it in a way that ties together the entire system.  When I run into my consulting friends and colleagues and they repeat the mantra “software doesn’t matter, it’s all based on systems,” I can only shake my head.  I have learned to be more tactful.

In today’s world software matters.  Try doing today what we used to do with slide rules, scientific calculators, and process charts absent software.  Compare organizations that use the old division-of-labor, “best of breed” tool concept against those who have integrated their systems and use data across domains effectively.  Now tell me again why “software doesn’t matter.”  Not only does it matter but “software” isn’t all the same.  Some “software” consists of individual apps that do one thing.  Some “software” is designed to address enterprise challenges.  Some “software” is designed not only to address enterprise challenges, but also to maximize the value of enterprise data.

In the case of procurement and business systems assessment, the only path forward for the agency will be to apply data-driven measures to the underlying systems and tie those assessments into a systemic solution that includes the contracting officers, negotiators, administrators, contracting officer representatives, the auditors, analysts, and management.  One can see, just in writing one line, how much more complex are the requirements for the automated panacea to replace “Contractor Business System Determination Timeline Tracking Tool.”  Is there any question why the “tool” is ineffective?

If this were the 1990s, though the practice still persists, we would sit down, perform systems analysis, outline the systems and subsystem solutions, and then, through various stages of project management, design the software system to reflect the actual system in place as if organizational change did not exist.  This is the process that has a 90% failure rate across government and industry.  The level of denial of this figure is so great that I run into IT managers and CIOs every day who fail to know it or, if they do, believe that it will not apply to them–and these are brilliant people.  It is selection bias and optimism, with a little (or a lot) of narcissism, run amok.  The physics and math on this are so well documented that you might as well take your organization’s money and go to Vegas with it.  Your local bookie could give you better odds.

The key is risk handling (not the weasel word “management,” not “mitigation” since some risks must simply be accepted, and certainly not the unrealistic term “avoidance”), and the deployment of technology that provides at least a partial solution to the entire problem, augmented by incremental changes to incorporate each system into the overall solution. For example, DeLong and Froomkin’s seminal paper on what they called “The Next Economy” holds true today.  The lack of transparency in software technologies requires a process whereby the market is surveyed, vendors must go through a series of assessments and demonstration tests, and where the selected technology then goes through stage gates: proof-of-concept, pilot, and, eventually deployment.  Success at each level gets rewarded with proceeding to the next step.

Thus, ideally the process includes introducing into the underlying functionality the specific functionality required by the organization through Agile processes where releasable versions of the solution are delivered at the end of each sprint.  One need not be an Agile Cultist to do this.  In my previous post I referred to Neil Killick’s simple checklist for whether you are engaged in Agile.  It is the best and most succinct distillation of both the process and value inherent in Agile that I have found to date, with all of the “woo-woo” taken out.  For an agency as Byzantine as DCMA, this is really the only realistic and effective approach.

DCMA is an essential agency in DoD acquisition management, but it cannot do what it once did under a more favorable funding environment.  To be frank, it didn’t even do its job all that well when a more favorable condition was in place, though things were better.  But this is also a factor in why it finds itself in its current state.  It was punished for its transgressions, perhaps too much.  Several waves of personnel cuts, staff reductions, and domain and corporate knowledge loss on top of the general trend has created an agency in a condition of siege.  As with any organization under siege, backbiting and careerism for those few remaining is rewarded.  Iconoclasts and thought leaders stay for a while before being driven away.  They are seen as being too risky.

This does not create a condition for an agency ready to accept or quickly execute change through new technology.  What it does do is allow portions of the agency to engage in cargo cult change management.  That is, it has the appearance of change but keeps self-interest comfortable and change in its place.  Over time–several years–with the few remaining resources committed to this process, they will work the “change.”  Eventually, they may even get something tangible, though suboptimized to conform to rice bowls; preferably after management has their retirement plans secured.

Still, the reality is that DCMA must be made to do its job because it is in the best interests of the U.S. Department of Defense.  The panacea will not be found through “collaboration” with industry, which consists of the companies that DCMA is tasked with overseeing and regulating.  We all know how well deregulation and collaboration have worked in the financial derivatives, banking, mortgage, and stock markets.  Nor will it come from organic efforts within an understaffed and under-resourced agency that will be unable to leverage the best and latest technology solutions under the unforgiving math of organic IT failure rates.  Nor will it come from the long-outmoded approach of deploying suboptimized “tools” to address a particular problem.  The proper solution is to leverage effective COTS solutions that facilitate the challenge of systems integration and thinking.

 

 

I Heard It Through the Grapevine — Self Certification of Business Systems

Despite the best of intentions, blogging this week has been sparse, my time filled with contract negotiations and responses to solicitations.  Most recently on my radar is the latest proposed DFARS rule to allow contractors to self-certify their business systems.  Paul Cederwall at the Pacific Northwest Government Contracting Update blog has a lot to say about the rule that is interesting, but he gets some important things wrong.

To provide a little background, a DFARS requirement that has been in place since May 18, 2011 established six business systems in which contractors must demonstrate accountability and traceability, so that there is a high degree of confidence in the integrity of the underlying systems of the contractor receiving award of a government contract.  You can find the language here.  Given that this is the taxpayer’s money, while there was a lot of fear and loathing about how the rule would be applied, since it included some teeth–the threat of a withhold on payments–most individuals involved in acquisition reform welcomed it as a means of handling risk, given that one of the elements of making an award is “responsibility.”  (This is one leg of the “three-legged stool test” that must be passed prior to a contracting officer making an award, the others being responsiveness, and price and price-related factors.  This last could include value determinations.)

The concept of responsibility is a loaded one, calling on the contracting officer to apply judgment, business knowledge and acumen, and analytical knowledge.  The Corporate FindLaw site has a very good summary of the elements, as follows:

“the FAR requires a prospective contractor to (1) have adequate financial resources to perform the contract; (2) be able to comply with the required or proposed delivery or performance schedule; (3) have a satisfactory performance record; (4) have a satisfactory record of integrity and business ethics; (5) have the necessary organization, experience, accounting and operational controls, and technical skills; (6) have the necessary production, construction, and technical equipment and facilities; and (7) be otherwise qualified and eligible to receive an award under applicable laws and regulations.”

Our acquisition systems, especially in regard to extremely large contracts that will turn into the complex projects that I write about here, tend to be pulled in many directions.  The customer, for example, wants what they need and to reduce the procurement lead time as much as possible.  Those who are given oversight responsibility and concern themselves with financial accountability focus on the need for compliance and integrity in the system, and to ensure that funds are being expended for the purpose contracted and in a manner that will lead to the contractually mandated outcome.  The contractors within the competitive range not only bid to win but their proposals are calibrated to take into account considerations of risk, market share and exposure, strategic positioning, and margin.

Thus, the Six Business Systems rule is a way of meeting the legal requirement of determining responsibility, which is part of the contracting officer’s charter, particularly under the real-world conditions imposed by governmental austerity.  But here is the rub.  When I was an active duty Navy contracting officer we had a great deal of resources at our disposal to ensure that we had done our due diligence prior to award.  The military services and the Department of Defense provided auditing resources to ensure the integrity of financial systems, expose rates during the negotiating process to meet the standard of “fair and reasonable,” and to ensure contract compliance and establish reliable reporting of progress based on those audits.

But things have changed and not always for the better.  During the 1980s and after, technology was the first agent of change.  As a matter of fact, I was the second project manager of the Navy Procurement System project in San Diego during that time and so was there at the beginning.  The people around me were prescient–despite the remonstrations to the contrary–that such digitization of procurement processes would result not only in improvements in the quality of information and productivity, but also in reductions in workforce.  The result was that the federal government lost a great deal of corporate knowledge and wisdom while attempting to weed out suspected Luddites.  Hand-in-hand with this technological development came the rise of government austerity, which has become more, not less, severe over the last thirty years.  Thus the public lost more corporate knowledge and wisdom in the areas most sensitive to such losses.

Over this time criticism of the procurement system has seemed like the easiest horse of convenience to beat, especially in the environment of Washington, D.C.  The contracting officer pool is largely inexperienced.  The most experienced, if they last, are largely overworked, which diminishes effectiveness.  New hires are few and far between, especially given hiring and pay freezes.  Internships and mentoring programs that used to compete with the best of private industry have largely disappeared and most training budgets are either non-existent or bare-boned.  The expected procurement “scandals,” the overwhelming majority of which can be directly traced to the conditions described above as opposed to corruption, fraud, waste, or abuse, resulted.

Because of these conditions, the reaction–in the interest of ensuring integrity within the systems rather than finding scapegoats–was first to establish the Business Systems rule, which is in the best tradition of management.  But, given that things became unexpectedly more austere with government shutdowns and sequestration, the agency tasked with enforcing the rule–the Defense Contract Audit Agency (DCAA)–does not have the resources to complete a full review of the systems of the significant number of contractors that provide supplies and services to the U.S. Department of Defense.  Thus, the latest solution was to propose self-certification–one that was also sought by a good many companies in the industry.

There are criticisms of the rule coming from two different perspectives. The first is that self-certification amounts to putting the fox in charge of the hen house. The 2006-07 housing bubble and the resulting banking crisis are an object lesson in insufficient oversight.

The other criticism comes from many in the industry that sought the change. The rub here is that teeth were built into the process, requiring an annual independent CPA audit. DCAA will review the results of the audit and the methodology used to make the certification determination. This is where I part with PNWC. The knee-jerk reaction is to question DCAA's ability to judge whether the audit was completed properly because, after all, the agency was not "competent" to complete the audits to begin with. This is a circular argument, and not a very good one.

As a leader and manager, if I delegate a task (given that I am usually busy with more pressing issues) and put checks and balances in place for its performance, there will still come a time when I want the individual (or individuals) to present me with an accounting of what they did in performing it. This is called leadership and management.

DCAA's legal responsibility in its oversight role is to ensure the integrity of the contractor's systems so that contracting officers can make awards to responsible firms with confidence. DCAA is also accountable for its judgment and process in providing that certification. One can delegate responsibility for the completion of a task, but one cannot delegate accountability.

 


Wild Horses — Horsetrading and Positive Variances in Program Management

Over the last few weeks I was making my rounds and received some good feedback on blog posts found here. One of them had to do with contract harvesting and what it means. I had posted a few scenarios and offered my opinion on how they are or aren't part of normal project management. What I didn't do was point out an obvious example: old-fashioned horsetrading.

A few words on policy for this blog before proceeding, in order to make two points. I work with people in an industry where consensus building and tact are of great importance. There is a very good reason for this. Everyone, as part of the saying goes, has an opinion, but I believe that opinions are only of value if backed up with observations, facts, and supportable conclusions. I throw out a good deal of conclusions and opinions here based on facts that I assume the reader is familiar with or may become familiar with as a result of reading this blog. The facts I cite are just that: they are taken from public sources, and I provide links for the more esoteric ones. The conclusions and opinions derived from them are my own, in my own words, and they are contingent. That is, the opinions expressed are based on experience and empiricist methods, but given new information I am always open to changing my opinion. Sometimes I run something up the flagpole just to see if anyone salutes. That is the first point.

The second is that I cite my sources with links (where links exist) to distinguish the work of others from my own, but there are times when I must cite an observation or opinion of someone else on a non-attribution basis. As with any journalistic enterprise (though this is a blog), I have my sources, and sometimes those sources provide information or opinions that I use to inform my conclusions, but they need to remain anonymous in order to work effectively in their positions. Sometimes it's just an off-hand comment that is of little importance to the utterer, but that sparks some issue in my own mind. There are politics in all kinds of places, and free speech is not entirely safe in the workplace when contrary to an official policy. So I respect a non-attribution policy.

I also firewall off information from my own commercial activities; only items publicly discussed are found here. This is not a gossip column.

Okay, now that we've gotten that out of the way, we can get back to the topic at hand. What if you have a program with a number of positive variances, that is, where your performance shows that you are ahead of schedule and under cost, but there is an area of risk and/or opportunity where those resources could be better applied? What is wrong with negotiating a horse trade? That is, we'll take allocated resources from A, B, and C and apply them to X, Y, and Z. How do we handle those cases, and did I imply that it is wrong to do so?

In the earlier post I posited that taking resources from one area in a project and applying them elsewhere constitutes traditional project replanning. My understanding is that some organizations forbid this type of horsetrading, but it seems clear that it is well within the judgment of the project manager and the contracting authority.
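
To make the idea concrete, here is a minimal sketch of what such a trade might look like, assuming EVM-style control account data. The account names, figures, materiality threshold, and the fifty percent "haircut" on the favorable variance are all invented for illustration; they are not a prescribed rule.

# Illustrative sketch only: identifying positive-variance control accounts as
# candidate "donors" in a replanning trade. Names, figures, and the 50% haircut
# are hypothetical.

accounts = {
    # name: budgeted cost of work performed (BCWP) and actual cost (ACWP)
    "A": {"bcwp": 120.0, "acwp": 100.0},   # under cost
    "B": {"bcwp": 80.0,  "acwp": 70.0},    # under cost
    "C": {"bcwp": 60.0,  "acwp": 58.0},    # roughly on cost
    "X": {"bcwp": 90.0,  "acwp": 115.0},   # over cost -- the risk area
}

def cost_variance(acct):
    """CV = BCWP - ACWP; positive means under cost."""
    return acct["bcwp"] - acct["acwp"]

# Donors: accounts with a favorable variance above a materiality threshold.
THRESHOLD = 5.0
donors = {name: cost_variance(a) for name, a in accounts.items()
          if cost_variance(a) > THRESHOLD}

# Only trade away a conservative share of the favorable variance, since part of
# it may be timing noise rather than real efficiency.
tradable = {name: round(cv * 0.5, 1) for name, cv in donors.items()}

print("Candidate donors and tradable amounts:", tradable)
print("Total available for risk/opportunity areas:", sum(tradable.values()))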

Mo’Better Risk — Tournaments and Games of Failure Part II

My last post discussed economic tournaments and games of failure and how they describe the success and failure of companies, with a comic example for IT start-up companies. Glen Alleman at his Herding Cats blog has a more serious response, handily rebutting those who believe that #NoEstimates, Lean, Agile, and other cult-like fads can overcome the bottom line, that is, that a method alone can reduce inherent risk and drive success. As Glen writes:

“It’s about the money. It’s always about the money. Many want it to be about them or their colleagues, or the work environment, or the learning opportunities, or the self actualization.” — Glen Alleman, Herding Cats

Perfectly good products and companies fail all the time. Oftentimes the best products fail to win the market, or do so only fleetingly. Just think of the rolls of the dead (or walking dead) over the years: Novell, WordPerfect, VisiCalc, Harvard Graphics; the list can go on and on. Thus, one point on which I would deviate from Glen is that it is not always EBITDA. If that were true, then both Facebook and Amazon would not be around today. We see tremendous payouts to companies with promising technologies acquired for outrageous sums of money, though they have yet to make a profit. But for every one of these there are many others that see the light of day for a moment and then flicker out of existence.

So what is going on, and how does this inform our knowledge of project management? In most cases, though obviously not all, the measure of our success is time and money. I've given two cases of success that appeared to be failure in previous posts to this blog: the M1A1 Tank and the ACA. The reason these "failures" were misdiagnosed was that the agreed measure(s) of success were incorrect. Knowing this difference, and where and how it applies, is important.

So how do tournaments and games of failure play a role in project management? I submit that the lesson from these observations is that certain types of behaviors are encouraged that tend to "bake" certain risks into our projects. In high tech we know that there will be a thousand failures for every success, but it is important to keep the players playing; at least it is in the interest of the acquiring organization to do so, and it is in the public interest in many cases as well. We also know that most IT projects, by most measures, both contracted out and organic, tend to realize a high rate of failure. But if you win an important contract or secure an important project, the rewards can be significant.

The behavior that is reinforced in this scenario on the part of the competing organization is to underestimate the cost and time involved in the effort; that is, the so-called "bid to win." On the acquiring organization's part, contracting officers lately have been all too happy to award contracts at prices they know to be too low (and that would normally fall outside the competitive range), even though they realize the offer is significantly below the independent estimate. Thus "buying in" introduces a significant risk that is hard to overcome.

Other behaviors that we see, given the project ecosystem, are a bias toward optimism and requirements instability.

In the first case, the bias toward optimism, we often hear project and program managers dismiss bad news because it is "looking in the rear view mirror." We are "exploring," we are told, and so the end state will not be dictated by history. We often hear a version of this meme in cases where those in power wish to avoid accountability. "Mistakes were made" and "we are focused on the future" are attempts to change the subject and avoid the reckoning that will come. In most cases, however, particularly in project management, the motivations are not dishonest but sociological and psychological. People who tend to build things (engineers in general, software coders, designers, etc.) tend to be an optimistic lot. In very few cases will you find one of them who will refuse to take on a challenge. How many times have we presented a challenge to someone with these traits and heard the refrain "I can do that"? This form of self-delusion can be both an asset and a risk. Who but an optimist would take on any technically challenging project? But this is also the trait that will keep people working to the bitter end in a failure that places the entire enterprise at risk.

I have already spent some bits in previous posts regarding the instability of requirements, but this is part and parcel of the traits that we see within this framework.  Our end users determine that given how things are going we really need additional functionality, features, or improvements prior to the product roll out.  Our technical personnel will determine that for “just a bit more effort” they can achieve a higher level of performance or add capabilities at marginal or tradeoff cost.  In many cases, given the realization that the acquisition was a buy-in, project and program managers allow great latitude in accepting as a change an item that was assumed to be in the original scope.

There is a point at which one or more of these factors is "baked in" to the course that the project will take. We can delude ourselves into believing that we can change the trajectory of the system through the application of methods (Agile, Lean, Six Sigma, PMBOK, etc.), but, in the end, if we exhaust our resources without a road map for how to do this, we will fail. Our systems must be powerful and discrete enough to note the trend that is "baked in" due to factors in the structure and architecture of the effort being undertaken. This is the core risk that must be managed in any undertaking. A good example that applies to a complex topic like Global Warming was recently illustrated by Neil deGrasse Tyson in the series Cosmos, in the segment in which he walks a dog along a beach.

In this example Dr. Tyson is the climate and the dog is the weather. But in our own analogy, Dr. Tyson can be the trajectory of the system, with the dog representing the "noise" of periodic indicators and activity around the effort. We often spend a lot of time and effort (which I would argue is largely unproductive) influencing these transient conditions, as we might in simpler systems, rather than addressing the core inertia of the system itself. That is where the risk lies. Thus, not all indicators are the same. Some measure transient anomalies that have nothing to do with changing the core direction of the system; others are more valuable. These latter indicators are the ones that we need to cultivate and develop, and they reside in an initial measurement of the inherent risk of the system, largely based on its architecture, that is antecedent to the start of the work.
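
For those who like to see the distinction in numbers, here is a minimal sketch, using invented data, of why a transient indicator (this month's change) and a trajectory indicator (the fitted slope over the whole history) tell very different stories about the same effort.

# Illustrative sketch only: separating the "baked in" trajectory of an effort
# from the noise of period-to-period indicators. The data here are invented.
import random

random.seed(1)

periods = list(range(1, 25))
underlying_slope = -0.8            # the real trajectory: steadily eroding performance
trend = [100 + underlying_slope * t for t in periods]
observed = [v + random.gauss(0, 4) for v in trend]   # monthly indicator with noise

# A transient indicator: the latest month-over-month change (mostly noise).
transient = observed[-1] - observed[-2]

# A trajectory indicator: a simple least-squares slope over the whole history.
n = len(periods)
mean_t = sum(periods) / n
mean_y = sum(observed) / n
slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(periods, observed))
         / sum((t - mean_t) ** 2 for t in periods))

print(f"Latest month-over-month change (noisy): {transient:+.1f}")
print(f"Fitted slope over all periods (trend):  {slope:+.2f} per period")

Run it with different seeds and the month-over-month figure swings wildly while the fitted slope stays close to the underlying -0.8; that is the difference between chasing noise and reading the trajectory.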

This is not to say that we can do nothing about the trajectory. A simpler system can be influenced more easily. We cannot recover the effort already expended, which is why even historical indicators are important: they inform our future expectations and, if we pay attention to them, keep us grounded in reality. Even in the case of Global Warming we can change, though gradually, what will be a disastrous result if we allow things to continue on their present course. In a deterministic universe we can influence outcomes based on the contingent probabilities presented to us over time. Thus, we will know whether we have handled the core risk of the system by focusing on these better indicators as the effort progresses. This will affect its trajectory.

Of course, a more direct way of modifying these risks is to make systemic adjustments.  Do we really need a tournament-based system as it exists and is the waste inherent in accepting so much failure really necessary?  What would that alternative look like?

We Gotta Get Out of This Place — Are Our Contracting Systems Agile Enough?

The question in the title refers to agile in the “traditional” sense and not the big “A” appropriated sense.  But I’ll talk about big “A” Agile also.

It also refers to a number of discussions I have been engaged in recently among some of the leading practitioners in the program and project management community. Here are a few data points:

a.  GAO and other oversight agencies have been critical of requirements changes over the life cycle of a project, particularly in DoD and other federal agencies, because they contribute to cost growth. The defense of these changes has been that many of them were necessary in order to meet new circumstances. Okay, that sounds fair enough.

But to my way of thinking, if the change(s) were necessary to keep the project from being obsolete upon deployment of the system, or to correct an emergent threat that would have undermined project success and its rationale, then by all means we need to course correct. But if the changes were not made to address either of those scenarios, but simply to improve the system at more than marginal cost, then they were unnecessary.

How can I make such a broad statement, and what is the alternative, we may ask? My rationale is that the change or changes, if they represent new development involving significant funding, should stand on their own merits, since they essentially constitute a new project.

All of us who have been involved in complex projects have seen cases where, as a result of development (and quite often failure), we discover new methods and technologies within the present scope that garner an advantage not previously anticipated. This doesn't happen as often as we'd like, but it does happen. In my own survey and project developing a methodology for incorporating technical performance into project cost, schedule, and risk assessments, we found that failing a test, for example, had value, since it allowed engineers to determine pathways not only for achieving the technical objective but, oftentimes, for exceeding the parameter. We find that for x% more in investment, as a result of the development, test, milestone review, etc., we can improve the performance of some aspect of the system. In that case, if the cost or effort is marginal, then the improvement is part of the core development process within the original scope. Limited internal replanning may be necessary to incorporate the change, but the remainder of the project can largely go along as planned.
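
A rough way to think about that dividing line is sketched below. The five percent threshold and the figures are hypothetical; the point is simply that a marginal improvement can be absorbed with limited internal replanning, while anything larger deserves its own cost-benefit case.

# Illustrative sketch only: a decision rule for whether a proposed improvement
# is a marginal refinement within existing scope or effectively a new project
# that should stand on its own merits. The 5% threshold is hypothetical.

def classify_improvement(incremental_cost, remaining_budget, marginal_threshold=0.05):
    """Return a recommendation based on the improvement's share of remaining budget."""
    share = incremental_cost / remaining_budget
    if share <= marginal_threshold:
        return f"within scope ({share:.1%} of remaining budget): limited internal replanning"
    return f"stand-alone effort ({share:.1%} of remaining budget): run its own cost-benefit case"

print(classify_improvement(incremental_cost=2.0, remaining_budget=80.0))   # marginal
print(classify_improvement(incremental_cost=15.0, remaining_budget=80.0))  # new project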

Alternatively, inserting new effort in the form of changes to major subsystems involves major restructuring of the project. This disrupts the business rhythm of the project, forcing a cultural shift within the project team to socialize the change and incorporate the new work. Change of this type not only causes what is essentially a reboot of the project, but also tends to add risk to the project and program. This new risk will manifest itself as cost risk initially but, given risk handling, will also manifest itself as technical and schedule risk.

The result of this decision, driven solely by what may seem to be urgent operational considerations, is to undermine project and program timeliness, since there is a financial impact to these decisions. When you increase risk to a program, the reaction of the budget holder is to provide an incentive to the program manager to manage risk more closely. This oftentimes invites what, in D.C. parlance, is called a budget mark, but which to the rest of us is a budget cut. When socialized within the project, such cuts are usually taken out of management reserve or the non-mandatory activities that were put in place as contingencies to handle overall program risk at inception. The mark is usually equal to the amount of internal risk caused by the requirements change. Thus, adding risk is punished, not rewarded, because money is finite and must be applied to projects and programs that demonstrate that they can execute the scope against the plan and expend the funds provided to them. So the total scope (and thus cost) of the project will increase, but the flexibility within the budget base will decrease, since all of that money is now committed to handling risk. Unanticipated risk, therefore, may not be effectively handled in the future.

At first the application of a budget mark in this case may seem counterintuitive, and when I first went through the budget hearing process it certainly did to me. That is, until one realizes that at each level the budget holder must demonstrate that the funds are being used for their intended purpose. There can be no "banking" of money, since each project and program must compete for the dollars available at any one time; it's not the PM's money, he or she merely has use of that money to provide the intended system. Unfortunately, piggybacking significant changes (and constructive changes) onto the original scope is common in project management. Customers want what they want, and business wants that business. (More on this below.) As a result, the quid pro quo is: you want this new thing? Okay, but you will now have to manage risk based on the introduction of new requirements. Risk handling, then, will most often lead to increased duration. This can and often does result in a non-virtuous spiral in which requirements changes lead to cost growth and project risk, which lead to budget marks that restrict overall project flexibility, which tend to lead to additional duration. A project under these circumstances finds itself either pushed to the point of not being deployed, or deployed many years after the system needed to be in place, at much greater overall cost than originally anticipated.
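
Here is a minimal worked example, with invented figures, of why the mark bites: the total scope grows while the reserve available for unanticipated risk shrinks, so flexibility declines from both directions.

# Illustrative sketch only: how a budget mark equal to the risk introduced by a
# requirements change eats into management reserve and reduces flexibility.
# All figures are invented.

budget_base = 500.0          # total allocated budget ($M)
management_reserve = 50.0    # contingency held for unanticipated risk ($M)

new_scope_cost = 40.0        # the added requirement's estimated cost ($M)
risk_from_change = 12.0      # internal risk introduced by the change ($M)

# The change grows total scope...
total_scope = budget_base + new_scope_cost

# ...while the mark (per the rule of thumb above, equal to the induced risk)
# is taken out of management reserve.
management_reserve -= risk_from_change

flexibility = management_reserve / total_scope
print(f"Total scope after change: {total_scope:.0f}")
print(f"Management reserve after mark: {management_reserve:.0f}")
print(f"Flexibility (reserve as share of scope): {flexibility:.1%}")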

As an alternative, by making improvements stand on their own merits, a proper cost-benefit analysis can be completed to determine whether the improvement is timely and how it measures up against the latest alternative technologies available. It becomes its own project and not a parasite feeding off of the main effort. This is known as the iterative approach, and those in software development know it very well: you determine the problem that needs to be solved, figure out the features and approach that provide the 80% solution, and work to get it done. Improvements can come after version 1.0; coding is not a welfare program for developers, as the Agile Cult would have it. The ramifications for project and program managers are apparent: they must not only be aware of the operational and technical aspects of their efforts, but also know the financial impacts of their decisions and take those into account. Failure to do so is a recipe for self-inflicted disaster.

This leads us to the next point.

b.  In the last 20+ years, major projects have found that the time from initial development to production has increased several times over. For example, the poster child for this phenomenon in the military services is the F-35 Lightning II jet fighter, also known as the Joint Strike Fighter (JSF), which will continue to be in development at least through 2019 and perhaps into 2021. From program inception in 2001 to Initial Operational Capability (IOC), it will be at least 15 years before the program is ready to deploy and go to production. This scenario is being played out across the board in both government and industry for large projects of all types, with few exceptions. In particular, software projects tend either to fail outright or to fall short of their operational goals in the overwhelming majority of cases. This suggests that, aside from the typical issues of configuration control, project stability, and rubber baselining (and aside from the self-reinforcing cost-growth culture of the Agile Cult), there are larger underlying causes involved than simply contracting systems, though those are probably a contributing factor.

From a hardware perspective, in terms of military strategy, there may be a very good reason why it doesn't matter that certain systems are not deployed immediately. That reason is that, once deployed, they are expensive to maintain logistically. The logistics of deployed systems will compete for dollars that could be better spent in developing, but not deploying, new technologies. The answer, of course, is somewhere in between: you can't use that notional jet fighter when you needed it half a world away yesterday.

c.  Where we can see the effects on behavior from an acquisition systems perspective is in the comparison of independent estimates to what is eventually negotiated. For example, one military service recently gave the example of a program in which the confidential independent estimate was $2.1 billion. The successful commercial contractor team, let's call them Team A, whose proposal was deemed technically acceptable, made an offer at $1.2 billion, while the unsuccessful contractor team, Team B, offered near the independent estimate. Months later, thanks to constructive changes, the eventual cost of the contract will be at or slightly above the independent estimate, based on an apples-to-apples comparison of the scope. Thus it is apparent that Team A bought into the contract. Apparently, honesty in proposal pricing isn't always the best policy.
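
Using the figures cited above (and an assumed offer near the estimate for Team B), here is a minimal sketch of the kind of sanity check that flags a probable buy-in. The twenty-five percent band is hypothetical, not a regulatory threshold.

# Illustrative sketch only, using the figures cited above: flagging offers that
# fall far enough below the independent estimate to suggest a buy-in.

independent_estimate = 2.1e9   # confidential government estimate ($)
offers = {
    "Team A": 1.2e9,           # figure from the example above
    "Team B": 2.0e9,           # assumed, per "near the independent estimate"
}

BAND = 0.25   # hypothetical: more than 25% below the IE warrants scrutiny

for team, price in offers.items():
    gap = (independent_estimate - price) / independent_estimate
    flag = "possible buy-in; question realism" if gap > BAND else "within expected range"
    print(f"{team}: ${price/1e9:.1f}B, {gap:.0%} below the IE -> {flag}")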

I have often been asked what the rationale could be for a contractor to "buy in," particularly on such large programs involving so much money. The answer, of course, is "it depends." Team A could have the technological lead in the systems being procured and be defending its territory; thus buying in, even without constructive changes, was deemed to be worth the tradeoff. Perhaps Team A was behind in the technologies involved and would use the contract as a means of financing its gap. Team A could have an excess of personnel with technical skills that are complementary to those needed for the effort but who are otherwise not employed within their core competency, so rather than lose them it was worth bidding at or near cost for the perceived effort. These are, of course, the most charitable assumed rationales, though they are also the ones that I have most often encountered.

The real question in this case is how, even given the judgment of the technical assessment team, the contracting officer would keep a proposal so far below the independent estimate within the competitive range. If the government's requirements are so vague that two experienced contracting teams can fall so far apart, it should be apparent that either the solicitation is defective or the scope is not completely understood.

I think it is this question that leads us to the more interesting aspects of acquisition, program, and project management. For one, I am certain that a large acquisition like the one described is highly visible and of import to the political system and elected officials. In the face of such scrutiny, it would take a procuring contracting officer (PCO) of great experience and internal fortitude, confident in their judgment, to reset the process after proposals had been received.

There is also pressure in contracting from influencers within the requiring organizations, which are under pressure to deploy systems to meet their needs as expeditiously as possible, especially after the fairly lengthy set of activities that must occur prior to the issuance of a solicitation. The development of a good set of requirements involves multiple stakeholders on highly technical issues and requires a great deal of coordination and development by a centralized authority. Absent such guidance, the method of approaching requirements can be defective from the start. For example, does the requiring organization write a Statement of Work, a Performance Work Statement, or a Statement of Objectives? Which contract type is most appropriate for the work being performed and the risk involved? Should there be one overriding approach or a combination of approaches based on the subsystems that make up the entire system?

But even given all of these internal factors, there are others that are unique to our own time. I think it would be interesting to see how these factors have affected the conditions that everyone in our discipline deems to be problematic. They include the reduced diversity of the industrial and information verticals upon which the acquisition and logistics systems rely; the erosion of domestic sources of expertise, manufactured materials, and commodities; the underinvestment in training and in personnel development and retention within government, which undermines necessary expertise; specialization within the contracting profession that separates the stages of acquisition into stovepipes, undermining continuity and cohesiveness; the issuance of patent monopolies that stifle and restrict competition and innovation; and unproductive rent-seeking behavior on the part of economic elites that undermines the effectiveness of R&D- and production-centric companies. Finally, they also include those government policies instituted since the early 1980s that support these developments.

The importance of any of these cannot be overstated, but let's take the issue of rent seeking, which has caused the "financialization" of almost all aspects of economic life, as it relates to what a contracting officer must face when acquiring systems. Private sector R&D, which in the past mostly fell in response to economic dislocations, but which has been in a downward trend since the late 1960s overall and especially since the mid-1980s, has fallen precipitously since the bursting of the housing bubble and the resulting financial crisis in 2007, with no signs of recovery. Sequestration and other austerity measures in FY 2015 will at the same time negatively impact public R&D, continuing the overall trend with no offset. This fall in R&D has a direct impact on productivity and undercuts the effectiveness of using all of the tools at hand to find existing technologies to offset the ones that require full R&D. This appears to have caused a rise in intrinsic risk in the economy as a whole for efforts of this type, and it is this underlying risk that we see at the micro and project management level.