Shake it Out – Embracing the Future in Program Management – Part One: Program and Project Management in the Public Interest

I heard the song from which I derived the title of this post sung by Florence and the Machine, and was inspired to sit down and write about what I see as the future of program management.

Thus, my blogging radio silence has ended as I begin to process and share my observations and essential achievements over the last couple of years.

Some of my reticence in writing has been due to the continual drumbeat of both outrageous and polarizing speech that had dominated our lives for four years. Combined with the resulting societal polarization, I was overwhelmed by the hyper-politicized environment which has fostered disinformation and dysfunction. Those who wish to seek my first and current word on this subject need only visit my blog post, “In Defense of Empiricism” at the AITS Blogging Alliance here.

It is hard to believe that I published that post four years ago. I stand by it today and believe that it remains as valid, if not more so, than it did when I wrote and shared it.

Finally, the last and most important reason for my relative silence has been that I have been hard at work putting my money and reputation where my blogging fingers have been—in the face of a pandemic that has transformed and transfigured our social and economic lives.

My company—the conduit that provides the insights I share here—is SNA Software LLC. We are a small, veteran-owned company and we specialize in data capture, transformation, contextualization, and visualization. We do it in a way that removes significant effort from these processes, ensures reliability and trust, incorporates off-the-shelf functionality that provides insight, and empowers the user by leveraging the power of open systems, especially in program and project management.

Program and Project Management in the Public Interest

There are two aspects to the business world that we inhabit: commercial and government; both, however, usually relate to some aspect of the public interest, which is our forte.

There are also two concepts about this subject to unpack.

The first is distinguishing between program and project management. In this concept, a program is an overarching effort that may consist of individual efforts that, together, will result in the production or completion of a system, whether that is a weapons system, a satellite, a spacecraft, or an engine. It could even be a dam or some other aspect of public works.

A project under this concept is a self-contained effort separated organizationally from the larger entity, which possesses a clearly defined start and finish, a defined and allocated budget, and a set of plans, a performance management feedback system, and overarching goals or “framing assumptions” that define what constitutes the state of being “done.”

Oftentimes the terms “program” and “project” are used interchangeably, but the difference for these types of efforts is important and goes beyond a shallow understanding of the semantics. A program will also consider the lifecycle of the program: the follow-on logistics, the interrelationship of the end item to other components that will constitute the deployed system or systems, and any iterative efforts relating to improvement, revision, and modernization.

A word on the term “portfolio” is also worth a mention in the context of our theme. A portfolio is simply a summary of the projects or programs under an organizational entity that has both reporting and oversight responsibility for them. They may be interrelated or independent in their efforts, but all must report in some way, either due to fiduciary, resource, or oversight concerns, to that overarching entity.

The second concept relates to the term “public interest.” Programs and projects under this concept are those that must address the following characteristics: legality, governance, complexity, integrity, leadership, oversight, and subject matter expertise. I placed these in no particular order.

What we call in modern times the “public interest” was originally called “public virtue” by the founders of the United States; it embodies the ideals of the American Revolution, upon which our experiment in democratic republicanism is built. It consists of conducting oneself in a manner in which the good of the whole—the public—outweighs personal interests and pursuits. Self-dealing need not apply.

This is no idealistic form of self-delusion: I understand, as do my colleagues, that we are, at heart, a commercial profit-making enterprise. But the manner in which we engage with government requires a different set of rules, and many of these rules are codified in law and ethical practice. While others do not always feel obliged to live by these rules, we govern ourselves and so choose to apply these virtues—and to seek to support and change our system to encourage such behavior so as to be the norm—even in direct interactions with government personnel where we feel these virtues have been violated.

Characteristics of Public Interest Programs

Thus, the characteristics outlined above apply to program and project management in the public interest in the following manner:

Legality: That Public Interest Programs are an artifact of law and statute and are specifically designed to benefit the public as a whole.

At heart, program and project management are based on contractual obligations, whether those instruments apply internally or externally. As a result, everyone involved in the program and project management discipline is, by default, part of the acquisition community and the acquisition process. The law that applies to all government acquisition systems is based on the Federal Acquisition Regulation (FAR). There are also oversight and fiduciary responsibilities that apply as a result of the need for accountability under the Congressional appropriations process, as well as ethical standards such as those under the Truth in Negotiations Act (TINA). While broad in the management flexibility they allow, violations of these statutes come with serious consequences; they thus serve as a basis for establishing hard and fast guardrails in the management of programs and projects. Individual government agencies and military services also publish additional standards that supplement the legal requirements. An example is the Department of Defense FAR Supplement (DFARS). Commercial entities that hold government contracts in relation to Program Management Offices (PMOs) must sign on to both FAR and agency contractual clauses, which then flow down to their subcontractors. Thus, the enforcement of these norms is both structured and consistent.

Governance: That the Organizational Structure and Disciplines deriving from Public Interest Programs are a result of both Contract and Regulatory Practice under the concept of Government Sovereignty.

The government and supplier PMOs are formed as a result of a contractual obligation for a particular purpose. Government contracting is unique since government entities are the sovereign. In the case of the United States, the sovereign is the elected government of the United States, which derives its legitimacy from the people of the United States as a whole. Constitutionally, the Executive Branch is tasked with the acquisition responsibility, but the manner and method of this responsibility is defined by statute.

Thus, during negotiations and unlike in commercial practice, the commercial entity is always the offeror and the United States always the party that either accepts or rejects the offer (the acceptor). This relationship has ramifications in contract enforcement and governance of the effort after award. It also allows the government to dictate the terms of the award through its solicitations. Furthermore, provisions from law establish cases where the burden for performance is on the entity (the supplier) providing the supplies and services.

Thus, the establishment of the PMO and oversight organizations has a legal basis, aside from considerations of best business practice. The details of governance within the bounds of legal guidance are those that apply through agency administrative law and regulation, oftentimes based on best business practice. These detailed practices of governance are usually established as a result of hard-learned experience: the establishment of disciplines (systems engineering and technical performance, planning, performance management, cost control, financial execution, schedule, and progress assessment), the periodicity of reporting, the manner of oversight, the manner of liaison between the supplier and government PMOs, and alignment to the organization’s goals.

Complexity: That Public Interest Programs possess a level of both technical and organizational complexity unequaled in the private sector.

Program and project management in government involves a level of complexity rarely found in similar non-governmental commercial efforts. Aligning the contractual requirements, as an example, to an assessment of the future characteristics of a fighter aircraft needed to support the U.S. National Defense Strategy, built on the assessments by the intelligence agencies regarding future threats, is a unique aspect of government acquisition.

Furthermore, while the government relies on the expertise of private industry for the systems that support national defense, as well as those that support space exploration, energy, and a host of other needs, the items being acquired under cost-type R&D contracts that involve program management are, by definition, those for which the necessary solutions are not readily available as commercial end items.

Oftentimes these requirements are built onto and extend existing off-the-shelf capabilities. But given that government investment in R&D represents the majority of this type of spending in the economy, absent it, technology and other efforts directed to meeting the defense, economic, societal, climate, and space exploration challenges of the future would most likely go unmet, or would benefit only a portion of the populace. The federal government uniquely possesses the legal legitimacy, resources, and expertise to undertake such R&D that, pushing the envelope on capabilities, involves both epistemic and aleatory risk that can be managed through the processes of program management.

Integrity: The conduct of Public Interest Programs demands the highest level of commitment to a culture of accountability, impartiality, ethical conduct, fiduciary responsibility, democratic virtues, and honesty.

The first level of accountability resides in the conduct of the program manager, who is the locus of integrity within the program management office. This requires a focus on the duties the position demands as a representative of the Government of the United States. Furthermore, the program manager must ensure that the program team operates within the constraints established by the program’s or project’s contractual commitments, and that it continues to work toward meeting the program goals that align with the stated interests and goals of the organization. That these duties are exercised regardless of self-interest is the basis of integrity.

This is not an easy discipline, and individuals oftentimes cannot separate their own interests from those of their duties. Yet, without this level of commitment, the legitimacy of the program office and the governmental enterprise itself is threatened.

In prior years, as an active-duty Supply Corps officer, I came across cases where individuals in civil service or among the commissioned officer community confused their own interests—for promotion, for self-aggrandizement, for ego—with those duties demanded of their rank or position. Such confusions of interests are serious transgressions. With contracted-out positions within program offices adding consulting and staffing firms into the mix, with their oftentimes diversified interests and portfolios, an additional layer of challenges is presented. Self-promotion, competition, and self-dealing have all too often become blatant, and program managers would do well to enforce strict rules regarding such behavior.

The pressures of exigency are oftentimes the main cause of the loss of integrity of the program or project. Personal interrelationships and human resource management issues can also undermine good order and discipline necessary for the program or project to organize itself into a cohesive, working team that is focused on a common vision.

Key elements mentioned in our opening thesis are ethical conduct and adherence to democratic virtues, which include acceptance of all members of the team regardless of color, ethnicity, race, sexual identity, religion, or place of national origin. People deserve the respect and decency deriving from their basic human rights and human dignity, as well as from their position. Added to these elements are honesty and the willingness to accept and report bad news, which is essential to integrity.

An organization committed to the principle of accountability will seek to measure and ensure that the goals of the program or project are being met, and that ameliorative measures are taken to correct any deficiencies. Since these efforts oftentimes involve years of effort involving significant sums of public monies, fiduciary integrity is essential to this characteristic.

All of these elements can and should exist in private, commercial practices. The difference that makes this a unique characteristic of program management in the public interest is the level of scrutiny, reporting, and review that is conducted: from oversight agencies within the Executive Department of the government, to the Congressional oversight, hearing, and review processes, agency review, auditing and reporting, and inquiries and critiques by the press and the public. Public interest program management is life in a fishbowl, except in the most secret efforts, and even those will eventually be subject to scrutiny.

As with a U.S. Navy ship that makes a port of call in a foreign country, the conduct of the crew will reflect not only on themselves and their ship, but on the United States; so it is also with our program offices. Thus, systems of programmatic governance and business management must anticipate in their structure the level of adherence required. Given the inherent level of risk involved in these efforts, and given the normal amount of error human systems create even with good intentions and expertise, establishing a system committed to the elements of integrity creates a self-correcting one better prepared to meet the program’s or project’s challenges.

Leadership: Programs in the Public Interest differ from equivalent commercial efforts in that management systems and incentives based on profit- and shareholder-orientations do not exist. Instead, a special kind of skillset is required that includes good business management principles and skills combined with highly developed leadership traits.

Management skills tend to be a subset of leadership, though in business schools and professional courses they tend to be addressed as co-equal. This is understandable in commercial enterprises that focus on the capitalistic pressures regarding profit and market share.

Given the unique pressures imposed by the elements of integrity, the program manager and the program team are thrown into a situation that requires a focus on the achievement of organizational goals. In the case of program and project management, this will be expressed in the form of a set of “framing assumptions” that roll into an overarching vision.

A program office, of course, is more than a set of systems, practices, and processes. It is, first and foremost, a collection of individuals consisting of subject matter experts and professionals who must be developed into a team committed to the vision. The effort to achieve this team commitment is one of the more emotional and compelling elements that comprise leadership.

Human systems are complex, adaptive ones that react to, and are shaped by, both incentives and sanctions. Every group, especially one involving creative and talented people, starts out as a collection of individuals with the interrelations among the members in an immature state. Underlying the expression of various forms of ambition and self-identification among mature individuals is the basic human need for social acceptance, born from the individual personal need for love. This motivation exists psychologically in all individuals except for sociopaths. It is also the basis for empathy and the acceptance of the autonomy of others, which form the foundation for team building.

The goal of the leader is to encourage maturity among the members of the group. The result is to create that overused term “synergy.” This is accomplished by the leader doing those things necessary to develop the members of the group in a way that fosters trust, acceptance, and mutual respect. Admiral James L. Holloway, Jr., in his missive on Naval Leadership, instructed his young officers to eschew any concept of perfectionism in people. People make mistakes. We know this if we are to be brutally honest about our own experiences and actions.

Thus, intellectual honesty and an understanding of what motivates people within their cultural mores are, above all else, essential to good leadership. Americans, by nature, tend to be skeptical and independently minded. They require the level of explanation and due diligence that is necessary to win over their commitment to a goal or vision. When it comes to professionals operating within public service in government—who take an oath to the Constitution and our system of laws—the ability to lead tends to be more essential than just good management skills, though the latter are by no means unimportant. Management in private enterprise assumes a contentious workplace of competing values and interests, and oftentimes fosters it.

Program and project management in the public interest cannot succeed in such an environment. It requires a level of commitment to the goals of the effort regardless of personal values or interests among the individual members of the team. That they must be convinced to this level of commitment ensures that the values of leadership not only operate at the top of the management chain, but also at each of the levels and lateral relationships that comprise the team.

The shorthand for leadership in this culture is that the leader is “working their way out of their job,” and that “in order to be a good leader one must be a good follower,” meaning that all members of the team are well informed, that their contributions, expertise, and knowledge are acknowledged and respected, that individual points of failure through the irreplaceable-person syndrome are minimized, and that each member of a team or sub-team can step in or step up to keep the operation functioning. The motivating concept in these situations is the interest of the United States, in lieu of a set of stockholders or some fiduciary reward.

Finally, there is the concept of the burden of leadership. Responsibility can be delegated, but accountability cannot. Leadership in this context entails an obligation to take responsibility for both the mission of the organization and the ethical atmosphere established in its governance.

Oversight: While the necessity for integrity anticipates the level of accountability, scrutiny, oversight, and reporting for Programs in the Public Interest, the environment this encompasses is unique compared to commercial entities.

The basis for acquisition at the federal level resides in the Article Two powers of the president as the nation’s Chief Executive. Congress, however, under its Article One powers, controls appropriations and passes laws related to the processes, procedures and management of the Executive Branch.

Flowing from these authorities, the agencies within the federal government have created offices for the oversight of the public’s money, the methods of acquisition of supplies and services, and the management of contracts. Contracting Officers are given authority through a warrant to exercise their acquisition authority under the guidance and management of a senior acquisition authority.

Unlike in private business, the government operates under the concept of Actual Authority. That is, no one may commit the government except those possessing a warrant. Program Managers are appointed to provide control and administration of cost type efforts, especially those containing R&D, to shepherd these efforts over the course of what usually constitutes a multi-year effort. The Contracting Officer and/or the senior acquisition authority in these cases will delegate contract administration authority to the Program Manager. As such, it is a very powerful position.

The inherent powers of the Executive and Legislative Branches of government create a tension that is resolved through a separation of powers and the ability of one branch to—at least in most cases—check the excesses and abuses of the other: the concept of checks and balances, especially through the operation of oversight.

When these tensions cannot be resolved within the processes established for separation of powers, the third branch of government becomes involved: this is the Judicial Branch. The federal judiciary has the ability to review all laws of the United States, their constitutionality, and their adherence to the letter of the law in the case of statute.

Wherever power exists within the federal government there exists systems of checks and balances. The reason for this is clear, and Lord Acton’s warning about power corrupting and absolute power corrupting absolutely is the operational concept.

Congress passes statutes and the Judiciary interprets the law, but it is up to the Executive Branch through the appointed heads of the various departments of government down through the civil service and, in the case of the Department of Defense, the military chain of command under civilian authority, to carry out the day-to-day activities in executing the laws and business of the government. This creates a large base of administrative law and procedure.

Administrative Law and the resulting procedures in their implementation come about due to the complexities in the statutes themselves, the tests of certain provisions of the statutes in the interplay between the various branches of government, and the practicalities of execution. This body of law and procedure is oftentimes confused with “regulation” in political discussions, but it is actually the means of ensuring that the laws are faithfully executed without undue political influence. It is usually supplemented by ethical codes and regulations as well.

As a part of this ecosystem, the Program in the Public Interest must establish a discipline related to self-regulation, due diligence, good business practice, fiduciary control, ethical and professional conduct, responsibility, and accountability. Just as the branches of the federal government are constructed to ensure oversight and checks-and-balances, this also exists with normative public administration within the Executive Branch agencies.

This is often referred to, both positively and (mostly among political polemicists) negatively, as the bureaucracy. The development of bureaucracies in government is noted by historians and political scientists as an indication of political stability, maturity, and expertise. Without bureaucracies, governments tend to be capricious and their policies uncertain. The practice of stare decisis—the importance of precedent in legal decisions—is also part and parcel of stability. Government power can be beneficial or coercive. Resting action on laws and not the whims or desires of the individual person is essential to the good order and discipline of the federal government.

As such, program and project managers, given the extensive latitude and inherent powers of their position, are subject to rigorous reporting, oversight, and accountability regimes in the performance of their duties. In R&D cost-type program and project management efforts, the risk is shared between the supplier and the government. And the government flows down this same regime to the contractor to ensure the integrity of the effort in the expenditure of public monies and in the performance and delivery of public contracts.

This leads us to the last important aspect of oversight: public scrutiny, which also includes the press as the Fourth Estate. When I was a young Lieutenant in the Navy working in contracts, the senior officer to whom I was assigned often remarked: “Never do anything that would cause you to be ashamed were it to end up being read by your grandmother in the Washington Post.”

Unlike private business, where law, contractual obligation, and fiduciary responsibility are the main pressures on tolerated behavior, the government and its actions are—and must be—under constant public scrutiny. It is expected. Senior managers who chafe at this check on official conduct misunderstand their role. Even the appearance of malfeasance or abuse can cause one to steer into the rocks and shoals.

Subject Matter Expertise: Given the interrelated characteristics of legality, governance, complexity, integrity, leadership, and oversight—linked to the development of a professional, permanent bureaucracy acting through a non-partisan civil service—the practices necessary to successfully shepherd such efforts have produced areas of expertise and specialization. These areas provide a basis for leveraging technology in gaining insight into meeting all of the requirements necessary to the good administration and control of Program Management in the Public Interest.

The structures and practices of program and project management are reflected in the private economy. Some of this is contractually prescribed and some of it is based on best business practice learned through hard experience. In the interplay of government and industry, most often an innovation in one has been refined and improved in the other, only to find its way back to practice on the originating “side” of the transaction.

Initially in our history this cross-fertilization occurred through extraordinary wartime measures: the standardization of rifled weaponry passed down by Thomas Jefferson and Eli Whitney, and the railroad track gauge standards issued by the Union government during the Civil War, are just two examples that turned out to provide a decisive advantage over laissez faire and libertarian approaches.

As the complexity of private business concerns, particularly in the international sphere, began to mimic—and in many cases surpass—the size and technical complexity of many individual government efforts, partnerships with civil authorities and private businesses saw the need for industry standardization for both electrical and non-electrical components and processes. The former was particularly important in the “Current Wars” between Edison and Westinghouse.

These simple and earlier examples highlight the great conundrum of standardization of supply, practice and procedure in acquisition: the need for economy through competition of many sources for any particular commodity or item weighed against the efficiency and interoperability needed to continue operations. Buying multiple individual items with the same function but produced using differing standards creates a nightmare of suboptimization. Overly restrictive standards can and have had the effect of reducing competition and stifling innovation, especially if the standard is proprietary.

In standards setting there are several interests involved that must be taken into account: the technical expertise (technical, qualitative, etc.) that underlies the standard, the public interest in ensuring a healthy marketplace that rewards innovation, diversity, and price competitiveness, the need for business-to-business cooperation and synergy in the marketplace, and the preponderance of practice, among others. In the Defense industry this also includes national security concerns.

This last consideration provides an additional level of tension between private industry and government interests. In the competition for market share and market niches, businesses are playing a zero-sum game that shifts between allies and competitors. Still, the interest of individual actors is focused on making a proprietary product or service dominant in the target market.

Government, on the other hand, particularly one that operates as a republic based on democratic processes and virtues and a commitment to equal rights, has a different set of interests that are, in many cases, diametrically opposed to those of individual players in the marketplace. Government needs and desires a broad choice of sources for what it needs, while ensuring that qualitative standards are met at a fair and reasonable price. When it does find innovation, it seeks to reward it, but only for the limited terms, conditions, and period of the contractual instrument.

The greater the risk in these cases—especially when cost risk is shared—the greater the need for standards, especially qualitative ones. The longer the term of the effort, the greater the need for checks and balances through evaluation, review, and oversight. The greater the dollar value, the greater importance for fiduciary and contractual accountability.

Thus, subject matter expertise has evolved over time, aligned with the functions and end items being developed and delivered. These areas include:

Estimating – A critical part of program and project management, this is a discipline with highly specialized quantitative methods for estimating and projecting project costs, resources, and duration. It is part of the planning phase prior to program or project inception. It can be used to support budget planning prior to program approval, during negotiations and, after award, to inform the project plan.
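To make this concrete for readers outside the discipline, here is a minimal sketch of one common quantitative technique, a three-point (PERT-style) estimate rolled up across work elements. The task names and dollar values are hypothetical, and the calculation is illustrative only rather than a description of any particular estimating system.

```python
# Minimal sketch of a three-point (PERT-style) cost estimate.
# Task names and dollar values are hypothetical illustrations.
from math import sqrt

# (optimistic, most likely, pessimistic) cost in $K for each work element
estimates = {
    "requirements analysis": (120, 150, 210),
    "detailed design":       (300, 380, 520),
    "fabrication":           (800, 950, 1400),
    "integration and test":  (250, 330, 480),
}

total_mean = 0.0
total_var = 0.0
for task, (opt, likely, pess) in estimates.items():
    mean = (opt + 4 * likely + pess) / 6   # PERT expected value
    std = (pess - opt) / 6                 # PERT standard deviation
    total_mean += mean
    total_var += std ** 2                  # assumes independent work elements
    print(f"{task:25s} expected = {mean:7.1f}  sigma = {std:5.1f}")

print(f"\nProgram estimate = {total_mean:.1f} +/- {sqrt(total_var):.1f} ($K, one sigma)")
```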

Systems Engineering – as described by the International Council of Systems Engineering, “a transdisciplinary and integrative approach to enable the successful realization, use, and retirement of engineered systems, using systems principles and concepts, and scientific, technological, and management methods.”

As it relates to program and project management, systems engineering produces the technical documents that provide the basis and structure for lifecycle management of the end item application, including the application of technical standards, measures of effectiveness, measures of performance, key performance parameters, and technical performance measures. In simplistic terms, systems engineering defines when the item under R&D reaches the state of “done.”
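To illustrate the point in simple terms, the sketch below tracks a single hypothetical technical performance measure against a planned maturity profile; the measure, milestones, values, and threshold are all assumptions made for the example, not drawn from any actual program.

```python
# Hypothetical technical performance measure (TPM): vehicle dry weight in kg.
# The planned profile, achieved values, and threshold are illustrative assumptions only.
planned_profile = {  # planned weight at each design review
    "SRR": 1450, "PDR": 1400, "CDR": 1360, "TRR": 1330,
}
threshold = 1350     # maximum acceptable weight at delivery
achieved = {"SRR": 1470, "PDR": 1425, "CDR": 1380}   # measured to date

for milestone, plan in planned_profile.items():
    actual = achieved.get(milestone)
    if actual is None:
        print(f"{milestone}: plan {plan} kg, not yet measured")
        continue
    variance = actual - plan
    status = "on plan" if variance <= 0 else f"{variance} kg over plan"
    print(f"{milestone}: plan {plan} kg, achieved {actual} kg -> {status}")

# "Done" in TPM terms: the latest achieved value must meet the threshold.
latest = list(achieved.values())[-1]
print("meets threshold" if latest <= threshold else "does not yet meet threshold")
```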

Financial Management – at the program and project management level, the planning, organizing, directing, and controlling of financial activities such as the procurement and utilization of funds, in adherence to the limitations of law and consistent with the terms and conditions of the contract and its ancillary planning and execution documents.

At its core, financial management within this discipline includes the planning, programming, budgeting, and execution process for the financial requirements of successful program execution. As with any enterprise, ensuring cash flow for required activities, with the right type of money as determined by Congressional appropriation, demands a unique and specialized skillset under program management in the public interest. Oftentimes the lack of funds necessary to address a particular programmatic risk or challenge can be just as decisive to program execution and success as any technical challenge.

Risk and Uncertainty – the concepts of risk and uncertainty have evolved over time. Under classical economics (both Keynes and Knight), risk is where all of the future events and consequences of an action are known, but where specific outcomes are unknown. As such, probability calculus is applied to inform risk management: mitigation and handling. Uncertainty, under this definition, concerns unknowable events that will result from our actions and is implicit in human action. There is no probability calculus or risk buy-down that can address areas of uncertainty. These definitions are also accepted under the concept of complexity economics.

My good colleague Glen Alleman (2013) at his blog, Herding Cats, casts risk as a product of uncertainty. This is a reordering of definitions, but not unuseful. Under Glen’s approach, uncertainty is broken into aleatory and epistemic uncertainty. The first—aleatory—comes from a random process, what Keynes, Knight, et al. would define as classical uncertainty. The second—epistemic—comes from lack of knowledge. The first is irreducible, which is consistent with classical economics and complexity economics; the second is subject to probability analysis and risk handling methodologies.

Both risk and uncertainty—aleatory and epistemic—occur within all phases and under each discipline within the project management environment. Any human action involves these forces of cause-and-effect and uncertainty, which limit our actions under the concept of “free will.”
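To make the distinction concrete, here is a minimal Monte Carlo sketch that treats activity durations as draws from probability distributions, which is the usual way aleatory variability is handled in schedule risk analysis. The activities, durations, and distributions are hypothetical, and the sketch stands in for, rather than reproduces, any particular risk tool.

```python
# Minimal Monte Carlo sketch: aleatory variability in activity durations.
# Activities, durations (days), and distributions are hypothetical.
import random

random.seed(1)

def simulate_once():
    # Three sequential activities, each with a triangular(low, high, mode) duration.
    design = random.triangular(40, 75, 55)
    build  = random.triangular(60, 120, 80)
    test   = random.triangular(20, 50, 30)
    return design + build + test

runs = sorted(simulate_once() for _ in range(10_000))
mean = sum(runs) / len(runs)
p80 = runs[int(0.8 * len(runs))]   # 80th-percentile finish, a common risk-adjusted target

print(f"mean duration = {mean:.1f} days, 80% confidence = {p80:.1f} days")
```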

Planning and Scheduling – usually these have been viewed as separate entities, but they are, in fact, part of a continuum, as are all of the disciplines mentioned, but more on that later in these blogs.

Planning involves the ability to derive the products of both the contract terms and conditions, and the systems engineering process. The purpose is to develop a high-level, time-phased plan that captures program events, deliverables, requirements, significant accomplishment criteria, and basic technical performance management achievement that will be the basis for a more detailed integrated master schedule.

The scheduling discipline is tasked with further delineating the summary tasks into schedule activities based on critical path methodology. A common refrain when I worked on the government side of program management was that you cannot eat an elephant in one gulp: you have to eat it one piece at a time.

As it relates to this portion of project methodology, I have, over the years, heard people say that planning and scheduling is more of an art than a science. Yet the artifacts upon which our planning documents rest exist as part of the acquisition process, and our systems and procedures are mature and largely standardized. The methods of systems engineering are precise and consistent.

The lexicon of planning and scheduling, regardless of the software applications or manual methods used, describe the same phenomenon and concepts, despite slightly different—and oftentimes proprietary—terminology. The concept of critical path analysis is well documented in the literature with slight, though largely insignificant, differences in application.

What appears as art is, in reality, a process that involves a great deal of complexity because these are the documents upon which all of the moving parts of the program are documented. Rather than art, it is a discipline that requires attention to detail and collaboration, aside from the power of computing.
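As a small illustration of the critical path methodology referenced above, the sketch below performs a forward pass through a hypothetical activity network to find the earliest project finish; the activities, durations, and logic ties are invented for the example and do not represent any real integrated master schedule.

```python
# Minimal forward-pass critical path sketch over a hypothetical activity network.
# Durations are in working days; activity names and logic are illustrative only.
activities = {
    # name: (duration, [predecessors])
    "award":         (0,   []),
    "design":        (60,  ["award"]),
    "long-lead buy":  (90,  ["award"]),
    "fabrication":   (120, ["design", "long-lead buy"]),
    "integration":   (45,  ["fabrication"]),
    "test":          (30,  ["integration"]),
}

early_finish = {}

def finish(name):
    # Recursive forward pass: an activity finishes after its latest predecessor.
    if name not in early_finish:
        duration, preds = activities[name]
        start = max((finish(p) for p in preds), default=0)
        early_finish[name] = start + duration
    return early_finish[name]

project_finish = max(finish(a) for a in activities)
print(f"earliest project finish: day {project_finish}")
# The critical path is the chain of activities whose early finishes drive that date.
```

In practice an integrated master schedule contains thousands of activities plus the backward pass, float calculations, calendars, and resource constraints, but the underlying principle is the same.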

Resource Management – as with planning and scheduling, resource management consists of a detailed accounting of the people, equipment, monies, and suppliers that are required to achieve the activities detailed in the program schedule.

In the detailed and specialized planning of projects and programs in the public interest, these efforts are cross-referenced and further delineated to the actual work that needs to be completed. A Work Breakdown Structure (WBS) is the method of time-phasing the work using detailed tasks that integrate scope, cost, and schedule at the lowest level of achievement.
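A minimal sketch of what this cross-referencing can look like in data terms follows: each hypothetical work package carries its WBS element, its schedule activities, its time-phased budget, and its assigned resources, so that scope, cost, and schedule resolve at the same level. The structure and values are assumptions for illustration, not a prescribed format.

```python
# Hypothetical work-package records integrating scope (WBS), schedule, cost, and resources.
work_packages = [
    {
        "wbs": "1.2.3",
        "title": "Avionics software build 2",
        "control_account": "CA-120",
        "schedule_activities": ["A1040", "A1050"],            # activity IDs in the IMS
        "budget_by_month": {"2024-01": 80, "2024-02": 95},     # $K, time-phased
        "resources": {"sw_engineer": 4, "test_engineer": 1},   # headcount
    },
    {
        "wbs": "1.2.4",
        "title": "Avionics integration lab",
        "control_account": "CA-120",
        "schedule_activities": ["A1060"],
        "budget_by_month": {"2024-02": 60, "2024-03": 70},
        "resources": {"test_engineer": 2},
    },
]

# Roll the time-phased budget up to the control account.
rollup = {}
for wp in work_packages:
    for month, amount in wp["budget_by_month"].items():
        rollup[month] = rollup.get(month, 0) + amount

print("Control account CA-120 time-phased budget ($K):", rollup)
```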

Baselining and Performance Management – essential for project control in this environment. In this case, project and program schedule, cost, and resources are (ideally) risk adjusted and a performance management baseline is established: the basis for the assessment and control of the project.

This leads us to the methodology that is always on the cusp of being the Ozymandias of program management: earned value management or EVM. The discipline of EVM arose out of the Space Age era of the 1960s. The premise is simple: when undertaking any complex effort there is a finite amount of money and resources, and a target date for the needed end item. We need a method to determine whether the actual work performed in terms of budgeted resources and time is tracking to the plan to produce the desired end item application.

When looking at the utility of EVM, one must ask: while each of the disciplines noted above also tracks achievement over the lifecycle of the project or program, do any combine an analysis against budgeted time and resources? The answer is no, and so EVM is essential to the management of these efforts.
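For readers less familiar with the mechanics, here is a minimal sketch of the basic earned value arithmetic using the standard textbook quantities; the monthly values are hypothetical, and a real EVM system involves far more than these few formulas.

```python
# Minimal earned value sketch using the standard textbook quantities.
# Values ($K) are hypothetical illustrations.
BCWS = 1000   # budgeted cost of work scheduled (planned value to date)
BCWP = 880    # budgeted cost of work performed (earned value to date)
ACWP = 960    # actual cost of work performed (actuals to date)
BAC  = 5000   # budget at completion

cv  = BCWP - ACWP   # cost variance (negative = overrun)
sv  = BCWP - BCWS   # schedule variance in dollar terms
cpi = BCWP / ACWP   # cost performance index
spi = BCWP / BCWS   # schedule performance index
eac = BAC / cpi     # a simple CPI-based estimate at completion

print(f"CV = {cv:+.0f}  SV = {sv:+.0f}  CPI = {cpi:.2f}  SPI = {spi:.2f}  EAC = {eac:.0f}")
```

Even this toy example shows the limits of the method on its own: nothing in these formulas speaks to the critical path, technical maturity, or the adequacy of funding.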

Still, our other disciplines also track important information that is not captured by EVM. Thus, the entire corpus of our disciplines represents the project and program ecosystem. These processes, procedures, and the measures derived from them are interconnected. It is this salient fact that points us in the direction regarding the future of program management.

Conclusions from Part One

Given that we have outlined the unique and distinctive characteristics of public interest program management, the environment and basis upon which such program management rests, and the highly developed disciplines that have evolved as a result of the experience in system development, deployment, and lifecycle management, our inquiry must next explore the evolutionary nature of the program organization itself. Once identified and delineated, we must then determine the place of program organization within the context of developments in systems and information theory which will give us insight into the future of program management.

Ground Control from Major Tom — Breaking Radio Silence: New Perspectives on Project Management

Since I began this blog I have used it as a means of testing out and sharing ideas about project management and information systems, as well as to cover occasional thoughts about music, the arts, and the meaning of wisdom.

My latest hiatus from writing was due to the fact that I was otherwise engaged in a different sort of writing–tech writing–and in pursuing some mathematical explorations related to my chosen vocation, aside from running a business and–you know–living life.  There are only so many hours in the day.  Furthermore, when one writes over time about any one topic it seems that one tends to repeat oneself.  I needed to break that cycle so that I could concentrate on bringing something new to the table.  After all, it is not as if this blog attracts a massive audience–and purposely so.  The topics on which I write are highly specialized and the members of the community that tend to follow this blog and send comments tend to be specialized as well.  I air out thoughts here that are sometimes only vaguely conceived so that they can be further refined.

Now that that is out of the way, radio silence is ending until, well, the next contemplation or massive workload that turns into radio silence.

Over the past couple of months I’ve done quite a bit of traveling, and so have some new perspectives and trends that I noted and would like to share, which will (in all likelihood) be the basis of future, more in-depth posts.  But here is a list that I have compiled:

a.  The time of niche analytical “tools” as acceptable solutions among forward-leaning businesses and enterprises is quickly drawing to a close.  Instead, more comprehensive solutions that integrate data across domains are taking the market and disrupting even large players that have not adapted to this new reality.  The economics are too strong to stay with the status quo.  In the past the barrier to integration of more diverse and larger sets of data was the high cost of traditional BI, with its armies of data engineers and analysts providing marginal value that did not always square with the cost.  Now virtually any data can be accessed and visualized.  The best solutions provide pre-built domain knowledge for targeted verticals, and they will lead and win the day.

b.  Along these same lines, apps and services designed around the bureaucratic end-of-month chart submission process are running into the new paradigm among project management leaders that this cycle is inadequate, inefficient, and ineffective.  The incentives are changing to reward actual project management in lieu of project administration.  The core fallacy of apps that provide standard charts based solely on users’ perceptions of looking at data is that they assume the PM domain knows what it needs to see.  The new paradigm is instead to provide a range of options based on the knowledge that can be derived from data.  Thus, while the options in the new solutions provide the standard charts and reports that have always informed management, KDD (knowledge discovery in databases) principles are opening up new perspectives in understanding project dynamics and behavior.

c.  Earned value is *not* the nexus of Integrated Project Management (IPM).  I’m sure many of my colleagues in the community will find this statement to be provocative, only because it is what they are thinking but have been hesitant to voice.  A big part of their hesitation is that the methodology is always under attack by those who wish to avoid accountability for program performance.  Thus, let me make a point about Earned Value Management (EVM) for clarity–it is an essential methodology in assessing project performance and the probability of meeting the constraints of the project budget.  It also contributes data essential to project predictive analytics.  What the data shows from a series of DoD studies (currently sadly unpublished), however, is that it is planning (via an Integrated Master Plan) and scheduling (via an Integrated Master Schedule) that first tie together the essential elements of the project, and that record the baking in of risk within the project.  Risk manifested in poorly tying contract requirements, technical performance measures, and milestones to the plan, and then manifested in poor execution, will first be recorded in schedule (time-based) performance.  This is especially true for firms that apply resource-loading in their schedules.  By the time this risk translates and is recorded in EVM metrics, the project management team is performing risk handling and mitigation to blunt the impact on the performance management baseline (the money).  So this still raises the question: what is IPM?  I have a few ideas and will share those in other posts.

d.  Along these lines, there is a need for a Schedule (IMS) Gold Card that provides the essential basis of measurement of programmatic risk during project execution.  I am currently constructing one with collaboration and will put out a few ideas.

e.  Finally, there is still room for a lot of improvement in project management.  For all of the gurus, methodologies, consultants, body shops, and tools that are out there, according to PMI, more than a third of projects fail to meet project goals, almost half fail to meet budget expectations, fewer than half finish on time, and almost half experience scope creep, which, I suspect, probably caused “failure” to be redefined and under-reported in their figures.  The assessment for IT projects is also consistent with this report, with CIO.com reporting that more than half of IT projects fail in terms of meeting performance, cost, and schedule goals.  From my own experience and that of my colleagues, the need to solve the standard 20-30% slippage in schedule and similar overrun in costs is an old refrain.  So too is the frustration that it can take 23 years to deploy a new aircraft.  A .5 CPI and SPI (to use EVM terminology) is not an indicator of success.  What this indicates, instead, is that there need to be some adjustments and improvements in how we do business.  The first would be to adjust incentives to encourage and reward the identification of risk in project performance.  The second is to deploy solutions that effectively access and provide information to the project team that enable them to address risk.  As with all of the points noted in this post, I have some other ideas in this area that I will share in future posts.

Onward and upward.

Days of Future Passed — Legacy Data and Project Parametrics

I’ve had a lot of discussions lately on data normalization, including being asked the question of what constitutes normalization when dealing with legacy data, specifically in the field of project management.  A good primer can be found at About.com, but there are also very good older papers out on the web from various university IS departments.  The basic principles of data normalization today consist of finding a common location in the database for each value, reducing redundancy, properly establishing relationships among the data elements, and providing flexibility so that the data can be properly retrieved and further processed into intelligence in such a way that the objects produced possess significance.
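A minimal sketch of what those principles can look like in practice follows: a flat legacy extract, with supplier names and WBS titles repeated on every row, is split into reference tables and a fact table keyed to them. The field names and rows are hypothetical, and the target schema is only one reasonable normalization among several.

```python
# Minimal normalization sketch: split a flat legacy extract into reference and fact tables.
# Field names and rows are hypothetical illustrations of typical legacy project data.
legacy_rows = [
    {"supplier": "Acme Aero", "wbs": "1.2.3", "wbs_title": "Avionics SW", "period": "2024-01", "acwp": 96.0},
    {"supplier": "Acme Aero", "wbs": "1.2.3", "wbs_title": "Avionics SW", "period": "2024-02", "acwp": 101.5},
    {"supplier": "Acme Aero", "wbs": "1.2.4", "wbs_title": "Integration Lab", "period": "2024-02", "acwp": 58.0},
]

suppliers, wbs_elements, facts = {}, {}, []

for row in legacy_rows:
    # Each distinct value gets one home; the fact table keeps only keys and measures.
    s_id = suppliers.setdefault(row["supplier"], len(suppliers) + 1)
    w_id = wbs_elements.setdefault((row["wbs"], row["wbs_title"]), len(wbs_elements) + 1)
    facts.append({"supplier_id": s_id, "wbs_id": w_id, "period": row["period"], "acwp": row["acwp"]})

print("suppliers:", suppliers)
print("wbs elements:", wbs_elements)
print("fact rows:", facts)
```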

The reason why answering this question is so important is because our legacy data is of such a size and of such complexity that it falls into the broad category of Big Data.  The condition of the data itself provides wide variations in terms of quality and completeness.  Without understanding the context, interrelationships, and significance of the elements of the data, the empirical approach to project management is threatened, since being able to use this data for purposes of establishing trends and parametric analysis is limited.

A good paper that deals with this issue was authored by Alleman and Coonce, though it was limited to Earned Value Management (EVM).  I would argue that EVM, especially in the types of industries in which the discipline is used, is pretty well structured already.  The challenge is in the other areas that are probably of more significance in getting a fuller understanding of what is happening in the project: the areas of schedule, risk, and technical performance measures.

In looking at the Big Data that has been normalized to date–and I have participated with others in putting a significant dent in this area–it is apparent that processes in these other areas lack discipline, consistency, completeness, and veracity.  By normalizing data in sub-specialties that have experienced an erosion in enforcing standards of quality and consistency, technology becomes a driver for process improvement.

A greybeard in IT project management once said to me (and I am not long in joining that category): “Data is like water, the more it flows downstream the cleaner it becomes.”  What he meant is that the more that data is exposed in the organizational stream, the more it is questioned and becomes a part of our closed feedback loop: constantly being queried, verified, utilized in decision making, and validated against reality.  Over time more sophisticated and reliable statistical methods can be applied to the data, especially if we are talking about performance data of one sort or another, that takes periodic volatility into account in trending and provides us with a means for ensuring credibility in using the data.
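As one simple illustration of a trending method that damps periodic volatility, the sketch below applies an exponentially weighted moving average to a monthly performance series; the CPI values and smoothing factor are hypothetical, and real data would warrant more rigorous methods such as control charts or confidence intervals.

```python
# Minimal sketch: exponentially weighted moving average to smooth a volatile monthly series.
# The monthly CPI values and smoothing factor are hypothetical illustrations.
monthly_cpi = [0.98, 0.91, 1.02, 0.88, 0.95, 0.90, 0.86, 0.92]
alpha = 0.3   # weight given to the newest observation

smoothed = monthly_cpi[0]
trend = [smoothed]
for value in monthly_cpi[1:]:
    smoothed = alpha * value + (1 - alpha) * smoothed
    trend.append(round(smoothed, 3))

print("raw     :", monthly_cpi)
print("smoothed:", trend)
```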

In my last post on Four Trends in Project Management, I posited that the question wasn’t more or less data but utilization of data in a more effective manner, and identifying what is significant and therefore “better” data.  I recently heard this line repeated back to me as a means of arguing against providing data.  This conclusion was a misreading of what I was proposing.  One level of reporting data in today’s environment is no more work than reporting on any other particular level of a project hierarchy.  So cost is no longer a valid point for objecting to data submission (unless, of course, the one taking that position must admit to the deficiencies in their IT systems or the unreliability of their data).

Our projects must be measured against the framing assumptions in which they were first formed, as well as the established measures of effectiveness, measures of performance, and measures of technical achievement.  In order to view these factors one must have access to data originating from a variety of artifacts: the Integrated Master Schedule, the Schedule and Cost Risk Analysis, and the systems engineering/technical performance plan.  I would propose that project financial execution metrics are also essential in getting a complete, integrated, view of our projects.

There may be other supplemental data that is necessary as well.  For example, the NDIA Integrated Program Management Division has a proposed revision to what is known as the Integrated Baseline Review (IBR).  For the uninitiated, this is a process in which both the supplier and government customer project teams can come together, review the essential project artifacts that underlie project planning and execution, and gain a full understanding of the project baseline.  The reporting systems that identify the data that is to be reported against the baseline are identified and verified at this review.  But there are also artifacts submitted here that contain data that is relevant to the project and worthy of continuing assessment, precluding manual assessments and reviews down the line.

We don’t yet know the answer to these data issues and won’t until all of the data is normalized and analyzed.  Then the wheat from the chaff can be separated and a more precise set of data be identified for submittal, normalized and placed in an analytical framework to give us more precise information that is timely so that project stakeholders can make decisions in handling any risks that manifest themselves during the window that they can be handled (or make the determination that they cannot be handled).  As the farmer says in the Chinese proverb:  “We shall see.”

Synchronicity — What is proper schedule and cost integration?

Much has been said about the achievement of schedule and cost integration (or lack thereof) in the project management community.  Much of it consists of hand waving and magic asterisks that hide the significant reconciliation that goes on behind the scenes.  An intellectually honest approach that does not use the topic as a means of promoting a proprietary solution is the paper authored by Rasdorf and Abudayyeh back in 1991 entitled, “Cost and Schedule Control Integration: Issues and Needs.”

It is worthwhile revisiting this paper, I think, because it was authored in a world not yet fully automated, and so is immune to the software tool-specific promotion that oftentimes dominates the discussion.  In their paper they outlined several approaches to breaking down cost and work in project management in order to provide control and track performance.  One of the most promising methods that they identified at the time was the unified approach that had originated in aerospace, in which a work breakdown structure (WBS) is constructed based on discrete work packages in which budget and schedule are unified at a particular level of detail to allow for full control and traceability.

The concept of the WBS and its interrelationship to the organizational breakdown structure (OBS) has become much more sophisticated over the years, but there has been a barrier that has caused this ideal to be fully achieved.  Ironically it is the introduction of technology that is the culprit.

During the first phase of digitalization that occurred in the project management industry not too long after Rasdorf and Abudayyeh published their paper, there was a boom in dot-coms.  For business and organizations the practice was to find a specialty or niche and fill it with an automated solution to take over the laborious tasks of calculation previously achieved by human intervention.  (I still have both my slide rule and first scientific calculator hidden away somewhere, though I have thankfully wiped square root tables from my memory).

For those of us who worked in project and acquisition management, our lives were built around the 20th century concept of division of labor.  In PM this meant we had cost analysts, schedule analysts, risk analysts, financial analysts and specialists, systems analysts, engineers broken down by subspecialties (electrical, mechanical, systems, aviation) and sub-subspecialties (Naval engineers, aviation, electronics and avionics, specific airframes, software, etc.).  As a result, the first phase of digitization followed the pathway of the existing specialties, finding niches in which to inhabit, which provided a good steady and secure living to software companies and developers.

For project controls, much of this infrastructure remains in place.  There are entire organizations today that will construct a schedule for a project using one set of specialists and the performance management baseline (PMB) using another, and then reconcile the two, not just in the initial phase of the project, but across the entire life of the project.  From the standpoint of the integrated structure that brings together cost and schedule, this makes no practical sense.  From a business efficiency perspective it is an unnecessary cost.

As much as it is cited by many authors and speakers, the Coopers & Lybrand with TASC, Inc. paper entitled “The DoD Regulatory Cost Premium” is impossible to find on-line.  Despite its widespread citation, the study demonstrated that by the time one got down to the third “cost” driver due to regulatory requirements, the projected “savings” was a fraction of 1% of the total contract cost.  The interesting issue not faced by the study is, were the tables turned, how much would such contracts be reduced if all management controls in the company were reduced or eliminated, since they contribute as elements to overhead and G&A?  More to the point here, if the processes applied by industry were optimized, what would be the cost savings involved?

A study conducted by RAND Corporation in 2006 accurately points out that a number of studies had been conducted since 1986, all of which promised significant impacts in terms of cost savings by focusing on what were perceived as drivers of unnecessary costs.  The Department of Defense and the military services in particular took the Coopers & Lybrand study very seriously because of its methodology, but achieved minimal savings against those promised.  Of course, the various studies do not clearly articulate the cost risk associated with removing the marginal cost of oversight and regulation.  Given our renewed experience with lack of regulation in the mortgage and financial management sectors of the economy that brought about the worst economic and financial collapse since 1929, one may look at these various studies in a new light.

The RAND study outlines the difficulties in the methodologies and conclusions of the studies undertaken, especially the acquisition reforms initiated by DoD and the military services as a result of the Coopers & Lybrand study.  But how, you may ask, does this relate to cost and schedule integration?

The present means that industry uses in many places takes a sub-optimized approach to project management, particularly when it applies to cost and schedule integration, which really consists of physical cost and schedule reconciliation.  What is clearly one system is split into two separate entities, constructed separately, and then adjusted using manual intervention, which defeats the purpose of automation.  This may be common practice but it is not best practice.

Government policy, which has pushed compliance to the contractor, oftentimes rewards this sub-optimization and provides little incentive to change the status quo.  Software industry manufacturers who are embedded with old technologies are all too willing to promote the status quo–appropriating the term “integration” while, in reality, offering interfaces and workarounds after the fact.  Those personnel residing in line and staff positions defined by the mid-20th century approach of division of labor are all too happy to continue operating using outmoded methods and tools.  Paradoxically these are personnel in industry that would never advocate using outmoded airframes, jet engines, avionics, or ship types.

So it is time to stop rewarding sub-optimization.  The first step in doing this is through the normalization of data from these niche proprietary applications and “rewiring” them at the proper level of integration so that the systemic faults can be viewed by all stakeholders in the oversight and regulatory chain.  Nothing seems to be more effective in correcting a hidden defect than some sunshine and a fresh set of eyes.

If industry and government are truly serious about reforming acquisition and project management in order to achieve significant cost savings in the face of tight budgets and increasing commitments due to geopolitical instability, then systemic reforms from the bottom up are the means to achieve the goal; not the elimination of controls.  As John Kennedy once said in paraphrasing Chesterton, “Don’t take down a fence unless you know why it was put up.”  The key is not to undermine the strength and integrity of the WBS-based approach to project control and performance measurement (or to eliminate it), but to streamline it so that it achieves its ideal as closely as our inherently faulty tools and methods will allow.

 

Go With the Flow — What is a Better Indicator: Earned Value or Cash Flow?

A lot of ink has been devoted to what constitutes “best practices” in PM but oftentimes these discussions tend to get diverted into overtly commercial activities that promote a set of products or trademarked services that in actuality are well-trod project management techniques given a fancy name or acronym.  We see this often with “road shows” and “seminars” that are blatant marketing events.  This tends to undermine the desire of PM professionals to find out what really gives us good information by both getting in the way of new synergies and by tying “best practices” to proprietary solutions.  All too often “common practice” and “proprietary limitations” pass for “best practice.”

Recently I have been involved in discussions and the formulation of guides on indicators that tell us something important regarding the condition of the project throughout its life cycle.  All too often the conversation settles on earned value with the proposition that all indicators lead back to it.  But this is an error since it is but one method for determining performance, which looks solely at one dimension of the project.

There are, after all, other obvious processes and plans that measure different dimensions of project performance.  The first such example is schedule performance.  A few years ago there was an attempt to tie schedule and cost more closely as an earned value metric, which was and is called “earned schedule.”  In particular, it had many strengths against what was posited as its alternative–schedule variance as calculated by earned value.  But both are misnomers, even when earned schedule is offered as an alternative to earned value while adhering to its methods.  Neither measures schedule, that is, time-based performance against a plan consisting of activities.  The two artifacts can never be reconciled and reduced to one metric because they measure different things.  The statistical measure that would result would have no basis in reality, adding an unnecessary layer that obfuscates instead of clarifying the underlying condition.  So what do we look at, you may ask?  Well–the schedule.  The schedule itself contains many opportunities to measure its dimension in order to develop useful metrics and indicators.
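
To make the distinction concrete, here is a minimal sketch using invented figures: the classic earned value “schedule” variance is a dollar amount, while earned schedule converts the same cumulative curves into time periods.  Neither looks at the activity network itself, which is the point above.

```python
# A minimal sketch (hypothetical numbers) contrasting EVM schedule variance,
# which is denominated in dollars, with earned schedule, which is denominated
# in time periods. Neither examines the activity network itself.

# Cumulative planned value (PV) by month and cumulative earned value (EV)
# at the status date (month 4). All figures are illustrative only.
pv_cum = [100, 250, 450, 700, 1000]   # $K planned through months 1..5
ev_cum_now = 520                      # $K earned through month 4
pv_cum_now = pv_cum[3]                # $K planned through month 4 (700)

# Classic EVM "schedule" variance: a dollar figure, not a duration.
sv_dollars = ev_cum_now - pv_cum_now  # -180 ($K)

# Earned schedule: count the whole months C where cumulative PV <= EV,
# then interpolate into the next month.
c = max(i + 1 for i, pv in enumerate(pv_cum) if pv <= ev_cum_now)
es = c + (ev_cum_now - pv_cum[c - 1]) / (pv_cum[c] - pv_cum[c - 1])
sv_time = es - 4  # earned schedule minus actual time (months)

print(f"SV($K): {sv_dollars}, ES: {es:.2f} months, SV(t): {sv_time:.2f} months")
```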

For example, a number of these indicators have been in place for quite some time: Baseline Execution Index (BEI), Critical Path Length Index (CPLI), early start/late start, early finish/late finish, bow-wave analysis, hit-miss indices, etc.  These all can be found in the literature, such as here and here and here.
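
To make a couple of these concrete, here is a sketch using their commonly published, DCMA-style definitions; the task dates, critical path length, and float values are purely hypothetical.

```python
# A quick sketch of two of the schedule indicators mentioned above, using
# their commonly published (DCMA-style) definitions. Task data is hypothetical.
from datetime import date

status_date = date(2014, 6, 30)

# (baseline_finish, actual_finish or None) for a handful of illustrative tasks
tasks = [
    (date(2014, 5, 15), date(2014, 5, 20)),
    (date(2014, 6, 10), date(2014, 6, 9)),
    (date(2014, 6, 25), None),             # baselined to finish, not yet done
    (date(2014, 8, 1),  None),             # not yet due
]

baselined_to_finish = sum(1 for bf, _ in tasks if bf <= status_date)
actually_finished = sum(1 for bf, af in tasks
                        if bf <= status_date and af is not None)

# Baseline Execution Index: tasks completed vs. tasks baselined to complete.
bei = actually_finished / baselined_to_finish

# Critical Path Length Index: (remaining path + total float) / remaining path.
critical_path_days = 120
total_float_days = -10   # negative float signals a forecast slip
cpli = (critical_path_days + total_float_days) / critical_path_days

print(f"BEI: {bei:.2f}, CPLI: {cpli:.2f}")
```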

Typically, then, the first step toward integration is tying these different metrics and indicators of the schedule and EVM dimensions at an appropriate level through the WBS or other structures.  The juxtaposition of these differing dimensions, particularly in a grid or Gantt view, gives us the ability to determine whether there is a correlation between the various indicators.  We can then determine–over time–the strength and consistency of those correlations, and take the analysis one step further to identify which of them point to causation.  Only then do we get to “best practice.”  This hard work to get to best practice is still in its infancy.
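
A minimal sketch of that correlation step, using invented monthly values for two indicators reported against the same WBS element; a strong correlation is only a candidate for causation, which still has to be established by analysis.

```python
# Measure how strongly two indicator time series for one WBS element move
# together. Values are invented for the example.
import numpy as np

# Monthly Baseline Execution Index and Cost Performance Index observations.
bei = np.array([1.00, 0.97, 0.93, 0.90, 0.88, 0.85])
cpi = np.array([1.02, 1.00, 0.96, 0.95, 0.91, 0.90])

# Pearson correlation: a screening statistic, not proof of causation.
correlation = np.corrcoef(bei, cpi)[0, 1]
print(f"Pearson correlation: {correlation:.2f}")
```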

But this is only the first step toward “integrated” performance measurement.  There are other areas of integration that are needed to give us a multidimensional view of what is happening in terms of project performance.  Risk is certainly one additional area–and a commonly heard one–but I want to take this a step further.

Among my various jobs in the past was business management within a project management organization.  This usually translated into financial management, but not traditional financial management, which focuses on the needs of the enterprise.  Instead, I am referring to project financial management, which is a horse of a different color, since it is focused at the micro-programmatic level on both schedule and resource management, given that planned activities and the resources assigned to them must be funded.

Thus, having the funding in place to execute the work is the antecedent and, I would argue, the overriding factor in project success.  Outside of construction project management, where the focus on cash flow is a truism, we see this play out in publicly funded project management through the budget hearing process.  Even when we are dealing with multiyear R&D funding, the project goes through this same process.  During each review, financial risk is assessed to ensure that work is being performed and budget (program) is being executed.  Earned value will determine the variance between the financial plan and the value of the execution, but the level of funding–or cash flow–will determine what gets done during any particular period of time.  The burn rate (expenditure) is the proof that things are getting done, even if the value may be less than what is actually expended.
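
A small sketch, with illustrative numbers only, of how these views sit side by side: the funding released and the burn rate bound what can be done in the period, while the earned value variances report efficiency and accomplishment against the baseline.

```python
# Illustrative figures only: cash flow bounds the work; EV reports on it.

funding_released = 800        # $K released for the period to date
actual_cost = 720             # $K expended (the burn rate as proof of work)
earned_value = 650            # $K of planned work actually accomplished
planned_value = 750           # $K of work scheduled to this point

remaining_authority = funding_released - actual_cost    # what is left to burn
cost_variance = earned_value - actual_cost               # EV view of efficiency
schedule_variance = earned_value - planned_value         # EV view of accomplishment

print(f"Remaining authority: {remaining_authority} $K")
print(f"CV: {cost_variance} $K, SV: {schedule_variance} $K")
```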

In the public funding of projects, especially in A&D, having the proper “color” of money (R&D, Operations & Maintenance, etc.) available at the right time is oftentimes a better predictor of project success than metrics and indicators which assume that the planned budget, schedule, and resources will be provided to support the baseline.  But things change, including the appropriation and release of funds.  As a result, any “best practice” that confines itself to only one or two of the dimensions of project assessment fails to meet the definition.

In the words of Gus Grissom in The Right Stuff, “No bucks, no Buck Rogers.”

 

I Can’t Get No (Satisfaction) — When Software Tools Go Bad

Another article I came across a couple of weeks ago, which my schedule prevented me from highlighting sooner, was by Michelle Symonds at PM Hut, entitled “5 Tell-Tale Signs That You Need a Better Project Management Tool.”  According to Ms. Symonds, among these signs are:

a.  Additional tools are needed to achieve the intended functionality apart from the core application;

b.  Technical support is poor or nonexistent;

c.  Personnel in the organization still rely on spreadsheets to extend the functionality of the application;

d.  Training on the tool takes more time than training for the job;

e.  The software tool adds work instead of augmenting or facilitating the achievement of work.

I have seen situations where all of these conditions are at work but the response, in too many cases, has been “well we put so much money into XYZ tool with workarounds and ‘bolt-ons’ that it will be too expensive/disruptive to change.”  As we have advanced past the first phases of digitization of data, it seems that we are experiencing a period where older systems do not quite match up with current needs, but that software manufacturers are very good at making their products “sticky,” even when their upgrades and enhancements are window dressing at best.

In addition, the project management community, particularly the part focused on large projects in excess of $20M, is facing the challenge of an increasingly older workforce.  Larger economic forces at play lately have exacerbated this condition.  Weak aggregate demand and, on the public side, austerity ideology combined with sequestration have created a situation where highly qualified people face a job market characterized by relatively high unemployment, flat wages and salaries, depleted private retirement funds, and constant attacks on social insurance related to retirement.  Thus, people are hanging around longer, which limits opportunities for newer workers to grow into the discipline.  Given these conditions, we find that it is very risky to one’s employment prospects to suddenly forge a new path.  People in the industry that I have known for many years–and who were always the first to engage with new technologies and capabilities–are now very hesitant to do so.  Some of this is well founded through experience and consists of healthy skepticism: we all have come across snake oil salesmen in our dealings at one time or another, and even the best products do not always make it, due to external forces or the fact that brilliant technical people oftentimes are just not very good at business.

But these conditions also tend to hold back the ability of the enterprise to implement efficiencies and optimization measures that otherwise would be augmented and supported by appropriate technology.  Thus, in addition to those listed by Ms. Symonds, I would include the following criteria to use in making the decision to move to a better technology:

a.  Sunk and prospective costs.  Understand and apply the concepts of sunk cost and prospective cost.  The first is the cost that has been expended in the past, while the latter focuses on the investment necessary for future growth, efficiencies, productivity, and optimization.  Having invested to improve a product in the past is not, by itself, an argument that trumps other factors for continuing to invest in that product in the future.  Obviously, if the cash flow is not there, an organization is going to be limited in the capital and other improvements it can make but, absent those considerations, sunk cost arguments are invalid.  It is important to invest in those future products that will facilitate the organization achieving its goals in the next five or ten years.
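
As a toy illustration of the point (all figures invented): only prospective costs and benefits enter the comparison, and what has already been spent on the incumbent tool plays no part in the choice.

```python
# Sunk cost is ignored; only future (prospective) costs and benefits matter.
sunk_in_current_tool = 2_000_000      # already spent; irrelevant to the decision

keep_current = {"future_cost": 600_000, "future_benefit": 700_000}
replace_tool = {"future_cost": 900_000, "future_benefit": 1_500_000}

def net_prospective(option):
    # Net value of the option looking forward only.
    return option["future_benefit"] - option["future_cost"]

best = max((keep_current, replace_tool), key=net_prospective)
print("Replace" if best is replace_tool else "Keep",
      f"(net prospective value: {net_prospective(best):,})")
```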

b.  Sustainability.  The effective life of the product must be understood, particularly as it applies to an organization’s needs.  Some of this overlaps the points made by Ms. Symonds in her article but is meant to apply in a more strategic way.  Every product, even software, has a limited productive life, but my concept here goes to what Glen Alleman pointed out in his blog as “bounded applicability.”  Will the product require more effort in any form where the additional effort provides a diminishing return?  For example, I have seen cases where software manufacturers, in order to defend market share, make trivial enhancements such as adding a chart or graph in order to placate customer demands.  The reason for this should be, but is not always, obvious.  Oftentimes more substantive changes cannot be made because the product was built on an earlier generation operating environment or structure.  Thus, in order to replicate the additional functionality found in newer products, the application requires a complete rewrite.  All of us operating in this industry have seen this: a product that has been a mainstay for many years begins to lose market share.  The decision, when it is finally made, is to totally reengineer the solution, but not as an upgrade to the original product, arguing that it is a “new” product.  This is true in terms of the effort necessary to keep the solution viable, but it also completely undermines justifications based on sunk costs.

c.  Flexibility.  As stated previously in this blog, the first generation of digitization mimicked those functions that were previously performed manually.  The applications were also segmented and specialized based on traditional line and staff organizations and specialties.  Thus, for project management, we have scheduling applications for the scheduling discipline (such as it is), earned value engines for the EV discipline, risk and technical performance applications for risk specialists and systems engineers, analytical software for project and program analysts, and financial management applications that subsumed the work of project and program financial management professionals.  This led to the deployment of so-called best-of-breed configurations, where a smorgasbord of applications or modules were acquired to meet the requirements of the organization.  Most often these applications had, and have, no direct compatibility, requiring entire staffs to reconcile data after the fact once that data was imported into a proprietary format in which it could be handled.  Even within so-called ERP environments under one company, direct compatibility at the appropriate level of the data being handled escaped the software manufacturers, requiring “bolt-ons” and other workarounds and third-party solutions.  This condition undermines sustainability, adds a level of complexity that is hard to overcome, and adds a layer of cost to the life-cycle of the solutions being deployed.

The second wave, which attempted to address some of these limitations, focused on data flexibility using cubes, hard-coding of relational data and mapping, and data mining solutions: so-called Project Portfolio Management (PPM) and Business Intelligence (BI).  The problem is that, in the first instance, PPM simply added another layer to address management concerns, while early BI systems froze single points of failure into hard-coded, deployed solutions.

A flexible system is one that leverages the new advances in software operating environments to solve more than one problem.  This, of course, undermines the financial returns in software, where the pattern has been to build one solution to address one problem based on a specialty.  Such a system provides internal flexibility, that is, allows for the application of objects and conditional formatting without hardcoding, pushing what previously had to be accomplished by coders to the customer’s administrator or user level; and external flexibility, where the same application can address, say, EVM, schedule, risk, financial management, KPIs, technical performance, stakeholder reporting, all in the same or in multiple deployed environments without the need for hardcoding.  In this case the operating environment and any augmented code provides a flexible environment to the customer that allows one solution to displace multiple “best-of-breed” applications.

This flexibility should apply not only vertically but also horizontally, where data can be hierarchically organized to allow not only for drill-down, but also for roll-up.  Data in this environment is exposed discretely, providing to any particular user that data, aggregated as appropriate, based on their role, responsibility, or need to know.
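
A minimal sketch of the roll-up half of that idea, with a hypothetical WBS numbering scheme and invented costs: data kept at the lowest level is aggregated to any parent element on demand rather than locked into a fixed report.

```python
# Roll up lowest-level WBS costs to every ancestor element on demand.
from collections import defaultdict

# Cost by lowest-level WBS element (hypothetical numbering and values).
costs = {
    "1.1.1": 120, "1.1.2": 80,
    "1.2.1": 200,
    "2.1.1": 150, "2.1.2": 50,
}

rollup = defaultdict(float)
for wbs, cost in costs.items():
    parts = wbs.split(".")
    # Credit every ancestor level, e.g. 1.1.1 rolls into 1.1 and 1.
    for level in range(1, len(parts) + 1):
        rollup[".".join(parts[:level])] += cost

for element in sorted(rollup):
    print(element, rollup[element])
```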

d.  Interoperability and open compatibility.  A condition of the “best-of-breed” deployment environment is that it allows sub-optimization to trump organizational goals.  The most recent example that I have seen of this is one where the Integrated Master Schedule (IMS) and Performance Management Baseline (PMB) were obviously authored by different teams in different locations and, most likely, the teams were at war with one another when they published these essential, interdependent project management artifacts.

But in terms of sustainability, the absence of interoperability and open compatibility has created untenable situations.  In the example of PMB and IMS information above, in many cases a team of personnel must be engaged every month to reconcile the obvious disconnect between schedule activities and control accounts in order to ensure traceability in project management and performance.  Not only should there be no economic rewards for such behavior; I believe that no business would perform in that manner absent those rewards.

Thus, interoperability in this case means being able to deal with data in its native format, without proprietary barriers that prevent its full use and exploitation to meet the needs and demands of the customer organization.  Software that places its customers in a corner and ties their hands in using their own business information has, indeed, worn out its welcome.

The reaction of customer organizations to the software industry’s attempts to bind them to proprietary solutions has been most marked in the public sector, and most prominently in the U.S. Department of Defense.  In the late 1990s the first wave was to ensure that performance management data centered around earned value was submitted in a non-proprietary format known as the ANSI X12 839 transaction set.  Since that time DoD has specified the use of the UN/CEFACT XML D09B standard for cost and schedule information, and it appears that other, previously stove-piped data will be included in that standard in the future.  This solution requires data transfer, but it is one that ensures that the underlying data can be normalized regardless of the underlying source application.  It is especially useful for stakeholder reporting situations or data sharing in prime and sub-contractor relationships.

It is also useful for pushing for improvement in the disciplines themselves, driving professionalism.  For example, in today’s project management environment, while the underlying architecture of earned value management and risk data is fairly standard, reflecting a cohesiveness of practice among its practitioners, schedule data tends to be disorganized, with much variability in how common elements are kept and reported.  This also reflects much of the state of the scheduling discipline, where an almost “anything goes” mentality seems to be in play, reflecting not so much the realities of scheduling practice–which are pretty well established and uniform–as the lack of knowledge and professionalism on the part of schedulers, who are tied to the limitations and vagaries of their scheduling application of choice.

But, more directly, interoperability also includes the ability to access data (as opposed to application interfacing, data mining, hard-coded Cubes, and data transfer) regardless of the underlying database, application, and structured data source.  Early attempts to achieve interoperability and open compatibility utilized ODBC but newer operating environments now leverage improved OLE DB and other enhanced methods.  This ability, properly designed, also allows for the deployment of transactional environments, in which two-way communication is possible.
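
For illustration only, a hedged sketch of what direct access over ODBC can look like in practice; the DSN, table, and column names below are placeholders rather than any particular vendor's schema.

```python
# Read schedule data directly from the source database over ODBC rather than
# exporting and re-importing files. Connection details are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=ProjectControls;UID=reader;PWD=example")
cursor = conn.cursor()

# Pull activities in their native form for downstream normalization.
cursor.execute("SELECT activity_id, early_start, early_finish FROM activities")
for activity_id, early_start, early_finish in cursor.fetchall():
    print(activity_id, early_start, early_finish)

conn.close()
```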

A new reality.  Given these new capabilities, I think that we are entering a new phase in software design and deployment, in which the role of the coder in controlling the UI is reduced.  In addition, given that the large software companies have continued to support a system that ties customers to proprietary solutions, I do not believe that the future of software lies in open source, as so many prognosticators stated just a few short years ago.  Instead, I propose that the applications that will prevail are those that behave like open source but still reward those who innovate and provide maximum value, sustainability, flexibility, and interoperability to the customer.

Note:  This post was edited for clarity and grammatical errors from the original.

 

Don’t Be Cruel — Contract Withholds and the Failure of Digital Systems

Recent news over at Breaking Defense headlined a $25.7M withhold to Pratt & Whitney for the F135 engine.  This is the engine for the F-35 Lightning II aircraft, also known as the Joint Strike Fighter (JSF).  The reason for the withhold in this particular case was an insufficient cost and schedule business system that the company had in place to support project management.

The enforcement of withholds for deficiencies in business systems was instituted in August 2011.  These business systems include six areas:

  • Accounting
  • Estimating
  • Purchasing
  • EVMS (Earned Value Management System)
  • MMAS (Material Management and Accounting System), and
  • Government Property

As of November 30, 2013, $19 million had been held back from BAE Systems Plc, $5.2 million from Boeing Co., and $1.4 million from Northrop Grumman Corporation.  These were on the heels of a massive $221 million held back from Lockheed Martin’s aeronautics unit for its deficient earned value management system.  In total, fourteen companies were impacted by withholds last year.

For those unfamiliar with the issue, these withholds may seem to be a reasonable enforcement mechanism to ensure that sufficient business systems are in place, so that there is traceability in the expenditure of government funds by contractors.  After all, given the disastrous state of affairs where there was massive loss of accountability by contractors in Iraq and Afghanistan, many senior personnel in DoD felt that contracting officers needed to be given teeth, and what better way to do this than through financial withholds?  The rationale is that if the systems are not adequate then the information originating from these systems is not credible.

This is probably a good approach for the acquisition of wartime goods and services, but it doesn’t seem to fit the reality of the project management environment in which government contracting operates.  The strongest objections to the rule, I think, came from the legal community, most notably from the American Bar Association’s Section of Public Contract Law.  Among these was that the amount of the withhold is based on an arbitrary percentage within the DFARS rule.  Another point made is that the defects in the systems are in most cases disconnected from actual performance, and so redirect attention and resources away from the contractual obligation at hand.

These objections were made prior to the rule’s acceptance.  But now that the rule is being enforced, the more important question is the effect of the withholds on project management.  My own anecdotal experience from having been a business manager on a program management staff is that the key to project success is oftentimes determined by cash flow.  While internal factors to the project, such as the effective construction of the integrated master schedule (IMS), the performance management baseline (PMB), risk identification and handling, and performance tracking against these plans, are the primary focus of project integrity, all too often the underlying financial constraints within which the project must operate are treated as a contingent factor.  If the financial constraints on our capabilities are severe, then the best plan in the world will not achieve the desired results because it fails to be realistic.

The principles that apply to any entrepreneurial enterprise also apply to complex projects.  It is true that large companies do have significant cash reserves and that these reserves have grown significantly since the 2007-2010 depression.  But a major program requires a large infusion of resources that constitutes a significant barrier to entry, and so such reserves contribute to the financial stability necessary to undertake such efforts.  Profit is not realized on every project.  This may sound surprising to those unfamiliar with public administration, but it is the case because it is sometimes worth breaking even or taking a slight loss so as not to lose essential organizational knowledge.  It takes years to develop an engineer who understands the interrelationships of the key factors in a jet fighter: the tradeoffs between speed, maneuverability, weight, and stress from the operational environment, like taking off from and landing on a large metal aircraft carrier that travels on salt water.  This cannot be taught in college, nor can it be replaced if the knowledge is lost due to budget cuts, pay freezes, and sequestration.  Oftentimes, because of their size and complexity, project start-up costs must be financed using short-term loans, adding risk when payments are delayed and work interrupted.  The withhold rule adds an additional, if not unnecessary, dimension of risk to project success.

Given that most of the artifacts deemed necessary to handle and reduce risk are developed in a collaborative environment by the contractor-government project team through the Integrated Baseline Review (IBR) process and system validation–as well as pre-award certifications–it seems that there is no clear line of demarcation that places the onus for inadequate business systems solely on the contractor.  The reality of the situation, given cost-plus contracts for development efforts, is that industry is, in fact, a private extension of the defense infrastructure.

It is true that a line must be drawn in the contractual relationship to provide those necessary checks and balances to guard against fraud, waste, or a race to the lowest common denominator that undermines accountability and execution of the contractual obligation.  But this does not mean that the work of oversight requires another post-hoc layer of surveillance.  If we are not getting quality results from pre-award and post-award processes, then we must identify which ones are failing us and reform them.  Interrupting cash flow for inadequately assessed business systems may simply be counter-productive.

As Deming would argue, quality must be built into the process.  What defines quality must also be consistent.  That our systems are failing in that regard is indicative, I believe, of a failure of imagination on the part of our digital systems, on which most business systems rely.  It was fine in the first wave of microcomputer digitization in the 1980s and 1990s to simply design systems that mimicked the structure of pre-digital information specialization.  Cost performance systems were built to serve the needs of cost analysts, scheduling systems were designed for schedulers, risk systems for a sub-culture of risk specialists, and so on.

To break these stovepipes, the response of the IT industry was twofold, and it constitutes the second wave of digitization of project management business processes.

One response was in many ways a step back.  The mainframe culture in IT had been on the defensive since the introduction of the PC and “distributed” processing.  Here was an opening to reclaim the high ground, and so expensive, hard-coded ERP, PPM, and BI systems were introduced.  The lack of security in systems deployed quickly during the first wave also provided the community with a convenient stalking horse, though even the “new” systems, as we have seen, lack adequate security in the digital arms race.  The ERP and BI systems are expensive and require specialized knowledge and expertise.  Solutions are hard-coded, require detailed modeling, and take a significant amount of time to deploy, supporting a new generation of coders.  The significant financial and personnel resources required to acquire and implement these systems–and the management reputations on the line for the decision to acquire them in the first place–have become a rationale for their continued use, even when they fail at the same high rate as all IT development projects.  Thus, a tradeoff analysis between sunk costs and prospective costs is rarely made in determining their sustainability.

The other response was to knit together the first-wave, specialized systems in “best-of-breed” configurations.  In this case data is imported and reconciled between specialized systems to achieve the integration needed to service the cross-functional nature of project management.  Oftentimes the estimating, IMS, PMB, and qualitative and quantitative risk artifacts are constructed by separate specialists with little or no coordination or fidelity.  These environments are characterized by workarounds, resource-heavy reconciliation teams dedicated to verifying data between systems, the expenditure of resources on fixing errors after the fact, and the development of Access- and MS Excel-heavy one-off solutions designed to address deficiencies in the underlying systems.

That the deficiencies found in the solutions described above mirror the deficiencies cited in the business system findings suggests that the underlying information systems are largely the culprit.  The solution, I think, is going to come from those portions of the digital community where the barriers to entry are low.  The current crop of software in place is reaching the end of its productive life from the first and second waves.  Hoping to protect market share and stave off the inevitable, entrenched software companies are deploying new delivery and business models, though they have little incentive to drive the industry to the next phase.  Instead, they have been marketing SaaS and cloud computing as the panacea, though the nature of the work tends to militate against acceptance of external hosting.  In the end, I believe the answer is to leverage new technologies that eliminate the specialized and hard-coded nature of the first example, but achieve integration, while leveraging the existing historical data that exists in great abundance from the second example.

Note: The title and some portions of this post were modified from the original for clarity.


What’s Your Number (More on Metrics)

Comments on my last post mainly centered on one question: are you saying that we shouldn’t do assessments or analysis?  No.  But it is important to define our terms a bit better and to realize that the things we monitor are not equal and do not measure the same kind of thing.

As I have written here, our metrics fall into categories, but each has a different role or nature, and they are generally rooted in two concepts.  These concepts are quality assurance (QA) and quality control (QC)–and they are not one and the same.  As our specialties have fallen away over time, the distinction between QA and QC has been lost.  The evidence of this confusion can be found not only in Wikipedia here and here, but also in project management discussion groups such as here and here.

QA measures the quality of the processes involved in the development and production of an end item.  It tends to be a proactive process and, therefore, looks for early warning indicators.

QC measures the quality in the products.  It is a reactive process and is focused on defect correction.

A large part of the confusion as it relates to project management is that QA and QC have their roots in iterative, production-focused activities.  So knowing which subsystems within the overall project management system we are measuring is important in understanding whether the measure serves a QA or QC purpose, that is, whether it has a QA or QC effect.

Generally, in management, we categorize our metrics into groupings based on their purpose.  There are Key Performance Indicators (KPIs), which are categorized as diagnostic indicators, lagging indicators, and leading indicators.  There are Key Risk Indicators (KRIs), which measure future adverse impacts.  KRIs are qualitative and quantitative measures that must be handled or mitigated in our plans.

KPIs and KRIs can serve QA and QC purposes, and it is important to know the difference so that we can understand what the metric is telling us.  The dichotomy between these effects is not a closed one.  QC is meant to drive improvements in our processes so that we can shift (ideally) to QA measures in ensuring that our processes will produce a high-quality product.

When it comes to the measurement of project management artifacts, our metrics regarding artifact quality, such as those applied to the Integrated Master Schedule (IMS), are actually measures rooted in QC.  The defect has occurred in a product (the IMS) and now we must go back and fix it.  This is not to say that QC is not an essential function.

It just seems to me that we are sophisticated enough now to establish systems in the construction of the IMS and other artifacts, that is, to be proactive (avoiding errors), in lieu of being reactive (fixing errors).  And–yes–at the next meetings and conferences I will present some ideas on how to do that.
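
To give a flavor of what “proactive” could mean in practice, here is a minimal, purely illustrative sketch; the activity record and the rules are invented, but the idea is to reject construction errors at the moment an activity is added rather than to scan the finished IMS for defects afterward.

```python
# Validate an activity before it enters the schedule (QA), instead of running
# a defect scan on the finished IMS (QC). Data structure and rules are invented.

def validate_activity(activity, existing_ids):
    """Reject obvious construction errors before they enter the IMS."""
    errors = []
    if activity["duration"] <= 0:
        errors.append("non-positive duration")
    if not activity["predecessors"] and activity["id"] != "START":
        errors.append("missing predecessor logic")
    if any(p not in existing_ids for p in activity["predecessors"]):
        errors.append("dangling predecessor reference")
    return errors

existing = {"START", "A100"}
candidate = {"id": "A110", "duration": 5, "predecessors": ["A100", "A999"]}

problems = validate_activity(candidate, existing)
print(problems or "activity accepted")
```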