Shake it Out – Embracing the Future of Program Management – Part Two: Private Industry Program and Project Management in Aerospace, Space, and Defense

In my previous post, I focused on Program and Project Management in the Public Interest, and the characteristics of its environment, especially from the perspective of the government program and acquisition disciplines. The purpose of this exploration is to lay the groundwork for understanding the future of program management—and the technological and organizational changes required to support it.

The next part of this exploration is to define the motivations, characteristics, and disciplines of private industry equivalencies. Here there are commonalities, but also significant differences, arising from the interplay among public investment, policy and acquisition, and private business interests.

Consistent with our initial focus on public interest project and program management (PPM), the vertical with the greatest relationship to it is found in the very specialized fields of aerospace, space, and defense. I will therefore begin with this industry vertical.

Private Industry Program and Project Management

Aerospace, Space & Defense (ASD). It is here that we find commercial practice that comes closest to the types of structure, rules, and disciplines found in public interest PPM. As a result, it is also here where we find the most interesting areas of conflict and conciliation between private motivations and public needs and duties, particularly since most of the business activity in this vertical is generated by and dependent on federal government acquisition strategy and policy.

On the defense side, the antecedent policy documents guiding acquisition and other measures are the National Security Strategy (NSS), which is produced by the President’s staff; the National Defense Strategy (NDS), which further translates and refines the NSS; and the National Military Strategy (NMS), which is produced by the Joint Chiefs of Staff of the various military services and is designed to provide unfettered military advice to the Secretary of Defense.

Note that the U.S. Department of Defense (DoD) and the related agencies, including the intelligence agencies, operate under a strict chain of command that ensures civilian control of the military establishment. Aside from these structures, the documents and the legislation resulting from DoD actions also affect such civilian agencies as the Department of Energy (DOE), the Department of Homeland Security (DHS), the National Aeronautics and Space Administration (NASA), and the Federal Aviation Administration (FAA), among others.

The countervailing power and checks-and-balances on this Executive Branch power lie with the appropriation and oversight powers of the Congress. Until the various policies are funded and authorized by Congress, the general tenor of military, intelligence, and other operations has tangential, though not insignificant, effects on the private economy. Still, in terms of affecting how programs and projects are monitored, it is within the appropriation and authorization bills that we find the locus of power. As one of my program managers reminded me during my first round through the budget hearing process, “everyone talks, but money walks.”

On the aerospace side, there are two main markets. One is related to commercial aircraft, parts, and engines sold to the various world airlines. The other is related to the government’s role in non-defense research and development, as well as activities related to public-private partnerships, such as those related to space exploration. The individual civilian departments of government also publish their own strategic plans based on their roles, from which acquisition strategy follows. These long-term strategic plans, usually revised at least every five years, are then further refined into strategic implementation plans by various labs and directorates.

The suppliers and developers of products and services for government—work that represents the bulk of ASD—face many of the same challenges delineated in the survey of their government counterparts. The difference, of course, is that these are private entities where the obligations and resulting mores are derived from business practice and contractual obligations and specifications.

This is not to imply a lack of commitment or dedication on the part of private entities. But it is an important distinction, particularly since financial incentives and self-interest are paramount considerations. A contract negotiator, for example, in order to be effective, must understand the underlying pressures and relative position of each of the competitors in the market being addressed. This individual should also be familiar with the particular core technical competencies of the competitors as well as their own strategic plans, the financial positions and goals that they share with their shareholders in the case of publicly traded corporations, and whether actual competition exists.

The Structure of the Market. Given the mergers and acquisitions of the last 30 years—along with the consolidation promoted by the Department of Defense as unofficial policy after the fall of the Berlin Wall, and the lapse of antitrust enforcement—the portion of ASD that relies on direct government funding, including those firms that participate in public-private ventures involving risk sharing, operates in a monopsony: the condition in which a single buyer—the U.S. government—substantially controls the market as the main purchaser of supplies and services. This monopsony is served by a supplier market that is largely an oligopoly—few suppliers and limited competition—and in which, in some technical domains, some suppliers exert monopoly power.

Acknowledging this condition informs our understanding of the operational motivators of this market segment in relation to its culture, practices, and the disciplines and professions employed.

In the first case, given the position of the U.S. government, the normal pressures of market competition and market incentives do not apply to the few competitors participating in the market. As a result, only the main buyer has the power to recreate, in an artificial manner, an environment that replicates the market incentives and penalties normally found in a diverse and highly competitive market.

Along these lines, for market incentives, the government can, and often does, act as the angel investor, given the rigorous need for R&D in such efforts. It can also lower the barriers to participation in order to encourage more competition and innovation. These measures can be deployed across the existing range of limited competitors, or applied more expansively to invite new participants.

Market penalties that are recreated in this environment usually target what economists call “rent-seeking behavior”—a situation in which incumbents seek to increase their own wealth without creating new benefits, innovation, or additional wealth for society. Lobbying, glad-handing, cronyism, and other such methods are employed and, oftentimes, rampant under monopsonistic systems. Revolving-door practices, in which a former government official responsible for oversight obtains employment in the same industry and, oftentimes, with the same company, are too often seen in these cases.

Where there are few competitors, market participants will often play follow-the-leader and align themselves to dominate particular segments of the market in appealing to the government or elected representatives for business. This may mean that, in many cases, they team with their ostensible competitors to provide a diverse set of expertise from the various areas of specialty. As with any business, profitability is of paramount importance, for without profit there can be no business operations. It is here—in the maximization of profit and shareholder value—that we find the locus of power in understanding the motivation of these and most businesses.

This is not a value judgment. As faulty and risky as this system may be, no better business structure has been found to provide value to the public through incentives for productive work, innovation, the satisfaction of demand, and efficiency. The challenge, apart from what political leadership decides to do regarding the rules of the market, is to make those rules that do exist work in the public interest through fair, ethical, and open contracting practices.

To do this successfully requires contracting and negotiating expertise. To many executives and non-contracting personnel, negotiations appear to be a zero-sum game. No doubt, popular culture, mass media and movies, and self-promoting business people help mold this perception. Those from the legal profession, in particular, deal with a negotiation as an extension of the adversarial processes through which they usually operate. This is understandable given their education, and usually disastrous.

As an attorney friend of mine once observed: “My job, if I have done it right, is to ensure that everyone walking out of the room is in some way unhappy. Your job, in contrast, is to ensure that everyone walking out of it is happy.” While a generalization—and told tongue-in-cheek—it highlights the core difference in approach between these competing perspectives.

A good negotiator has learned that, given two motivated sides coming together to form a contract, there is an area of intersection where both parties will view the deal being struck as meeting their goals, and as such, fair and reasonable. It is the job of the negotiator to find that area of mutual fairness, while also ensuring that the contract is clear and free of ambiguity, and that the structure of the instrument—price and/or cost, delivery, technical specification, statement of work or performance specification, key performance parameters, measures of performance, measures of effectiveness, management, sufficiency of capability (responsibility), and expertise—sets up the parties involved for success. A bad contract can no more be made good than the poorly prepared and compacted soil and foundation of a house can be made good after the building goes up.
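To make the idea of an area of intersection concrete, here is a minimal, hypothetical sketch in Python. The dollar figures and the zone_of_agreement helper are illustrative assumptions, not a model of any actual negotiation.

```python
# Illustrative only: a toy "zone of possible agreement" check between a buyer's
# maximum acceptable price and a seller's minimum acceptable price.

def zone_of_agreement(buyer_max: float, seller_min: float):
    """Return the (low, high) price range acceptable to both parties, or None."""
    if seller_min > buyer_max:
        return None  # no overlap: no deal both sides would view as fair and reasonable
    return (seller_min, buyer_max)

# Hypothetical positions (in $M) for a development contract
buyer_max = 120.0   # buyer's independent cost estimate plus acceptable margin
seller_min = 105.0  # seller's cost basis plus minimum acceptable profit

overlap = zone_of_agreement(buyer_max, seller_min)
if overlap:
    print(f"Area of mutual fairness: ${overlap[0]}M to ${overlap[1]}M")
else:
    print("No overlap: the deal cannot be struck on price alone")
```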

The purpose of a good contract is to avoid litigation, not to increase the likelihood of it happening. Furthermore, it serves the interests of neither side to obtain a product or service at a price, or under such onerous conditions, that the supplying enterprise fails to survive. Conversely, it does a supplier little good to obtain a contract that leaves the customer with little financial flexibility, to fail to fully deliver on its commitments, to damage its own reputation, or to be perceived in a negative light by the public.

Effective negotiators on both sides of the table are aware of these risks and hazards, and so each is responsible for the final result, though often the power dynamic between the parties may be asymmetrical, depending on the specific situation. It is one of the few cases in which parties having both mutual and competing interests are brought together where each side is responsible for ensuring that the other does not hazard their organization. It is in this way that a contract—specifically one that consists of a long-term R&D cost-plus contract—is much like a partnership. Both parties must act in good faith to ensure the success of the project—all other considerations aside—once the contract is signed.

In this way, the manner of negotiating and executing contracts is very much a microcosm of civil society as a whole, for good or for bad, depending on the practices employed.

Given that the structure of aerospace, space, and defense consists of one dominant buyer with few major suppliers, the disciplines required relate to the details of the contract and its resulting requirements that establish the rules of governance.

As I outlined in my previous post, the characteristics of program and project management in the public interest, which are the products of contract management, are focused on successfully developing and obtaining a product to meet particular goals of the public under law, practice, and other delineated specific characteristics.

As a result, the skill-sets of paramount importance to business in this market prior to contract award are cost estimating, applied engineering expertise including systems engineering, financial management, contract negotiation, and law. The remaining project and program management disciplines follow from what has been established in the contract and the amount of leeway the contracting instrument provides in terms of risk management, cost recovery, and profit maximization; the main difference is that this approach to the project leans more toward contract management.

Another consideration in which domains are brought to bear relates to the position of the business in terms of market share and level of dominance in a particular segment of the market. For example, a company may decide to accept a lower than desired target profit. In the most extreme cases, the company may allow the contract to become a loss leader in order to continue to dominate a core competency or to prevent new entries into that portion of the market.

On the other side of the table, government negotiators are prohibited by the Federal Acquisition Regulation (the FAR) from allowing companies to “buy in” by proposing an obviously lowball offer, but some do so in any event, whether due to lack of expertise or bowing to the exigencies of price or cost. This last condition, combined with the rent-seeking behavior mentioned earlier, will, where it occurs, distort and undermine the practices and indicators needed for effective project and program management. In these cases, the dysfunctional result is to create incentives to maximize revenue and scope through change orders, ambiguous contracting language, and price inelasticity. This also creates an environment that is resistant to innovation and rewards inefficiency.

But apart from these exceptions, the contract and its provisions, requirements, and type are what determine the structure of the eventual project or program management team. Unlike in commercial markets where there are many competitors, the government, through negotiation, will determine the structure of burdened rates and the allowable profit or margin. This last figure is determined by the contract type and the perceived risk of the contract goals to the contractor: the higher the risk, the higher the allowed margin or profit, and the reverse applies as well.
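As a hedged illustration of the risk-to-margin relationship described above—not the DoD weighted guidelines method or any regulatory formula—the following Python sketch simply scales an allowable fee rate with a perceived risk score. The base rate, premium, and risk scores are invented for the example.

```python
# Illustrative sketch: allowed margin rising with perceived contract risk.
# The base rate, risk premium, and scoring are hypothetical, not regulatory values.

def allowed_fee_rate(risk_score: float, base_rate: float = 0.06,
                     max_premium: float = 0.09) -> float:
    """Map a 0.0-1.0 perceived-risk score to a fee rate between base and base+premium."""
    risk_score = min(max(risk_score, 0.0), 1.0)
    return base_rate + max_premium * risk_score

# A low-risk production follow-on vs. a high-risk cost-plus R&D effort
for label, risk in [("low-risk production follow-on", 0.2),
                    ("high-risk developmental effort", 0.9)]:
    print(f"{label}: allowed fee ~ {allowed_fee_rate(risk):.1%}")
```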

Given this basis, the interplay between private entities and the public acquisition organizations, including the policy-setting staffs, is also of primary concern. Decision-makers, influencers, and subject-matter experts from these entities participate together in what are ostensibly professional organizations—the National Defense Industrial Association (NDIA), the Project Management Institute (PMI), the College of Scheduling (CoS), the College of Performance Management (CPM), the International Council on Systems Engineering (INCOSE), the National Contract Management Association (NCMA), and the International Cost Estimating and Analysis Association (ICEAA) being among those most frequently attended by these groups. Corresponding and associated private and professional groups are the Project Control Academy and the Association for Computing Machinery (ACM).

This list is by no means exhaustive, but from the perspective of suppliers to public agencies, NDIA, PMI, CoS, and CPM are of particular interest because much of the business of influencing policy and the details of its application is accomplished here. In this manner, the interests of the participants from the corporate side of the equation relate to those areas always of concern: business certainty, minimization of oversight, and market and government influence. The market for several years now has been reactive, not proactive.

There is no doubt that business organizations, from local Chambers of Commerce to specialized trade groups, bring with them the advantages of finding mutual interests and synergy. All also come with the ills and dysfunction, to varying degrees, borne of self-promotion, glad-handing, back-scratching, and ossification.

In groups where there is little appetite to upend the status quo, innovation and change are viewed with suspicion and as risky. In such cases the standard reaction is cognitive dissonance, at least until measures can be taken to subsume or control the pace and nature of the change. This is particularly true in the area of project and program management in general, and integrated project, program and portfolio management (IPPM) in particular.

Absent the appetite on the part of DoD to replicate market forces that drive the acceptance of innovative IPPM approaches, one large event and various evolutionary aviation and space technology trends have upended the ecosystem of rent-seeking, reaction, and incumbents bent on maintaining the status quo.

The one large event, of course, came about from the changes wrought by the Covid pandemic. The other, evolutionary changes are a result of the acceleration of software technology in capturing and transforming big(ger) datasets, combined with open business intelligence systems that can be flexibly delivered locally and via the Cloud.

I also predict that these changes will make hard-coded, purpose-driven niche applications obsolete within the next five years, along with the companies that have built their businesses around delivering custom niche applications and MS Excel spreadsheets, and those that are comfortable suboptimizing—reacting to deliver the letter, if not the spirit, of the good business practice expected under their contracts.

Walking hand-in-hand with these technological and business developments, the aerospace, space, and defense market in general is facing an opening window for new entries and greater competition, borne of emergent engineering and technological exigencies that demand innovation and new approaches to old, persistent problems.

The coronavirus pandemic and new challenges from the realities of global competition, global warming, geopolitical rivalries; aviation, space and atmospheric science; and the revolution in data capture, transformation, and optimization are upending a period of quiescence and retrenchment in the market. These factors are moving the urgency of innovation and change to the left both rapidly and in a disruptive manner that will only accelerate after the immediate pandemic crisis passes.

In my studies of Toynbee and other historians (outside of my day job, I am also credentialed in political science and history, among other disciplines, through both undergraduate and graduate education), I have observed that societies and cultures that do not embrace the future and confront their challenges effectively, and that do not do so in a constructive manner, find themselves overrun by it and them. History is the chronicle of human frailty, tragedy, and failure interspersed by amazing periods of resilience, human flourishing, advancement, and hope.

As it relates to our more prosaic concerns, Deloitte has published an insightful paper on the 2021 industry outlook. Among the identified short-term developments are:

  1. A slow recovery in passenger travel may impact aircraft deliveries and industry revenues in commercial aviation,
  2. The defense sector will remain stable as countries plan to sustain their military capabilities,
  3. Satellite broadband, space exploration and militarization will drive growth,
  4. Industry will shift to transforming supply chains into more resilient and dynamic networks,
  5. Mergers and acquisitions are likely to recover in 2021 as a hedge toward ensuring long-term growth and market share.

More importantly, the longer-term changes to the industry are being driven by the following technological and market changes:

  • Advanced aerial mobility (AAM). Both FAA and NASA are making investments in this area, and so the opening exists for new entries into the market, including new entries in the supply chain, that will disrupt the giants (absent a permissive M&A stance under the new Administration in Washington). AAM is the new paradigm to introduce safe, short-distance, daily-commute flying technologies using vertical lift.
  • Hypersonics. Given the touted investment of Russia and China into this technology as a means of leveraging against the power projection of U.S. forces, particularly its Navy and carrier battle groups (aside from the apparent fact that Vladimir Putin, the president of Upper Volta with Missiles and Hackers, really hates Disney World), the DoD is projected to fast-track hypersonic capabilities and countermeasures.
  • Electric propulsion. NASA is investing in cost-sharing capabilities to leverage electric propulsion technologies, looking to benefit from the start-up growth in this sector. This is an exciting development which has the potential to transform the entire industry over the next decade and after.
  • Hydrogen-powered aircraft. OEMs are continuing to pour private investment money into start-ups looking to introduce more fuel-efficient and clean energy alternatives. As with electric propulsion, there are prototypes of these aircraft being produced and as public investments into cost-sharing and market-investment strategies take hold, the U.S., Europe, and Asia are looking at a more diverse and innovative aerospace, space, and defense market.

Given the present condition of the industry, and the emerging technological developments and resulting transformation of flight, propulsion, and fuel sources, the concepts and definitions used in project and program management require revision to meet the exigencies of the new market.

For both industry and government, in order to address these new developments, I believe that a new language is necessary, as well as a complete revision to what is considered to be the acceptable baseline of best business practice and the art of the possible. Only then will organizations and companies be positioned to address the challenges these new forms of investment and partnering systems will raise.

The New Language of Integrated Program, Project, and Portfolio Management (IPPM).

First a digression to the past: while I was on active duty in the Navy, near the end of my career, I was assigned to the staff of the Office of the Undersecretary of Defense for Acquisition and Technology (OUSD(A&T)). Ostensibly, my assignment was to give me a place to transition from the Service. Thus, I followed the senior executive, who was PEO(A) at NAVAIR, to the Pentagon, simultaneously with the transition of NAVAIR to Patuxent River, Maryland. In reality, I had been tasked by the senior executive, Mr. Dan Czelusniak, to explore and achieve three goals:

  1. To develop a common schema by supporting an existing contract for the collection of data from DoD suppliers under cost-plus R&D contracts, with the goal of creating a master historical database of contract performance and technological development risk. This schema would first be directed at cost performance, or EVM;
  2. To continue to develop a language, methodology, and standard, first started and funded by NAVAIR, for the integration of systems engineering and technical performance management into the program management business rhythm;
  3. To create a definition of Integrated Program Management.

I largely achieved the first two during my relatively brief period there.

The first became known as the Integrated Digital Environment (IDE), which was refined and fully implemented after my departure from the Service. Much of this work is the basis for data capture, transformation, and load (ETL) today. There had already been a good deal of work by private individuals, organizations, and other governments in establishing common schemas, which were first applied to the transportation and shipping industries. But the team of individuals I worked with was able to set the bar for what followed across datasets.

The second was completed and turned over to the Services and federal agencies, many of which adopted the initial approach and refined it to inform cost performance and technical achievement through the identification of technical risk. Much of this knowledge already existed in the systems engineering community, but, working with INCOSE, a group of like-minded individuals was able to take the work from the proof-of-concept—which was awarded the Acker Skill in Communication Award at the DAU Acquisition Research Symposium—and turn it into the TPM and KPP standard used by organizations today.

The third began with establishing my position, which hadn’t existed until my arrival: Lead Action Officer, Integrated Program Management. Gary Christle, who was the senior executive in charge of the staff, asked me “What is Integrated Program Management?” I responded: “I don’t know, sir, but I intend to find out.” Unfortunately, this is the initiative that has still eluded both industry and government, but not without some advancement.

Note that this position with its charter to define IPM was created over 24 years ago—about the same time it takes, apparently, to produce an operational fighter jet. I note this with no flippancy, for I believe that the connection is more than just coincidental.

When spoken of, IPM and IPPM are oftentimes restricted to the concept of cost (read cost performance or EVM) and schedule integration, with aggregated portfolio organization across a selected number of projects thrown in, in the latter case. That was considered advancement in 1997. But today, we seem to be stuck in time. In light of present technology and capabilities, this is a self-limiting concept.

This concept is technologically supported by a neutral schema that is authored and managed by DoD. While essential to data capture and transformation—and because of this fact—it is currently targeted by incumbents as a means of further limiting even this self-limited definition in practice. It is ironic that a technological advance that supports data-driven, in lieu of report-driven, information integration is being influenced to support the old paradigm.

The motivations are varied: industry suppliers who aim to restrict access to performance data under project and program management, incumbent technology providers who wish to keep the changes in data capture and transformation restricted to their limited capabilities, consulting companies aligned with technology incumbents, and staff augmentation firms dependent on keeping their customers reliant on custom application development and Excel workbooks. All of these forces operate through the various professional organizations that work to influence government policy, hoping to establish themselves as the arbiters of the possible and the acceptable.

Note that the requirements under project management are often critiqued under the rubric of government regulation. But that is a misnomer: they are an extension of government contract management. Another critique is made from the perspective of overhead costs. But management costs money, and one would not (or at least should not) drive a car or own a house without insurance and a budget for maintenance, much less run a multi-year, high-cost project involving the public’s money. In addition, as I have written previously, and as is supported by the literature, data-driven systems actually reduce costs and overhead.

All of these factors contribute to ossification and impose artificial blinders that, absent reform, will undermine meeting the new paradigms of 21st Century project management, given that the limited concept of IPM was obviously insufficient to address the challenges of the transitional decade that closed the last century.

Embracing the Future in Aerospace, Space, and Defense

As indicated, the aerospace and space science and technology verticals are entering a new and exciting phase of technological innovation resulting from investments in start-ups and R&D, including public-private cost-sharing arrangements.

  1. IPM to Project Life-Cycle Management. Given the baggage that attends the acronym IPM, and the worldwide trend to data-driven decision-making, it is time to adjust the language of project and program management to align to it. In lieu of IPM, I suggest Project Life-Cycle Management to define the approach to project and program data and information management.
  2. Functionality-Driven to Data-Driven Applications. Our software, systems and procedures must be able to support that infrastructure and be similarly in alignment with that manner of thinking. This evolution includes the following attributes:
    • Data Agnosticism. As our decision-making methods expand to include a wider, deeper, and more comprehensive interdisciplinary approach, our underlying systems must be able to access data in this same manner. As such, these systems must be data agnostic.
    • Data neutrality. In order to optimize access to data, the overhead and effort needed to access data must be greatly reduced. Using data science and analysis to restructure pre-conditioned data in order to overcome proprietary lexicons—an approach used for business intelligence systems since the 1980s—provides no added value to either the data or the organization. If data access is ad hoc and customized in every implementation, the value of the effort cannot persist, nor is the return on investment fully realized; it backs the customer into a corner in terms of flexibility and innovation. Thus, pre-configured data capture, extract, transformation, and load (ETL) into a non-proprietary and objective format, applied to all data types used in project and program management systems, is essential to providing the basis for a knowledge-based environment that encourages discovery from data. This approach to ETL is enhanced by the utilization of neutral data schemas.
    • Data in Lieu of Reporting and Visualization. No doubt data must be visualized at some point—preferably after its transformation and load into the database with other, interrelated data elements that illuminate information to enhance the knowledge of the decision-maker. This implies that systems that rely on physical report formats, charts, and graphs as the goal are not in alignment with the new paradigm. Where Excel spreadsheets and PowerPoint are used as a management system, it is the preparer who provides the interpretation, in a manner that predisposes the possible alternatives of interpretation. The goal, instead, is to have the data speak for itself. It is the data, transformed into information, interrelated and contextualized to create intelligence, that is the goal.
    • All of the Data, All of the Time. The cost of 1TB of data compared to 1MB of data is the marginal cost of the additional electrons to produce it. Our systems must be able to capture all of the data essential to effective decision-making in the periodicity determined by the nature of the data. Thus, our software systems must be able to relate data at all levels and to scale from simplistic datasets to extremely large ones. They should do so in such a way that the option for determining what, among the full menu of data options available, is relevant rests with the consumer of that data.
    • Open Systems. Since the introduction of widespread CPU capability, software solution providers have manufactured software to perform particular functions based on particular disciplines and very specific capabilities. As noted earlier, these software applications are functionality-focused and proprietary in structure, method, and data. For data-driven project and program requirements, software systems must be flexible enough to accommodate a wide range of analytical and visualization demands, allowing the data to determine the rules of engagement. This implies systems that are open in two ways: data agnosticism, as already noted, but also openness in terms of the user environment.
    • Flexible Application Configuration. Our systems must be able to address the needs of the various disciplines in their details, while also allowing for integration and contextualization of interrelated data across domains. As with Open Systems to data and the user environment, openness through the ability to roll out multiple specialized applications from a common platform places the subject matter expert and program manager in the driver’s seat in terms of data analysis and visualization. An effective open platform also reduces the overhead associated with limited purpose-driven, disconnected and proprietary niche applications.
    • No-Code/Low-Code. Given that data and the consumer will determine both the source and method of delivery, our open systems should provide an environment that supports Agile development and deployment of customization and new requirements.
    • Knowledge-Based Content. Given the extensive amount of experience and education recorded and documented in the literature, our systems must, at the very least, provide a baseline of predictive analytics and visualization methods usually found in the more limited, purpose-built hardcoded applications, if not more expansive. This knowledge-based content, however, must be easily expandable and refinable, given the other attributes of openness, flexibility, and application configuration. In this manner, our 21st century project and program management systems must possess the attributes of a hybrid system: providing the functionality of the traditional niche systems with the flexibility and power of a business intelligence system enhanced by COTS data capture and transformation.
    • Ease of Use. The flexibility and power of these systems must be such that implementation and deployment are rapid, and that new user environment applications can be quickly deployed. Furthermore, the end user should be able to determine the level of complexity or simplicity of the environment to support ease of use.
  3. Focus on the Earliest Indicator. A good deal of effort since the late 1990s has been expended on defining the highest level of summary data that is sufficient to inform earned value, with schedule integration derived from the WBS, oftentimes summarized on a one-to-many basis as well. This perspective is biased toward believing that cost performance is the basis for determining project control and performance. But even when related to cost, the focus is backwards. The project lifecycle in its optimized form consists of the following progression:

    Project Goals and Contract (framing assumptions) –> Systems Engineering, CDRLs, KPPs, MoEs, MoPs, TPMs –> Project Estimate –> Project Plan –> IMS –> Risk and Uncertainty Analysis –> Financial Planning and Execution –> PMB –> EVM

    As I’ve documented in this blog over the years, DoD studies have shown that, while greater detail within the EVM data may not garner greater early warning, proper integration with the schedule at the work package level does. Program variances first appear in the IMS. A good IMS, thus, is key to collecting early warning indicators and acts as the main execution document. This is why many program managers—who have been largely absent over the last decade or so from the professional organizations listed—tend to assert that EVM is like “looking in the rearview mirror.” It isn’t that EVM is not essential, but it is true that it is not the earliest indicator of variances from expected baseline project performance.

    Thus, the emphasis going forward under this new paradigm is not to continue a central role for EVM, but to shift to the earliest indicator for each aspect of the program that defines its framing assumptions.
  4. Systems Engineering: It’s not Space Science, it’s Space Engineering, which is harder.
    The focus on start-up financing and developmental cost-sharing shifts attention to systems engineering configuration control and technical performance indicators. The emphasis on meeting expectations and program goals, and on achieving milestones within the cost share, makes it essential to be able to identify fatal variances long before conventional cost performance indicators show variances. The concern of the program manager in these cases isn’t so much the estimate at complete as whether the industry partner will be able to deploy the technology within the acceptable range of the MoEs, MoPs, TPMs, and KPPs, without exceeding the government’s portion of the cost share. Thus, the incentive is not only to identify variances and unacceptable risk at the earliest indicator, but to do so in terms of whether the end-item technology will be successfully deployed, or whether the government should cut its losses.
  5. Risk and Uncertainty is more than SRA. The late 20th century approach to risk management is to run a Monte Carlo simulation against the schedule, and to identify alternative critical paths and any unacceptable risks within the critical path. This is known as the schedule risk analysis, or SRA (a minimal sketch of such a simulation appears after this list). While valuable, the number of personnel engaged in risk management is much smaller than the staffs devoted to schedule and cost analysis.

    This is no doubt due to the specialized language and techniques devoted to risk and uncertainty. This segregation of risk from mainstream project and program analysis has severely restricted both the utility and the real-world impact of risk analysis on program management decision-making.

    But risk and uncertainty extend beyond the schedule risk analysis, and their utility in an environment of aggressive investment in new technology, innovation, and new entries to the market will place these assessments at center stage. In reality, our ability to apply risk analysis techniques extends to the project plan, to technical performance indicators, to estimating, to the integrated master schedule (IMS), and to cost, both financial and from an earned value perspective. Combined with the need to identify risk and major variances using the earliest indicator, risk analysis becomes pivotal to mainstream program analysis and decision-making.
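As referenced in the item on risk above, here is a minimal sketch of a schedule risk analysis run as a Monte Carlo simulation. The serial four-task network and the three-point duration estimates are hypothetical; a real SRA would run against the full IMS network, preserve its logic ties, and report alternative critical paths as well.

```python
# Minimal Monte Carlo schedule risk analysis over a simple serial task chain.
# Durations use three-point (optimistic, most likely, pessimistic) estimates;
# the task list and estimates are hypothetical.
import random

tasks = {
    "preliminary design": (40, 55, 90),
    "detailed design":    (60, 80, 140),
    "fabrication":        (90, 120, 200),
    "integration & test": (50, 70, 150),
}

def simulate_completion(trials: int = 10_000) -> list[float]:
    """Return simulated total durations (days) assuming the tasks run in series."""
    results = []
    for _ in range(trials):
        total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())
        results.append(total)
    return results

durations = sorted(simulate_completion())
p50 = durations[len(durations) // 2]
p80 = durations[int(len(durations) * 0.8)]
print(f"P50 completion: {p50:.0f} days, P80 completion: {p80:.0f} days")
```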

Conclusions from Part Two

The ASD industry is most closely aligned with PPM in the public interest. Two overarching trends are transforming this market and overcoming the inertia and ossification of PPM thought: the communications and information systems employed in response to the coronavirus pandemic, which opened pathways to new ways of thinking about the status quo; and the start-ups and new entries into the ASD market, borne of investments in new technologies arising from external market, geopolitical, space science, global warming, and propulsion trends, as well as the new technologies and methods being employed in data and information technology that drive greater efficiency and productivity. These changes have forced a new language and new expectations as to the art of the necessary, as well as the art of the possible, for PPM. This new language includes a transition to the concept of the optimal capture and use of all data across the program management life cycle, with greater emphasis on systems engineering, technical performance, and risk.

Having summarized the new program paradigm in Aerospace, Space, and Defense, my next post will assess the characteristics of program management in various commercial industries, the rising trends in these verticals, and what that means for the project and program management discipline.

Open: Strategic Planning, Open Data Systems, and the Section 809 Panel

Sundays are usually days reserved for music and the group Rhye was playing in the background when this topic came to mind.

I have been preparing for my presentation in collaboration with my Navy colleague John Collins for the upcoming Integrated Program Management Workshop in Baltimore. This presentation will be a non-proprietary/non-commercial talk about understanding the issue of unlocking data to support national defense systems, but the topic has broader interest.

Thus, in advance of that formal presentation in Baltimore, there are issues and principles that are useful to cover, given that data capture and its processing, delivery, and use are at the heart of all systems in government, private industry, and other organizations.

Top Data Trends in Industry and Their Relationship to Open Data Systems

According to Shohreh Gorbhani, Director of Project Control Academy, the following are the top five data trends being pursued by private industry and technology companies. My own comments follow as they relate to open data systems.

  1. Open Technologies that transition from 2D Program Management to 3D and 4D PM. This point is consistent with the College of Performance Management’s emphasis on IPM, but note that the stipulation is the use of open technologies. This is an important distinction technologically, and one that I will explore further in this post.
  2. Real-time Data Capture. This means capturing data in the moment so that the status of our systems is up-to-date without the present delays associated with manual data management and conditioning. This does not preclude the collection of structured, periodic data, but does include the capture of transactions from real-time integrated systems where appropriate.
  3. Seamless Data Flow Integration. From the perspective of companies in manufacturing and consumer products, technologies such as IoT and Cloud are just now coming into play. But, given the underlying premises of items 1 and 2, this also means the proper automated contextualization of data using an open technology approach that flows in such a way as to be traceable.
  4. The use of Big Data. The term has lost a good deal of its meaning because of its transformation into a buzz-phrase and marketing term. But Big Data refers to the expansion in the depth and breadth of available data driven by the economic forces that drive Moore’s Law. What this means is that we are entering a new frontier of data processing and analysis that will, no doubt, break down assumptions regarding the validity and strength of certain predictive analytics. The old assumptions that restrict access to data due to limitations of technology and higher cost no longer apply. We are now in the age of Knowledge Discovery in Data (KDD). The old approach of reporting assumed that we already know what we need to know. The use of data challenges old assumptions and allows us to follow the data where it will lead us.
  5. AI Forecasting and Analysis. No doubt predictive AI will be important as we move forward with machine learning and other similar technologies. But this infant is not yet a rug rat. The initial experiences with AI are that they tend to reflect the biases of the creators. The danger here is that this defeats KDD, which results in stagnation and fugue. But there are other areas where AI can be taught to automate mundane, value-neutral tasks relating to raw data interpretation.

The 809 Panel Recommendation

Given that industry is the driving force behind these trends, which will transform the way we view information in our day-to-day work, it is not surprising that the 809 Panel had this to say about existing defense business systems:

“Use existing defense business system open-data requirements to improve strategic decision making on acquisition and workforce issues…. DoD has spent billions of dollars building the necessary software and institutional infrastructure to collect enterprise wide acquisition and financial data. In many cases, however, DoD lacks the expertise to effectively use that data for strategic planning and to improve decision making. Recommendation 88 would mitigate this problem by implementing congressional open-data mandates and using existing hiring authorities to bolster DoD’s pool of data science professionals.”

Section 809 Volume 3, Section 9, p. 477

At one point in my military career, I was assigned as the Materiel, Fuels, and Transportation Officer of Naval Air Station, Norfolk. As a major naval air base, transportation hub, and home to a Naval Aviation Depot, we shipped and received materiel and supplies across the world. In doing so, our transportation personnel would use what at the time was new digital technology to complete an electronic bill of lading that specified what and when items were being shipped, the common or military carrier, the intended recipient, and the estimated date of arrival, among other essential information.

The customer and receiving end of this workflow received an open systems data file that contained these particulars. The file was an early version of open data known as an X12 file, for which the commercial transportation industry was an early adopter. Shipping and receiving activities and businesses used their own local software—there were a number of customized and commercial choices out there, as well as those used by common carriers such as various trucking and shipping firms, the USPS, FedEx, DHL, UPS, and others. The X12 file was the DMZ that made the information open. Software manufacturers, if they wanted to stay relevant in the market, could not impose a proprietary data solution.

Furthermore, standardization of terminology and concepts ensured that the information was readable and comprehensible wherever the items landed—whether across receiving offices in the United States, Japan, Europe, or even Istanbul. While DoD needs the skillsets to be able to optimize data, achieving this end-state did not require an army of data scientists. It required the right data science expertise in the right places, and the dictates of transportation consumers to move the technology market to provide the solution.

Over the years both industry and government have developed a number of schema standards focused on specific types of data, progressing from X12 to XML and now projected to use JSON-based schemas. Each of them, in its initial iterations, automated the submission of physical reports that had been required either by contract or by operations. These focused on a small subset of the full dataset relating to program management and project controls.

This progression made sense.

When digitized technology is first introduced into an intensive direct-labor environment, the initial focus is to automate the production of artifacts and their underlying processes in order to phase in the technology’s acceptance. This also allows the organization to realize immediate returns on investment and improvements in productivity. But this is the first step, not the final one.

For project controls, the current state is the UN/CEFACT XML for program performance management data, and the contract cost and labor data collection file known as the FlexFile. The latter, given that its recipient is the Office of the Secretary of Defense Cost Assessment and Program Evaluation (OSD CAPE), clearly serves as one of many feedback loops that support that office’s role in coordinating the planning, programming, budgeting, and execution (PPBE) system related to military strategic investments and budgeting—but only one. The program performance information is also a vital part of the PPBE process in evaluation and in future planning.
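To illustrate the general idea of pre-configured capture into a neutral, non-proprietary format—this is not the actual UN/CEFACT XML or FlexFile specification, and every field name below is invented for the example—the following Python sketch maps a hypothetical proprietary tool export into a simple neutral record.

```python
# Illustrative ETL sketch: map a proprietary tool's export (here, a CSV row with
# vendor-specific column names) into a neutral, tool-agnostic record.
# The column names and the neutral schema below are hypothetical.
import csv, json, io

PROPRIETARY_EXPORT = """WBS_ID,TaskName,BCWS_CUM,BCWP_CUM,ACWP_CUM
1.2.3,Avionics Integration,1500000,1350000,1425000
"""

NEUTRAL_FIELD_MAP = {          # proprietary column -> neutral schema element
    "WBS_ID": "wbs_element",
    "TaskName": "description",
    "BCWS_CUM": "planned_value_cum",
    "BCWP_CUM": "earned_value_cum",
    "ACWP_CUM": "actual_cost_cum",
}

def to_neutral(row: dict) -> dict:
    """Rename proprietary columns to the neutral schema and type the numeric fields."""
    record = {NEUTRAL_FIELD_MAP[k]: v for k, v in row.items() if k in NEUTRAL_FIELD_MAP}
    for field in ("planned_value_cum", "earned_value_cum", "actual_cost_cum"):
        record[field] = float(record[field])
    return record

neutral_records = [to_neutral(row) for row in csv.DictReader(io.StringIO(PROPRIETARY_EXPORT))]
print(json.dumps(neutral_records, indent=2))
```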

For most of the U.S. economy, market forces and consumer requirements are the driving force in digital innovation. The trends noted by Ms. Gorbhani can be confirmed through a Google search of any one of the many technology magazines and websites. The 809 Panel, drawn as it was from specialists in industry and government, was tasked “to provide recommendations that would allow DoD to adapt and deliver capability at market speeds, while ensuring that DoD remains true to its commitment to promote competition, provide transparency in its actions, and maintain the integrity of the defense acquisition system.”

Given that the work of the DoD is unique, creating a type of monopsony, it is up to leadership within the Department to create the conditions and mandates necessary to recreate in microcosm the positive effects of market forces. The DoD also has a very special, vital mission in defending the nation.

When an individual business cobbles together its mission statement, it is that mission that defines the elements of data collection essential to making decisions. In today’s world, best commercial sector practice is to establish a Master Data Management (MDM) approach in defining data requirements and practices. In the case of DoD, a similar approach would be beneficial. Concurrent with the period of the 809 Panel’s efforts, RAND Corporation delivered a paper in 2017 (link in the previous sentence) that made recommendations related to data governance that are consistent with the 809 Panel’s recommendations. We will be discussing these specific recommendations in our presentation.

Meeting the mission and readiness are the key components to data governance in DoD. Absent such guidance, specialized software solution providers, in particular, will engage in what is called “rent-seeking” behavior. This is an economic term that means that an “entity (that) seeks to gain added wealth without any reciprocal contribution of productivity.”

No doubt, given the marketing of software solution providers, it is hard for decision-makers to tell what constitutes an open data system. The motivation of a software solution provider is to make its product as “sticky” as possible, and it does that by enticing a customer to commit to proprietary definitions, structures, and database schemas. Usually there are “black-boxed” portions of the software that make traceability impossible and complicate the issues of who exactly owns the data and of the customer’s ability to optimize and utilize it as the mission dictates.

Furthermore, data visualization components like dashboards are ubiquitous in the market. A cursory stroll through a tradeshow looks like a dashboard smorgasbord combined with different practical concepts of what constitutes “open” and “integration”.

As one DoD professional recently told me, it is hard to tell the software systems apart. To do so it is necessary to understand what underlies the software. Thus, a proposed honest-broker definition of an open data system is useful and the place to start, given that this is not a notional concept: such systems have been successfully established.

The Definition of Open Data Systems

Practical experience in implementing open data systems toward the goal of optimizing essential information from our planning, acquisition, financial, and systems engineering systems informs the following proposed definition, which is based on commercial best practice. This proposal is also based on the principle that the customer owns the data.

  1. An open data system is one based on non-proprietary neutral schemas that allow for the effective capture of all essential elements from third-party proprietary and customized software for reporting and integration necessary to support both internal and external stakeholders.
  2. An open data system allows for complete traceability and transparency from the underlying database structure of the third-party software data, through the process of data capture, transformation, and delivery of data in the neutral schema.
  3. An open data system targets the loading of the underlying source data for analysis and use into a neutral database structure that replicates the structure of the neutral schema. This allows for 100% traceability and audit of data elements received through the neutral schema, and ensures that the receiving organization owns the data.

Under this definition, data from its origination to its destination is more easily validated and traced, ensuring quality and fidelity, and establishing confidence in its value. Given these characteristics, integration of data from disparate domains becomes possible. The tracking of conflicting indicators is mitigated, since open system data allows for its effective integration without the bias of proprietary coding or restrictions on data use. Finally, both government and industry will not only establish ownership of their data–a routine principle in commercial business–but also be free to utilize new technologies that optimize the use of that data.
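A minimal sketch of the third point—loading neutral-schema records into a database structure that mirrors the schema while preserving traceability back to the source submission—might look like the following. The table layout, field names, and the use of a content hash for lineage are assumptions for illustration, not a prescribed design.

```python
# Illustrative sketch: load neutral-schema records into a table that mirrors the
# schema, recording source lineage (file name and content hash) for traceability.
import hashlib, json, sqlite3

records = [{"wbs_element": "1.2.3", "earned_value_cum": 1350000.0}]
source_file = "supplier_submission_2021_05.json"   # hypothetical submission name

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE performance (
    wbs_element TEXT, earned_value_cum REAL,
    source_file TEXT, source_hash TEXT)""")

for rec in records:
    payload = json.dumps(rec, sort_keys=True).encode()
    source_hash = hashlib.sha256(payload).hexdigest()   # ties the row to its source content
    conn.execute("INSERT INTO performance VALUES (?, ?, ?, ?)",
                 (rec["wbs_element"], rec["earned_value_cum"], source_file, source_hash))

for row in conn.execute("SELECT * FROM performance"):
    print(row)   # each element remains auditable back to the submission it came from
```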

In closing, Gahan Wilson, a cartoonist whose work appeared in National Lampoon, The New Yorker, Playboy, and other magazines, recently passed away.

When thinking of the barriers to the effective use of data, I came across this cartoon in The New Yorker:

Open Data is the key to effective integration and reporting–to the optimal use of information. Once mandated and achieved, our defense and business systems will be better informed and be able to test and verify assumed knowledge, address risk, and eliminate dogmatic and erroneous conclusions. Open Data is the driver of organizational transformation keyed to the effective understanding and use of information, and all that entails. Finally, Open Data is necessary to the mission and planning systems of both industry and the U.S. Department of Defense.

Sledgehammer: Pisano Talks!

My blogging hiatus is coming to an end as I take a sledgehammer to the writer’s block wall.

I’ve traveled far and wide over the last six months to various venues across the country and have collected a number of new and interesting perspectives on the issues of data transformation, integrated project management, and business analytics and visualization. As a result, I have developed some very strong opinions regarding the trends that work and those that don’t regarding these topics and will be sharing these perspectives (with the appropriate supporting documentation per usual) in following posts.

To get things started this post will be relatively brief.

First, I will be speaking along with co-presenter John Collins, who is a Senior Acquisition Specialist at the Navy Engineering & Logistics Office, at the Integrated Program Management Workshop at the Hyatt Regency in beautiful downtown Baltimore’s Inner Harbor 10-12 December. So come on down! (or over) and give us a listen.

The topic is “Unlocking Data to Improve National Defense Systems”. Today anyone can put together pretty visualizations of data from Excel spreadsheets and other sources–and some have made quite a bit of money doing so. But accessing the right data at the right level of detail, transforming it so that its information content can be exploited, and contextualizing it properly through integration will provide the most value to organizations.

Furthermore, our presentation will make a linkage to what data is necessary to national defense systems in constructing the necessary artifacts to support the Department of Defense’s Planning, Programming, Budgeting and Execution (PPBE) process and what eventually becomes the Future Years Defense Program (FYDP).

Traditionally information capture and reporting has been framed as a question of oversight, reporting, and regulation related to contract management, capital investment cost control, and DoD R&D and acquisition program management. But organizations that fail to leverage the new powerful technologies that double processing and data storage capability every 18 months, allowing for both the depth and breadth of data to expand exponentially, are setting themselves up to fail. In national defense, this is a condition that cannot be allowed to occur.

If DoD doesn’t collect this information—which we know from the reports of cybersecurity agencies that other state actors are collecting—we will be at a serious strategic disadvantage. We are in a new frontier of knowledge discovery in data. Our analysts and program managers think they know what they need to be viewing, but adding new perspectives through integration will, no doubt, yield new indicators and predictive analytics that overtake current practice. Furthermore, that information can now be processed and contribute more timely and better intelligence to the process of strategic and operational planning.

The presentation will be somewhat wonky and directed at policymakers and decisionmakers in both government and industry. But anyone can play, and that is the cool aspect of our community. The presentation will be non-commercial, despite my day job–a line I haven’t crossed up to this point in this blog, but in this latter case will be changing to some extent.

Back in early 2018 I became the sole proprietor of SNA Software LLC, an industry technology leader in data transformation–particularly in capturing datasets traditionally referred to as “Big Data”–and in hybrid point solutions built on an open business intelligence framework. Our approach leverages the advantages of COTS (delivering the 80% solution out of the box) with open business intelligence that allows for rapid configuration to adapt the solution to an organization’s needs and culture. Combined with COTS data capture and transformation software–the key to transforming data into information and then combining it to provide intelligence at the right time and to the right place–this approach significantly reduces the latency in access to trusted intelligence.

Along these lines, I have developed some very specific opinions about how to achieve this transformation, have put those concepts into practice through SNA, and have delivered those solutions to our customers. The result has been to reduce both the effort and the time needed to capture large, pre-processed datasets, and to cut direct labor and the duration to information delivery by more than 99%. The path to get there is not to apply an army of data scientists and data analysts who treat all data as if it were flat and who reinvent the wheel–only to deliver a suboptimized solution sometime in the future after unnecessarily expending time and resources. That is a devolution to the same labor-intensive business intelligence approaches we used back in the 1980s and 1990s. The answer is not to throw labor at data that already has its meaning embedded in its information content. The answer is to apply smarts through technology, and that’s what we do.
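As a hedged illustration of what “meaning embedded in the information content” looks like in practice, consider a structured submission whose keys already carry their own semantics. The format and field names below are invented for the example and are not SNA’s, nor any standard’s, actual schema.

```python
import json

# Hypothetical structured submission: the keys already carry meaning, so no
# analyst has to rediscover it by hand. The schema is invented for illustration.
submission = json.loads("""
{
  "program": "Example Program",
  "period": "2019-11",
  "control_accounts": [
    {"id": "CA-100", "bcws": 120.0, "bcwp": 110.0, "acwp": 125.0},
    {"id": "CA-200", "bcws": 80.0,  "bcwp": 82.0,  "acwp": 79.0}
  ]
}
""")

# Because meaning is embedded in the structure, transformation is a mechanical
# mapping into normalized records, not a labor-intensive re-interpretation of
# flat rows.
records = [
    {
        "program": submission["program"],
        "period": submission["period"],
        "account": ca["id"],
        "cost_variance": ca["bcwp"] - ca["acwp"],
    }
    for ca in submission["control_accounts"]
]

for record in records:
    print(record)
```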

Further along these lines, if you are using hard-coded point solutions (also called purpose-built software) knitted together with best-of-breed tools, chances are that you are poorly positioned to exploit new technology and will be obsolete within the next five years, if not sooner. The model of selling COTS solutions and walking away, except for traditional maintenance and support, is dying. The new paradigm is to be part of the solution, and that requires domain knowledge that translates into technology delivery.

More on these points in future posts, but I’ve placed my stake in the ground and we’ll see how these positions hold up to critique and comment.

Finally, I recently became aware of an extremely informative and cutting-edge website that includes podcasts from thought leaders in the area of integrated program management. It is entitled InnovateIPM and is operated and moderated by Rob Williams. He is a domain expert in project cost development, with over 20 years of experience in the oil, gas, and petrochemical industries. Rob has served in a variety of roles throughout his career and now focuses on cost estimating and Front-End Loading quality assurance. His current role is advanced project cost estimator at Marathon Petroleum’s Galveston Bay Refinery in Texas City.

Rob was also nice enough to continue a discussion we started at a project controls symposium and interviewed me for a podcast. I’ll share the details once the episode is posted.

Sunday Contemplation — Finding Wisdom: The Epimenides Paradox

The liar’s paradox, as it is often called, is a fitting subject for our time. For those not familiar with the paradox, it was introduced to me by the historian Gordon Prange when I was a young Navy enlisted man attending the University of Maryland. He introduced the paradox to me as a comedic rejoinder to the charge of a certain bias in history that he considered to be without merit. He stated it this way: “I heard from a Cretan that all Cretans are liars.”

The origin of this form of the liar’s paradox has many roots. It is discussed as a philosophical conundrum by Aristotle in ancient Greece as well as by Cicero in Rome. A version of it appears in the Christian New Testament and it was a source of study in Europe during the Middle Ages.

When I have introduced the paradox in a social setting and asked the uninitiated to resolve it, a long conversation usually ensues. The usual approach is to treat it as a bi-polar proposition, accepting certain assumptions from the construction of the sentence: if the Cretan is lying, then all Cretans tell the truth, which cannot be the case; but if the Cretan is telling the truth, then he is lying; yet he could not be telling the truth, since all Cretans lie…and the circular contradiction goes on ad infinitum.

But there is a solution to the paradox and what it requires is thinking about the Cretan and breaking free of bi-polar thinking, which we often call, colloquially, “thinking in black and white.”

The solution.

The assumption in the paradox is that the Cretan in question can speak for all Cretans. This assumption could be false. If so, not all Cretans are liars, and the Cretan in question is making a false statement. Furthermore, the Cretan making the assertion is not necessarily a liar–the individual could simply be mistaken. We can test the “truthiness” of what the Cretan has said by testing other Cretans on a number of topics and seeing if they are simply ignorant, uninformed, or truly liars on all things.

Furthermore, there is a difference between a lie and a not-lie. Baked into our thinking by absolutist philosophies, ideologies, and religions is black and white thinking that clouds our judgment. A lie must have intent and be directed to misinform, misdirect, or cloud a discussion. There are all kinds of lies and many forms of not-lies. Thus, the opposite of “all Cretans are liars” is not that “all Cretans are honest” but simply that “at least some Cretans are not liars.”
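In the notation of elementary logic this is just the negation of a universal statement. A minimal sketch, where C(x) means “x is a Cretan” and L(x) means “x is a liar”:

```latex
\[
\neg \forall x\,(C(x) \rightarrow L(x)) \;\equiv\; \exists x\,(C(x) \wedge \neg L(x))
\]
```

Denying that all Cretans are liars commits us only to the existence of at least one Cretan who is not a liar; it says nothing about the honesty of the rest.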

Only if we take the original premise as necessarily true is this truly a paradox, and it is not. If we show that Cretans do not all lie all of the time, then we are not required to reach the high bar that “all Cretans are honest”; we need only conclude that the Cretan making the assertion has made a false statement or is, instead, the liar.

In sum, the way to avoid falling into the thinking of the faulty or dishonest Cretan is not to accept the premises as they have been presented to us, but to use our ability to reason about those premises and to look at the world as it is as a “reality check.” The paradox is not truly a paradox, and the assertion is false.

(Note that I have explained this resolution without going into the philosophical details of the original syllogism, the mathematics, and an inquiry on the detailed assumptions. For a fuller discussion of liar’s paradoxes I recommend this link.)

Why Care About the Paradox?

We see versions of the paradox used all of the time. This includes the use of ad hominem attacks on people, that is, charges of guilt by association with an idea, a place, an ethnic group, or another person. “Person X is a liar (or his/her actions are suspect or cannot be trusted) because they adhere to Y idea, group, or place.” Oftentimes these attacks are joined with insulting or demeaning catchphrases and (especially racial or ethnic) slurs.

What we attribute to partisanship or prejudice or bias often uses this underlying type of thinking. It is a simplification born of ignorance and all simplifications are a form of evil in the world. This assertion was best articulated by Albert Camus in his book The Plague.

“The evil that is in the world always comes of ignorance, and good intentions may do as much harm as malevolence, if they lack understanding. On the whole, men are more good than bad; that, however, isn’t the real point. But they are more or less ignorant, and it is this that we call vice or virtue; the most incorrigible vice being that of an ignorance that fancies it knows everything and therefore claims for itself the right to kill. The soul of the murderer is blind; and there can be no true goodness nor true love without the utmost clear-sightedness.”

Our own times are not much different in their challenges from those Camus faced during the rise of fascism in Europe, for fascism’s offspring have given rise to a new generation that has insinuated itself into people’s minds.

Aside from my expertise in technology and the military arts and sciences, the bulk of my formal academic education is as an historian and political scientist. The world is currently in the grip of a plague that eschews education and Camus’ clear-sightedness in favor of materialism, ethnic hatred, nativism, anti-intellectualism, and ideological propaganda.

History is replete with similar examples, both large and small, of this type of thinking, an aspect of human character wired into our brains that requires eternal vigilance to guard against. Such examples as the Spanish Inquisition, the Reformation and Counter-Reformation, the French Revolution, the defense of slavery in the American Civil War and the subsequent terror of Jim Crow, 18th and 19th century imperialism, apartheid after the Boer War, the disaster of the First World War, the Russian Revolutions, the history of anti-Jewish pogroms and the Holocaust, the rise of Fascism and Nazism, Stalinism, McCarthyism in the United States, Mao and China’s Cultural Revolution, Castro’s Cuba, Pinochet’s Chile, the Pathet Lao, and the current violence and intolerance borne of religious fundamentalism–and the list can go on–teach us that our only salvation and survival as a species lies in our ability to overcome ignorance and self-delusion.

We come upon more pedestrian examples of this thinking all of the time. As Joseph Conrad wrote in Heart of Darkness, “The mind of man is capable of anything—because everything is in it, all the past as well as all the future.”

We must perform this vigilance first on ourselves–and it is a painful process because it shatters the self-image that is necessary for us to continue from day to day: that narrative thread that connects the events of our existence and that guides our actions as best and in as limited ways as they can be guided, without falling into the abyss of nihilism. Only knowledge, and the attendant realization of the necessary components of human love, acceptance, empathy, sympathy, and community–that is, understanding–the essential connections that make us human–can overcome the darkness that constantly threatens to envelop us. But there is something more.

The United States was born on the premise that the practical experiences of history and its excesses could be guarded against, and that such “checks and balances” would be woven, first, into the thread of its structure, and then, into the thinking of its people. This is the ideal, and it need not be said that, given that it was a construction of flawed men, despite their best efforts at education and enlightenment compared to the broad ignorance of their time, these ideals for many continued to be only that. This ideal is known as the democratic ideal.

Semantics Matter

That ideal is under attack as well. We often hear the argument against it dressed up in academic clothing as being “only semantics” regarding the difference between a republic and a democracy. But as I have illustrated regarding the Epimenides Paradox, semantics matter.

For the democratic ideal is about self-government, which was a revolutionary concept in the 18th century and remains one today, which is why it has been and continues to be under attack by authoritarians, oligarchs, dictators, and factions pushing their version of the truth as they define it. But it goes further than a mechanical process of government.

The best articulation of democracy in its American incarnation was probably written by the philosopher and educator John Dewey in his essay On Democracy. Democracy, says Dewey, is more than a special political form: it is a way of life, social and individual, that allows for the participation of every mature human being in forming the values that regulate society toward the twin goals of ensuring the general social welfare and the full development of human beings as individuals.

While what we call intelligence may be distributed in unequal amounts, it is the democratic faith that it is sufficiently general so that each individual has something to contribute, whose value can be assessed only as it enters into the final pooled intelligence constituted by the contributions of all. Every authoritarian scheme, on the contrary, assumes that its value may be assessed by some prior principle, if not of family and birth or race and color or possession of material wealth, then by the position and rank a person occupies in the existing social scheme. The democratic faith in equality is the faith that each individual shall have the chance and opportunity to contribute whatever he is capable of contributing and that the value of his contribution be decided by its place and function in the organized total of similar contributions, not on the basis of prior status of any kind whatever.

In such a society there is no place for “I heard from a Cretan that all Cretans lie.” For democracy to work, however, requires not only vigilance but a dedication to education that is further dedicated to finding knowledge, however inconvenient or unpopular that knowledge may turn out to be. The danger has always been in lying to ourselves, and allowing ourselves to be seduced by good liars.

Note: This post has been updated for grammar and for purposes of clarity from the original.

You Know I’m No Good: 2016 Election Polls and Predictive Analytics

While the excitement and emotions of this past election work themselves out in the populace at large, as a writer and contributor to the use of predictive analytics, I find the discussion about “where the polls went wrong” to be of most interest.  This is an important discussion, because the most reliable polling organizations–those that have proven themselves out by being right consistently on a whole host of issues since most of the world moved to digitization and the Internet of Things in their daily lives–seemed to be dead wrong in certain of their predictions.  I say certain because the polls were not completely wrong.

For partisans who point to Brexit and polling in the U.K., I hasten to add that this is comparing apples to oranges.  The major U.S. polling organizations that use aggregation and Bayesian modeling did not poll Brexit.  In fact, there was one reliable U.K. polling organization that did note two factors: one was that the trend in the final days was toward Brexit, and the other was that the final result depended on turnout, where greater turnout favored the “stay” vote.

But aside from these general details, this issue is of interest in project management because, unlike national and state polling, where there are sufficient numbers to support significance, at the micro-microeconomic level of project management we deal with very small datasets that expand the range of probable results.  This is not an insignificant point; it has been made time and again over the years, particularly regarding single-point estimates that use limited time-phased data absent a general model providing insight into the likeliest results.  This last point is important.
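To illustrate the point about small datasets, here is a minimal sketch in Python. The observations are randomly generated stand-ins rather than real project data, and the t critical values are the standard two-sided 95% values for the respective degrees of freedom.

```python
import math
from random import gauss, seed
from statistics import stdev

seed(1)

def ci_halfwidth(sample, t_crit):
    # Half-width of an approximate 95% confidence interval for the mean.
    return t_crit * stdev(sample) / math.sqrt(len(sample))

# Invented observations with the same underlying spread (standard deviation 10).
small = [gauss(0, 10) for _ in range(6)]    # a project with six periods of data
large = [gauss(0, 10) for _ in range(60)]   # a portfolio with sixty observations

# Two-sided 95% t critical values for 5 and 59 degrees of freedom.
print("n=6  interval half-width: +/-", round(ci_halfwidth(small, 2.571), 1))
print("n=60 interval half-width: +/-", round(ci_halfwidth(large, 2.001), 1))
```

With the same underlying variability, the small sample produces an interval several times wider, which is exactly the expanded range of probable results we live with at the project level.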

So let’s look at the national polls on the eve of the election according to RealClear.  IBD/TIPP Tracking had it Trump +2 at +/-3.1% in a four way race.  LA Times/USC had it Trump +3 at the 95% confidence interval, which essentially means tied.  Bloomberg had Clinton +3, CBS had Clinton +4, Fox had Clinton +4, Reuters/Ipsos had Clinton +3, ABC/WashPost, Monmouth, Economist/YouGov, Rasmussen, and NBC/SM had Clinton +2 to +6.  The margin for error for almost all of these polls varies from +/-3% to +/-4%.
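For reference, those quoted margins follow from standard binomial sampling arithmetic, assuming simple random sampling, which real polls only approximate after weighting. For a proportion near 50 percent and a sample of about 1,000 respondents:

```latex
\[
\text{MOE}_{95\%} \approx 1.96\sqrt{\frac{p(1-p)}{n}}
= 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 0.031
\]
```

That is roughly plus or minus 3.1 points, which is why a two- or three-point lead in any single poll is effectively a statistical tie.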

As of this writing Clinton sits at about +1.8% nationally; the votes are still coming in and continue to confirm her popular vote lead, which currently stands at about 300,000 votes.  Of the polls cited, Rasmussen was the closest to the final result.  Virtually every other poll, however, except IBD/TIPP, was within the margin of error.

The projections that were off in predicting the election were those that aggregated national and state polls, adjusted polling based on indirect indicators, and/or then projected the chances of winning based on the probable electoral vote totals.  This is where things went wrong.

Among the most popular of these sites is Nate Silver’s FiveThirtyEight blog.  Silver established his bona fides in 2008 by picking winners with incredible accuracy, particularly at the state level, and subsequently in his work at the New York Times, which continued to prove the efficacy of data in predictive analytics in everything from elections to sports.  Since that time his significant reputation has only grown.

What Silver does is determine the probability of an electoral outcome by using poll results that are transparent in their methodologies and that have a high level of confidence.  Silver’s was the most conservative of these types of polling organizations.  On the eve of the election Silver gave Clinton a 71% chance of winning the presidency. The other organizations that use poll aggregation, poll normalization, or other adjusting indicators (such as betting odds, financial market indicators, and political science indicators) include the New York Times Upshot (Clinton 85%), HuffPost (Clinton 98%), PredictWise (Clinton 89%), Princeton (Clinton >99%), DailyKos (Clinton 92%), Cook (Lean Clinton), Roth (Lean Clinton), and Sabato (Lean Clinton).

To understand what probability means in this context, keep in mind that these models combined bottom-up state polling, used to track the electoral college, with national popular vote polling.  But also keep in mind that, as Nate Silver wrote over the course of the election, even a 17% chance of winning “is the same as your chances of losing a ‘game’ of Russian roulette”–about one chamber in six.  Few of us would take that bet, particularly since the result of losing the game is finality.

Still, none of the other methods using probability got it right; only FiveThirtyEight left enough room for drawing the wrong chamber.  In fairness, the Cook, Rothenberg, and Sabato projections also left enough room to see a Trump win if the state dominoes fell right.

Where the models failed was in the states of Florida, North Carolina, Pennsylvania, Michigan, and Wisconsin.  In particular, even with Florida (result Trump +1.3%) and North Carolina (result Trump +3.8%), Trump would not have won if Pennsylvania (result Trump +1.2%), Michigan (result Trump +0.3%), and Wisconsin (result Trump +1.0%)–supposed Clinton firewall states–had not been breached.  So what happened?

Among the possible factors are the effect of FBI Director Comey’s public intervention, which came too close to the election to register fully in the polling; ineffective polling methods in rural areas (garbage in, garbage out); poor state polling quality; voter suppression, purging, and restrictions (among the battleground states this includes Florida, North Carolina, Wisconsin, Ohio, and Iowa); voter turnout and enthusiasm (aside from the factors of voter suppression); and the inability to peg which way the high number of undecided voters would break at the last minute.

In hindsight, the national polls were good predictors.  The sufficiency of the data in drawing significance, and the high level of confidence in their predictive power, are borne out by the final national vote totals.

I think that where the polling failed in the projections of the electoral college was in the inability to take into account non-statistical factors and selection bias, and in state poll models that probably did not accurately reflect the electorate in those states given the lessons from the primaries.  Along these lines, I believe that if pollsters look at the demographics in the respective primaries, they will find that both voter enthusiasm and composition provide the corrective to their projections.  Given these factors, the aggregators and probabilistic models should all have called the race too close to call.  I think both Monte Carlo and Bayesian methods in simulations will bear this out.
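Here is a minimal Monte Carlo sketch of that argument, assuming invented leads and error sizes rather than any forecaster’s actual model: when state polling errors are treated as independent, the underdog’s chance of flipping enough states looks far smaller than when the same errors are allowed to share a correlated component.

```python
import random

random.seed(42)

# Invented polling leads (favorite minus challenger, in points) in five
# hypothetical "firewall" states; the challenger must flip at least three.
leads = [3.0, 2.5, 2.0, 4.0, 3.5]
state_sd = 3.0     # assumed independent per-state polling error
shared_sd = 2.5    # assumed error common to all states (e.g., a turnout miss)
trials = 50_000

def challenger_win_prob(correlated: bool) -> float:
    wins = 0
    for _ in range(trials):
        shared = random.gauss(0, shared_sd) if correlated else 0.0
        flips = sum(
            1 for lead in leads
            if lead - shared - random.gauss(0, state_sd) < 0
        )
        if flips >= 3:
            wins += 1
    return wins / trials

# Treating state errors as independent understates the chance of an upset.
print("Independent errors :", round(challenger_win_prob(False), 3))
print("Correlated errors  :", round(challenger_win_prob(True), 3))
```

The specific numbers are arbitrary; the point is the gap between the two estimates, which is the structural reason the more cautious aggregators left room for an upset.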

For example, as one who also holds a political science degree, I will put on that hat.  It is a basic tenet that negative campaigns depress voter participation and cause voters to select the lesser of two evils (or lesser of two weevils).  Voter participation was down significantly due to an unprecedentedly negative campaign.  When this occurs, the most motivated base will usually determine the winner in an election.  This is why midterm elections are so volatile, particularly after a presidential win that causes a rebound of the opposition party.  Whether this trend continues with the reintroduction of gerrymandering is yet to be seen.

What all this points to from a data analytics perspective is that one must have a model to explain what is happening.  Statistics by themselves, while correct a good bit of the time, will cause one to be overconfident of a result based solely on the numbers, and simulations give a false impression of solidity, particularly in a volatile environment.  This is known as reification, and it is a fallacious way of thinking.  Combined with selection bias and the absence of a reasonable narrative model–one that introduces the social interactions necessary to understand the behavior of complex adaptive systems–it will often produce invalid results.

The Revolution Will Not Be Televised — The Sustainability Manifesto for Projects

While doing stuff and living life (which seems to take me away from writing) there were a good many interesting things written on project management.  The very insightful Dave Gordon at his blog, The Practicing IT Project Manager, provides a useful weekly list of the latest contributions to the literature that are of note.  If you haven’t checked it out please do so–I recommend it highly.

While I was away Dave posted an interesting link on the concept of sustainability in project management.  Along those lines three PM professionals have proposed a Sustainability Manifesto for Projects.  As Dave points out in his own post on the topic, it rests on three basic principles:

  • Benefits realization over metrics limited to time, scope, and cost
  • Value for many over value of money
  • The long-term impact of our projects over their immediate results

These are worthy goals and no one needs to have me rain on their parade.  I would like to see these ethical principles, which is what they really are, incorporated into how we all conduct ourselves in business.  But then there is reality–the “is” over the “ought.”

For example, Dave and I have had some correspondence regarding the nature of the marketplace in which we operate through this blog.  Some time ago I wrote a series of posts here, here, and here providing an analysis of the markets in which we operate both in macroeconomic and microeconomic terms.

This came in response to one of my colleagues making the counterfactual assertion that we operate in a “free market” based on the concept of “private enterprise.”  Apparently, such just-so stories are lies we have to tell ourselves to make the hypocrisy of daily life bearable.  But, to bring the point home, in talking about the concept of sustainability, what concrete measures will the authors of the manifesto bring to the table to counter the financialization of American business that has occurred over the past 35 years?

For example, the news lately has been replete with stories of companies moving plants from the United States to Mexico.  This despite rising and record corporate profits during a period of stagnating median working class incomes.  Free trade and globalization have been cited as the cause, but this involves more hand waving and the invocation of mantras than analysis.  There have also been the predictable invocations of the Ayn Randian cult and the pseudoscience* of Social Darwinism.  Those on the opposite side of the debate characterize things as a morality play, with the public good versus greed being the main issue.  All of these explanations miss their mark, some more than others.

An article setting aside a few myths was recently published by Jonathan Rothwell at Brookings, which came to me via Mark Thoma’s blog, in the article, “Make elites compete: Why the 1% earn so much and what to do about it”.  Rothwell looks at the relative gains of the market over the last 40 years and finds that corporate profits, while doing well, have not been the driver of inequality that Robert Reich and other economists would have it be.  In looking at another myth that has been promulgated by Greg Mankiw, he finds that the rewards of one’s labors are not related to any special intelligence or skill.  On the contrary, one’s entry into the 1% is actually related to what industry one chooses to enter, regardless of all other factors.  This disparity is known as a “pay premium”.  As expected, petroleum and coal products, financial instruments, financial institutions, and lawyers are at the top of the pay premium.  What is not, against all expectations of popular culture and popular economic writing, is the IT industry–hardware, software, etc.  Though they are the poster children of new technology, Bill Gates, Mark Zuckerberg, and others are the exception to the rule in an industry that is marked by a 90% failure rate.  Our most educated and talented people–those in science, engineering, the arts, and academia–are poorly paid, with negative pay premiums associated with their vocations.

The financialization of the economy is not a new or unnoticed phenomenon.  Kevin Phillips, in Wealth and Democracy, which was written in 2003, noted this trend.  There have been others.  What has not happened as a result is a national discussion on what to do about it, particularly in defining the term “sustainability”.

For those of us who have worked in the acquisition community, the practical impact of financialization and de-industrialization has made logistics challenging, to say the least.  As a young contract negotiator and Navy Contracting Officer, I was challenged to support the fleet whenever any kind of fabrication or production was involved, especially for non-stocked machined spares of any significant complexity or size.  Oftentimes my search would find that the company that manufactured the items was out of business, its pieces sold off during Chapter 11, and most of the production work for those items that was still available being done seasonally out of country.  My “out” at the time–during the height of the Cold War–was to take the technical specs, which were paid for and therefore owned by the government, to one of the Navy industrial activities for fabrication and production.  The skillset for such work was still fairly widespread, supported by the quality control provided by a fairly well-unionized and trade-based workforce–especially among machinists and other skilled workers.

Given the new and unique ways judges and lawyers have applied privatized IP law to items financed by the public, such opportunities to support our public institutions and infrastructure, as I once could, have been largely closed out.  Furthermore, the places to send such work, where it is possible at all, have become vanishingly few.  Perhaps digital printing will be the savior for manufacturing that it is touted to be.  What it will not do is stitch back the social fabric that has been ripped apart in communities hollowed out by the loss of their economic base, which, when replaced, comes with lowered expectations and quality of life–and often shortened lives.

In the end, though, such “fixes” benefit a shrinking few individuals at the expense of the democratic enterprise.  Capitalism did not exist when the country was formed, despite the assertions of polemicists who link the economic system to our democratic government.  Smith did not write his pre-modern scientific tract until 1776; much of what it meant lay years off into the future, and its relevance, given what we’ve learned over the last 240 years about human nature and our world, is up for debate.  What was not part of such a discussion back then–and would not have been understood–was the concept of sustainability.  Sustainability in the study of healthy ecosystems usually involves the maintenance of great diversity and the flourishing of life that denotes health.  This is science.  Economics, despite Keynes and others, is still largely rooted in 18th and 19th century pseudoscience.

I know of no fix or commitment to a sustainability manifesto that includes global, environmental, and social sustainability that makes this possible short of a major intellectual, social or political movement willing to make a long-term commitment to incremental, achievable goals toward that ultimate end.  Otherwise it’s just the mental equivalent to camping out in Zuccotti Park.  The anger we note around us during this election year of 2016 (our year of discontent) is a natural human reaction to the end of an idea, which has outlived its explanatory power and, therefore, its usefulness.  Which way shall we lurch?

The Sustainability Manifesto for Projects, then, is a modest proposal.  It may also simply be a sign of the times, albeit a rational one.  As such, it leaves open a lot of questions, and most of these questions cannot be addressed or determined by the people at whom it is targeted: project managers, who are usually simply employees of a larger enterprise.  People behave as they are treated–according to the incentives and disincentives presented to them, which are oftentimes not completely apparent on the conscious level.  Thus, I’m not sure this manifesto hits its mark, or even aims at the right one.

*This term is often misunderstood by non-scientists.  Pseudoscience means non-science, just as alternative medicine means non-medicine.  If any of the various hypotheses of pseudoscience are found true, given proper vetting and methodology, that proposition would simply be called science.  Just as alternative methods of treatment, if found effective and consistent, given proper controls, would simply be called medicine.

Sunday Contemplation — Race Matters — Scalia’s Shameful Invocation of Racial Inferiority in 2015

To start out the year 2016 I’ve decided to write about something that has stuck in my craw since the issue first came about.  I find it galling, really, to have to write about something of this sort in the new year of 2016, but it is there nonetheless and I cannot in good conscience not write about it.

The topic at hand is the questioning by Supreme Court Justice Antonin Scalia during oral arguments in the affirmative action case, Fisher vs. University of Texas at Austin.  His comments are well documented but they are worth recounting, only because this line of thinking is shared by a significant proportion of the population.  Below is the full exchange begun by Gregory Garre, the attorney for UT.

Garre:  “If this Court rules that the University of Texas can’t consider race, or if it rules that universities that consider race have to die a death of a thousand cuts for doing so, we know exactly what’s going to happen…Experience tells us that.” (When the use of race has been dropped elsewhere) “diversity plummeted.”

Scalia:  “There are those who contend that it does not benefit African­-Americans to — ­ to get them into the University of Texas where they do not do well, as opposed to having them go to a less­-advanced school, a less ­—­ a slower­-track school where they do well. One of ­­— one of the briefs pointed out that ­­— that most of the — ­­most of the black scientists in this country don’t come from schools like the University of Texas. They come from lesser schools where they do not feel that they’re ­­— that they’re being pushed ahead in ­—­ in classes that are too ­­— too fast for them. I’m just not impressed by the fact that —­­ that the University of Texas may have fewer (black students). Maybe it ought to have fewer.”

Garre:  “This court heard and rejected that argument, with respect, Justice Scalia….frankly, I don’t think the solution to the problems with student body diversity can be to set up a system in which not only are minorities going to separate schools, they’re going to inferior schools.”

I want to get back to Scalia’s comments, but first it is useful to go over the facts of this case, which seem barely to warrant a review by the Supreme Court.  UT admits the overwhelming majority of its students based on the Top Ten Program; that is, if you graduated from a Texas high school within the top 10% of your class, you were admitted if you applied.  In the year that Fisher applied, 92% of the entering class gained admission on that basis.  For the other 8% of seats that were open, as Vox explained, other factors were taken into consideration as part of a “holistic” process.  Two scores were given from this process: one for essays, leadership activities, and background, which included race; and the other based on grades and test scores.  The overwhelming majority of students accepted for admission under this process were white.  Given that the inclusion of race as a factor was not a discriminatory quota, there is little here except to assert, in general, that any consideration of race is unconstitutional under the Equal Protection Clause of the 14th Amendment.

The majority of legal analysis of the Fisher case itself has centered on Grutter v. Bollinger, mostly because it is the Supreme Court’s latest statement on the issue of affirmative action.  In that case, the Court ruled that the University of Michigan Law School did not discriminate when taking race into account among a number of other factors in order to ensure a diverse student body, especially in including previously disenfranchised and excluded minorities, as long as there was a compelling interest in doing so and the practice survived “strict scrutiny.”

Given that the Court attempts to maintain continuity and precedent (known by the Latin term stare decisis), the wellspring for this decision was really the 1978 case of University of California Regents v. Bakke.  There are two competing constitutional interests at play according to the majority opinion written by Justice Lewis Powell.  One is to ensure that the Equal Protection Clause of the 14th Amendment applies not only to protect the interests of African-Americans, without “dialing back the clock to 1868” in a United States that no longer resembles the one in which the amendment was passed, but to all persons.  The other is the academic freedom afforded schools and colleges under the First Amendment, known as the “four essential freedoms.”

Setting aside that Justice Powell was a good constitutional judge but a poor historian, this second interest may come as a surprise to those not familiar with these competing interests.  This is not surprising given the partisan–and often racist–arguments against affirmative action.  Powell invokes two previous cases in outlining the four essential freedoms.  He writes:

“Mr. Justice Frankfurter summarized the “four essential freedoms” that constitute academic freedom:

“`It is the business of a university to provide that atmosphere which is most conducive to speculation, experiment and creation. It is an atmosphere in which there prevail “the four essential freedoms” of a university—to determine for itself on academic grounds who may teach, what may be taught, how it shall be taught, and who may be admitted to study.'”  Sweezy v. New Hampshire, 354 U. S. 234, 263 (1957) (concurring in result).

Our national commitment to the safeguarding of these freedoms within university communities was emphasized in Keyishian v. Board of Regents, 385 U.S. 589, 603 (1967):

“Our Nation is deeply committed to safeguarding academic freedom which is of transcendent value to all of us and not merely to the teachers concerned. That freedom is therefore a special concern of the First Amendment . . . . The Nation’s future depends upon leaders trained through wide exposure to that robust exchange of ideas which discovers truth `out of a multitude of tongues, [rather] than through any kind of authoritative selection.’  United States v. Associated Press, 52 F. Supp. 362, 372.”

What Powell indicated was that, given these conflicting rights (no right being absolute), when a university takes racial factors into account in admissions there must be both a substantial state interest in ensuring diversity and strict scrutiny applied to such racial or ethnic factoring.  The first time around, when the Supreme Court remanded the Fisher case back to the appellate court in 2013, the majority indicated that while UT’s approach was not a quota–and hence not an outright violation of the Equal Protection Clause–the lower court had not applied strict scrutiny in determining whether UT had established a substantial state interest.  Or, at least, that’s what it seemed, given that the logic handed down by the Roberts Court is oftentimes sophomoric.

It is important to note that the UT Top Ten Program has increased diversity.  The reason is that the top ten percent, regardless of school, qualify for the program.  This effect is rooted in discrimination in housing patterns that extends back to the late 19th century when, first, Jim Crow laws were passed in the southern states (such as Texas) to essentially re-enslave and disenfranchise the freedmen.  Many people will be surprised to know that these laws continued in force well into the 1960s, the last case to overturn the last vestiges of the race laws being brought in the mid-1970s.  Then in the north, beginning in the 1920s, first local ordinances, and then, when those were struck down, restrictive covenants were applied to keep African Americans and other minority and ethnic groups out of white, Anglo-Saxon Protestant neighborhoods.  When restrictive covenants were eventually overturned, real estate brokers and bankers applied the process of “redlining.”  That is, home loans and mortgages were made harder to qualify for or were denied to certain racial and ethnic groups.  The map was lined in red to keep people in their “place.”  Ostensibly, this practice was outlawed with the passage of the Fair Housing Act in 1968, but it has continued to this day.  Furthermore, for most of our history African Americans have had to pay a premium for better housing that otherwise would have gone for a lower market price.  It was racial fear and manipulation that caused white flight, giving the impression of falling real estate values when African-Americans were allowed to move into a predominately white community.  The sordid history behind these phenomena is amply documented in the National Book Award-winning history, Arc of Justice, by historian Kevin Boyle.

When political and opinion leaders assert that the housing crisis was caused by diversity targets in sub-prime loans, they are not only stating a counterfactual and providing bad economic analysis, but are also engaged in race baiting.  It is redlining that caused minorities to be most vulnerable when the bubble burst, because they tended to pay usurious interest rates–or were funneled into subprime balloon mortgages–in order to derive the same benefits of home ownership as other groups, who were afforded more reasonable financing.  Given that most of these are working people living paycheck to paycheck, it takes no great insight to know that they will be the first to default during an unusually severe economic downturn.

Thus, when one considers that most public school funding is derived from real estate property taxes, and that the average homeowner, based on a 2013 survey, stays in their home 13 years (with the historical average varying between 10 and 16 years since 1997), the effects of previously discriminatory practices on school funding, as well as on socio-economic and racial composition, can last a generation or more, depending on the rate of turnover in any particular neighborhood.  Despite political arguments to the contrary, money spent on schools plays a significant role in student achievement.  It would have been appropriate for Justice Powell in Bakke to have at least acknowledged this history, as well as the history of new immigrants and minorities that he invoked in his decision, in expressing his concern about turning back the clock.

It is important to state clearly that there is no doubt things have improved despite Bakke, and that it was probably a largely well-conceived adjudication.  Despite claims to the contrary, the Great Society and Civil Rights reforms of the 1960s have eliminated the worst de jure and de facto day-to-day indignities, fear of violence, discrimination, and denial of human rights that African-Americans lived under well into the late 20th century.  Opportunities have opened up, and it is amazing that over the last 50 years we can find young African-Americans who have never suffered the indignity of bias or discrimination due to the color of their skin.  But as with the recent problems in policing, criminal justice, and the subtle racism that exists in job selection and opportunities, among other issues, it is apparent that we still have work to do for us to fully overcome our history of slavery, Jim Crow, racism, discrimination, and racial terrorism.  For when one looks back, the fact of the matter is that much of this country–and the basis of its wealth–was built on the backs of African American slavery and oppression.  Without the African-American experience, American culture is indecipherable.  New immigrants, in taking advantage of the inherited advantages of being American, also unknowingly inherit the history that made those advantages possible.

But now back to Justice Scalia’s remarks.  Had Scalia restricted himself to the constitutional issues addressed by Powell in Bakke, there would be no concern.  But this is not what the members of this SCOTUS are about.  In the case of Scalia, his remarks would have been at home in the post-Reconstruction south of the late 19th and early 20th century, along with Spencer’s Social Darwinism and eugenics.  This was the period that endorsed separate but “equal” facilities for African-Americans.  Scalia seems to be suggesting a modern version of it in higher education.  But we have seen these ideas invoked fairly recently elsewhere, particularly in the discredited work of Herrnstein and Murray in publishing their work The Bell Curve.  His comments are an example of what is called “begging the question.”  Scalia begs the question in assuming in his remarks that African-Americans are not generally qualified for UT, and that they do not possess the mental or educational skills to succeed there.  His remarks also reveal someone who thinks in terms of hierarchy and aristocracy, that there are levels of human fitness and superiority, notions which also underlie such concepts as “makers” and “takers.”

Apologists in academia and elsewhere have attempted to temper the justice’s words by invoking the concept of mismatch in college admissions.  It is often referred to as a “theory,” but that would elevate it to an authority that it does not possess.  It is Cargo Cult Social Science based on loosely correlated statistics that provides a veneer of respectability to those who still seek to explain inequality in a society that claims fancifully to be meritocratic, or egalitarian, or a land of opportunity, but which is not really any of these.  But that is not the underlying assumption in Scalia’s remarks.  He begins by calling out African Americans (and restricts himself to African Americans among minorities) and then goes from there.  Further studies on mismatch (link below) show that it is a common phenomenon which affects all racial and ethnic groups.  No system is perfect, and especially not one conceived by society or academia.

But even putting aside the racist assumptions of Justice Scalia, does the mismatch concept even pass the “so-what?” test?  What if one is thrown into a situation for which they are poorly prepared?  In real life we call this “sink or swim.”  Does it really do harm?  There are all kinds of casuistry put forth but, despite assertions to the contrary, the facts are not conclusive.

To give but one famous historical example that undermines this bit of sophistry, the fact that General Lee graduated second in his West Point class and U.S. Grant graduated in the bottom half no doubt influenced them later in life.  Lee was able to defeat with great skill generals who were unused to defeat and disappointment, and routed them from the field.  But his supreme confidence in his abilities caused his utter failure at Gettysburg.  Grant, on the other hand, who experienced failure both as a civilian and on the battlefield, grew unafraid of it and succeeded in the end.  The fact that someone experiences a setback or must work hard in order to succeed is not such a bad thing.  It is how the individual reacts in the face of disappointment or long odds that we call character.  It is the standard means of training Naval officers at sea and why mentoring is so important.  (Only puffed up college professors don’t feel that their job is teaching).  Yes, the world is a big place; yes, there are things you don’t know, but absent a severe learning or emotional disability you can learn them.

But seeing this self-evident insight would assume rational thought and evidence.  For example, many of the characteristics attributed to African-Americans in The Bell Curve have since been overcome, such as significantly rising math and reading scores on standardized tests that are closing the gap with white achievement.  If these were innate or unsolvable deficiencies, how is it possible that public policy is alleviating the gap?  Does it harm African-Americans to be challenged to do so?  Given the disreputable history of race in America, what is more likely: that African-Americans are innately less likely to succeed at UT (even as increasing numbers entered under the Top Ten Program), or that the history of unequal educational opportunity deserves to be addressed in the most equitable and constitutional manner?  Or that unequal treatment and the socio-economic effects of economic discrimination, which still exist, have a great effect on minorities and require a reasoned assessment of the individual, taking into account mitigating circumstances, including racial or ethnic background, in college admissions for those on the fence?

That a Supreme Court justice can voice such stupidity and bias in the year 2015 is evidence enough that there is something wrong with our judicial system.  I beg to differ with the proposition voiced by the late Senator Roman Hruska in defending Nixon’s appointment of G. Harold Carswell to the Supreme Court (which was rejected), that mediocrity deserves representation on the court.  While we can’t always find a Brandeis, Cardozo, or Frankfurter (or a Holmes, Brennan, Black, Story, or Warren), we can at least attempt to do so.  Unfortunately we are stuck with a Scalia and his ilk.

Legitimacy and the EU Democratic Deficit

Turning to political science again, Kevin O’Rourke has an important article regarding the democratic deficit and types of legitimacy in Critical Quarterly, particularly in light of the events surrounding the Greek crisis.  He cites the late political scientist Peter Mair’s book, Ruling the Void, as providing a solid framework for understanding what is happening in Europe, and to some extent within all democracies as a result of wealth and power concentration among an increasingly transnational economic elite.

The issue that O’Rourke tackles, based on Mair’s insights, is one of democratic legitimacy.  For economists and financiers who seem to have (I would argue) taken an illegitimately outsized role in determining what is good for Greece, even if Greece disagrees, the dichotomy here seems to be between what has been called input vs. output legitimacy.  I understand what he is saying here, but in political science “legitimacy” is not the same as “democratic legitimacy” and, in the end, I think this is his point.

O’Rourke, an economist himself, tackles how this argument, particularly in regard to output legitimacy, has been hijacked so that concerns about distribution have been stripped out of the definition by the application of technocrat-speak.  I have a backlog of items for the Devil’s Phraseology, from “Structural Reform” to other euphemisms for, essentially, screwing working people over, especially right now if they are Greek, Italian, Spanish, or Irish.

His article is important in tracing the subtle transformation of legitimacy over time.  For those unfamiliar with this terminology, legitimacy in this sense–if you remember nothing else but your Lincoln or Jefferson–is, in democratic societies, properly derived from the people.  This concept, which can be measured on the input side, is reinforced by processes and institutions that support it.  So clean elections that seek to maximize participation of the adult population; freedoms that support human rights, particularly those concerning speech, free association, and free movement; institutions that are largely responsive to the democratic will but which possess limitations to prevent the denial of human rights; an independent judiciary that metes out justice based on due process; and the absence of corruption, undue influence, unequal treatment, or graft in these institutions are all indicators of “legitimacy.”  In the context of the European debate this is known as “input” legitimacy.

Then there is “output” legitimacy.  This is the type of legitimacy on which the EU rests, since it obviously–especially since the last Greek referendum on the Troika’s terms–doesn’t seem to be based on any kind of “input” legitimacy.  Here legitimacy is based on a utilitarian measure–the ability of the EU to raise the aggregate euro value at any one time.  This is the “rising tide lifts all boats” trope.  Nice imagery, what with the JFK connection and all, but the rules of the game and the economic environment have changed since 1963 to the extent that the analogy no longer applies.  A rising tide lifts all boats only if everyone has a stake in the tide rising.  Feel free to add any additional analogies now that we are beginning to understand the effect of rising tides on coastal cities as the earth warms.  An actual rising tide certainly didn’t do anyone in NOLA’s Lower Ninth and Lakeside neighborhoods any favors, but we do know that it impacted people residing in different economic strata differently.

Furthermore, output legitimacy as a utilitarian project sounds a lot like “we made the trains run on time.”  And it wasn’t all that long ago that more than half of Europe suffered under authoritarian regimes.  Output legitimacy, I would argue, is by definition the opposite of democratic legitimacy, not one of two types of democratic legitimacy.  As O’Rourke points out, one cannot take politics out of policy, so the way in which decisions are made is important in defining the type and level of legitimacy.

Post-1989 generations have not had to come to an understanding of the fact that even oppressive regimes can possess political legitimacy sufficient for them to survive.  From an historical standpoint, all of those German people in the streets chanting “Heil Hitler” weren’t doing so at gunpoint.  The block captains and those others who denounced family members in the old Eastern Bloc countries largely acted independently and voluntarily.  Many Russians today pine for the days under the old Soviet Union and have a leader in Putin who channels that nostalgia.  Autocratic and authoritarian regimes simply possess legitimacy through institutions and processes that are more restrictive than those found in democratic societies, but which rest on elites, centers of power, and pluralities that allow them to function.

Thus, whether the EU will admit it publicly or not, one need only do a Google search to see that this is a generally recognized issue that the European countries seem unwilling or unable to address.  The recent charging of Yanis Varoufakis, the former Greek finance minister, with treason at the instigation of Greek and European elites raises the ante and strips away whatever veil remained to hide the anti-democratic roots of the Greek crisis.  Apparently the 60% of the Greek people who voted “No” to the Troika were also traitors.

That this is happening in Greece is also problematic due to its geographical location in the eastern Mediterranean and its fairly recent transition to democratic processes and institutions.  De-legitimization of democracies is an all too familiar event in the history of the European continent and can only lead to radicalization, especially given the pain being inflicted on the Greek people.  What Europe’s technocrats have done is take an economic recession and market failure–one that could have been ameliorated and solved with the proper remedies learned by hard experience from the 1930s and the years immediately following the Second World War–reject those methods, and, through obstinacy, tyrannical actions, corruption, and greed, create a political and economic disaster that threatens the legitimacy of the EU.

Time to reform the reformers.

Welcome to the Hotel (Euro) — You Can Vote “Oxi” Anytime you Like but you Can Never Leave

The recent events in Greece and their ramifications for the European project have been the subject of a good many blogs and news articles lately.  From an economic perspective the most noteworthy are those by Paul Krugman, Brad DeLong, Dean Baker, Yanis Varoufakis who was on the ground as Greece’s Finance Minister, and Joseph Stiglitz, among others.

If one were to read the news in the manner of the U.S. press through its sources of record–The New York Times, Wall Street Journal, and Washington Post, not to mention the major news networks with CNN thrown in (avoiding avowedly polemical sources like Fox, MSNBC, and the Huffington Post)–one would think that the Greek issue is one caused by a profligate country that borrowed a bit too much and allowed Greeks to live beyond their means.  Nothing could be further from the truth.

The bottom line is that Greece and the EU decided to bail out the banks and investors who crossed the line in investing in junk paper by using public funds.  Sound familiar?  Think a Eurozone TARP.  But in the case of the EU, the banks and bad paper investment houses–the inmates in this scenario–run the asylum.  With the constant drumbeat from our own oligarchs, we have become as a people brainwashed to think that investors and creditors have a right to their money.  Our own more draconian bankruptcy laws, imposed by the last financial industry-tainted Congress, institutionalized many of these attitudes in law.  But this has not always been the case, and it is not part of our legal or economic traditions.  It is certainly not anything like what Adam Smith had in mind.

The operational term in this case is “moral hazard.”  The moneyed interests and their armies of lawyers have deftly tried to invert the meaning of the concept, but as Steve Waldman clearly explains in his excellent interfluidity blog, “moral hazard” is a concept that overwhelmingly falls on investors and creditors.  It means, quite simply, that you as an investor are responsible for where you put your money at risk–and that risk includes the possibility of its being completely lost.  There are no guarantees.  This was the original rationale of Glass-Steagall: it was accepted that regular working people don’t have the information, time, or resources to place their funds, which are meant for savings, under moral hazard.  The same goes for things like the Social Security Trust Fund.  Play with your own “play” money if you have it, but guaranteed deposits and retirement pensions are off-limits since they are backed by the sovereign currency.  Seed corn is not open to being manipulated by cheap paper–that is, until Glass-Steagall was repealed.

The European condition is a bit more complicated only because the EU has never had a consistent separation between its financial elites and civic institutions, given the differences in national traditions and political systems.  But one should be able to clearly see the connection between what is happening in Europe within the EU and in other places around the world: the attack on democratic legitimacy by oligarchs and economic elites.

As Joe Stiglitz points out in the post cited above, Greece–emerging from years of autocratic rule and third world-like conditions–was doing quite well economically until the financial bubble burst across the developed western world.  Many of the banks that invested in hyper-inflated Greek real estate and other development projects were situated in other countries.  The EU under the Euro project is a currency union, and under the Maastricht Treaty that formed this union certain economic harmonization rules were required, mostly pushed by Germany and France, but there is no lender of last resort, no central banking authority equivalent to our Federal Reserve, no centralized budget authority, and no political or cultural ties that would keep old ethnic or nationalist conflicts from flaring up in a crisis.  As Waldman explains, what these other countries did–in particular Germany–was to bail out the banks and investment houses, making the debt on these bad investments public obligations.  This sleight of hand politicized what otherwise should simply have been written off as bad investments.  If the Germans wanted to have their own TARP they should have done so.  But it was so much easier to push the consequences onto the Greeks given their weaker position in the EU.

Jared Bernstein, in his Washington Post article following the Greek “no” vote, quoted an unnamed German economist asserting: “How do you think the people of Manhattan would like bailing out Texas?”  As Krugman rejoined upon reading the article, Manhattan (and other areas of the country) do exactly that all the time as a matter of course.  It was done during the Savings & Loan crisis, largely a Texas affair, back in the late 1980s.  Anyone who looks at the net balance of federal tax payments and expenditures by state can see that the southeastern states–in particular those that made up the old Confederacy, including Texas–get more in federal benefits than they pay in.  To Americans this is not a big deal–and my use of the term American to identify my countrymen is at the heart of the question.  I don’t know anyone who in reality is a Floridian.  Only buffoons and idiots identify themselves as Texans over their identity as Americans.

Here we tend to put resources where they are needed, hence the United States of America.  More than two hundred years involving waves of immigrants, over one hundred and fifty years of increasing population mobility, and two major world wars and a cold one–two of these existential in nature–during the 20th century, not to mention 9/11, have militated against the old regionalism.  It is not surprising that an assertion displaying such deep ignorance of our own system and society would come from a German economist.  I mean no insult by this.

When I was on active duty as a young U.S. Naval officer I met a Scottish couple in Spain who worked at the U.K. embassy there.  They were amazed by my nonchalance in moving my family from California to a home base in Virginia as part of my assignment.  “Do you now identify yourself as a Virginian?” they asked.  When I explained that–no–it was all part of the U.S., they explained that they would always identify themselves as Scots, and that within Scotland people associated themselves with a particular village or region.  This was almost 30 years ago, and I am told that such attitudes are changing, but it does point to a weakness in the “European” project, especially given that in the most recent U.K. parliamentary elections the Scottish National Party was overwhelmingly elected to the House of Commons.

Given my own expertise in history and political science, my concern is directed more to the consequences of Greece capitulating to the EU’s economically and politically disastrous demands.  Just ten days ago 60% of the Greek people voted against the conditions imposed by the EU, yet their parliament just approved a package that is punitive by any reasonable and objective measure.  Even the IMF has belatedly–and in a largely dishonest manner which I can only explain as some type of EU face-saving approach–clearly stated that the conditions imposed are unsustainable.

The role of Germany is certainly one of the main issues in this condition.  Given the way they handled the bad paper of their bankers, Merkel and her party have backed themselves into a corner.  So they have done what all desperate politicians do–they have demonized the Greeks.  This is all for mercenary purposes, of course, and without consideration for the long-term consequences for both the Greek people and the EU.  What they have done is expose the contradictory fault lines in the entire “European” project.  German Finance Minister Schäuble, by attempting to co-opt the Greek threat of a Euro exit by making such terms seem disastrous, has inadvertently made such an exit virtually inevitable.  Greece, not wanting to be left out of “Europe,” has just voted against its own best interests, its government never really having had a strategy for a “Grexit” because it assumed that its negotiating partners were both rational and well-meaning.  The government very well may fall as a result.

For what the Greek crisis has shown is that the European project under the Euro is neither a completely democratic one nor is it “European.”  The elites in Brussels certainly felt that they had no obligation to consider the Greek referendum on the bailout terms.  To them, only the banks, the oligarchs, and the political survival of the governing parties in the national assemblies that support them matter.  The “democratic deficit” of the EU, in the words of the late historian Tony Judt, and the damage that it can cause, is now on full display.  It is not yet clear what will happen, given the contradictory impulses at work: the desire of countries to stay within the single market that “Europe” affords them, the cultural and psychological pull of being part of the project, and the punishing loss of national autonomy and democratic legitimacy that is the price to be paid–aside from the economic depression and poverty imposed on the Greeks as they comply with the conditions dictated to them.

One final note:  I can’t help but be impressed by the ideological arguments being used as a matter of course for “helping” the Greek people in the long run.  As John Maynard Keynes noted, in the long run we are all dead.  The tremendous amount of suffering imposed by the EU on the Greek people for their own long-term good sounds much like the fabrications of the old Communists of the former Eastern Bloc countries, who inflicted all sorts of horrors on their own populations for the “long term” good of reaching the perfect socialist state.  Now such arguments are deployed in favor of the perfect capitalist state.  It is “reform” turned on its head, like “moral hazard.”

Upper Volta with Missiles — Overreach, Putin, and the Russian Crash

Starting out the new year with some additional notes on international affairs.

The reference in the title comes from a remark by former German Chancellor Helmut Schmidt, who once described the Soviet Union in those terms.  Of course, as Tony Judt noted in his magisterial book Postwar: A History of Europe Since 1945, there are those missiles.  Thus, this is a topic of concern to everyone, particularly in regard to the events surrounding Crimea and Ukraine.  This past April I noted the threat implicit in Putin’s actions and the need for European solidarity in opposing them in order to maintain the peace and stability of the region.  When combined with Russian violations of nuclear arms treaties, this is cause for concern.

Since April much has happened, including measured sanctions by the European Union and the United States designed to prevent the Russian Federation from leveraging its economic power against Ukrainian sovereignty.  In addition, the depressed state of the world economy, among other factors, has created an oil glut that has also reduced Russia’s ability to leverage its oil reserves against any countries that would oppose it.  As a result, the ruble has taken a hit and Russia has made all of the wrong moves to bolster its currency.

On the middle point, certain notable voices here in the United States have pointed to an increase in oil production as the main cause, but the numbers do not support this contention.  Instead, a combination of factors–alternative energy production, more efficient fuel consumption, and a drop in consumer demand–has conspired to make the market do, well, what a market is supposed to do.  Combine this with the refusal of the major producers to cut output in order to manipulate the market and prop up the price, and you have what commodities do most often–rise and fall on the whims of the demand of the moment.  I have no doubt that eventually the world economy will recover, but keep in mind that the very real threat of Global Warming will continue to drive societies to find alternatives to fossil fuel–that is, provided they continue to recognize the existential threat that it poses to humanity (aside from the dysfunctional geopolitics that fossil fuels seem to drive).  In the meantime, seeing the handwriting on the wall, net exporters like Saudi Arabia have little incentive to reduce production when they can sell as much as possible and gain a larger share of the market against their competitors.

For the uninitiated, like Fifth Column blogger Patrick Smith at Salon.com–who apparently sees only conspiracies and control of a kind that, well, actually exists in Putin’s Russia–this is known as market competition.  Nary a peep has emanated lately from Mr. Smith (or from our own right-wing plutocrats) about the Russian oligarch being a statesman running rings around our democratically elected U.S. president or his decorated former U.S. Navy officer (and later antiwar activist) Secretary of State.  Were it only possible for the state-controlled Russian press to have the freedom to make such alternative observations about their own country’s leadership.  Okay–enough sarcasm for today, but I think I made my point: mendacity and irrationality make for strange bedfellows.

Along these lines some interesting insights about Putin’s Russia have come out in the book Putin’s Kleptocracy: Who Owns Russia? by Karen Dawisha.  This was a brave undertaking given that a lot of critical writing about Russia, quite apart from the abolition of a free press there, has been taken down from websites.  This is not because of some mysterious ability on the part of Putin and his cronies but because of their immense international (until recently) financial power and the expensive lawyers that such money can buy.  Cambridge University Press, for example, declined to publish the book because of the U.K.’s plaintiff-friendly libel laws, so a U.S. publisher had to be found.  In addition, Russia has bought off columnists and politicians around the world to muddy the waters about the reality of the regime.  A very enlightening review of the book and the history surrounding it, by Washington Post and Slate columnist Anne Applebaum, appears in The New York Review of Books.

In summary, Dawisha’s book demonstrates that during the period when Gorbachev was desperately attempting to reform a crumbling and inefficient system that had plodded along under the Brezhnev doldrums, KGB agents like Putin were moving Russian currency assets abroad into Europe with the intent of eventually using their economic leverage to retake the country once all of the hullaballoo blew over.  Thus, rather than a failed attempt at liberalization and democracy, what we see is the reinstitution of authoritarian rule after a brief respite.  The same old corrupt elites that ran the old Soviet Union under central planning are now simply wearing capitalist oligarch clothing.  This probably explains why the Russian central bank is moving to bolster the ruble through higher interest rates, which will only exacerbate the economic collapse.  But the general welfare is not their concern.  It’s all about the value of Russian reserves and the economic leverage that such value and power provide for maintaining control.

Globalization has made this a small world, but one still fraught with dangers.  For companies in my industry and for policymakers here in the United States, I would recommend establishing a wall of separation from companies–particularly technology companies in information systems–with ties to Russian oil and its oligarchs.