Ch- Ch- Changes–What I Learned at the NDIA IPMD Meeting and Last Thoughts on POGO DCMA

Hot Topics at the National Defense Industrial Association’s Integrated Program Management Division (NDIA-IPMD)

For those of you who did not attend, or who have only a passing interest in what is happening in the public sphere of DoD acquisition, the NDIA IPMD meeting held last week was of great importance. Here are the highlights.

Electronic Submission of Program Management Information under the New Proposed DoD Schema

Those who have attended meetings in the past, and who read this blog, know where I stand on this issue, which is to capture all of the necessary information that provides a full picture of program and project performance among all of its systems and subsystems, but to do so in an economically feasible manner that reduces redundancy, reduces data streams, and improves timeliness of submission. Basic information economics states that a TB of data is only incrementally more expensive–as measured by the difference in the electricity consumed–than a MB of data. Basic experience in IT management demonstrates that automating a process that eliminates touch labor in data production and validation improves productivity and speed.
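To put rough numbers on the point (the rates below are purely illustrative assumptions, not quotes from any provider or contract), a quick back-of-the-envelope sketch shows that the machine cost of richer data is trivial next to the touch labor it displaces:

```python
# Back-of-the-envelope illustration; every rate here is an assumption for the sake of argument.
STORAGE_COST_PER_GB_MONTH = 0.03   # assumed commodity storage rate, $/GB-month
LABOR_RATE_PER_HOUR = 120.0        # assumed fully burdened analyst rate, $/hour
HOURS_PER_MANUAL_SUBMISSION = 8.0  # assumed touch labor to produce and validate one report

mb_cost = (1 / 1024) * STORAGE_COST_PER_GB_MONTH   # 1 MB stored for a month
tb_cost = 1024 * STORAGE_COST_PER_GB_MONTH         # 1 TB stored for a month
labor_cost = LABOR_RATE_PER_HOUR * HOURS_PER_MANUAL_SUBMISSION

print(f"1 MB stored for a month:       ${mb_cost:,.5f}")
print(f"1 TB stored for a month:       ${tb_cost:,.2f}")
print(f"One manual submission cycle:   ${labor_cost:,.2f}")
```

The exact rates do not matter; under any reasonable assumptions the storage and processing cost of more granular data is noise compared to the labor cost of producing and validating reports by hand.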

Furthermore, if a supplier in complex program and project management is properly managing–and has sufficient systems in place–then providing the data necessary for the DoD to establish accountability and good stewardship, to ensure that adequate progress is being made under the terms of the contract, to ensure that contractually required systems that establish competency are reliable and accurate, and to utilize in future defense acquisition planning–should not be a problem. We live in a world of 0s and 1s. What we expect of our information systems is to do the grunt work of handling ever larger systems in providing information. In this scenario the machine is the dumb one and the person assessing the significance and context of what is processed into intelligence is the smart one.

The most recent discussions and controversies surrounded the old canard regarding submission at Control Account as opposed to the Work Package level of the WBS. Yes, let’s party like it’s 1997. The other issue was whether cumulative or current data should be submitted. I have issues with both of these items, which continue to arise like bad zombie ideas. You put a stake in them, but they just won’t die.

To frame the first issue, there are some organizations/project teams that link budget to the control account, and others to the work package. So practice is not the determinant, but it speaks to earned value management (EVM). The receiving organization is going to want the lowest reporting level where there is foot-and-tie not only to budget, but to other systems. This is the rub.

I participated in a still-unpublished study for DoD that indicated that, if one uses earned value management (EVM) exclusively to manage, it doesn’t matter. You get a bit more fidelity and early warning at the work package level, but not much.

But note my conditional.

No one exclusively uses EVM to manage projects and programs. That would be foolish and seems to be the basis of the specious attack on the methodology when I come upon it, especially by baby PMs. The discriminator is the schedule, and the early warning is found there. The place where you foot-and-tie schedule to the WBS is at the work package level. If you are restricted to the control account for reporting you have a guessing game–and gaming of the system–given that there will be many schedule activities to one control account.
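A small sketch illustrates what gets lost. The numbers are invented, but the pattern is the one that matters: offsetting variances at the work package level net to zero at the control account, which is precisely the early warning that disappears when reporting stops there.

```python
# Hypothetical control account with three work packages (figures are illustrative only).
work_packages = [
    # (name, BCWS, BCWP)  -- planned value vs. earned value to date
    ("WP 1.1.1", 100.0, 130.0),   # running ahead
    ("WP 1.1.2", 100.0,  70.0),   # running behind -- the signal we care about
    ("WP 1.1.3", 100.0, 100.0),   # on plan
]

for name, bcws, bcwp in work_packages:
    print(f"{name}: SV = {bcwp - bcws:+.1f}")

ca_bcws = sum(wp[1] for wp in work_packages)
ca_bcwp = sum(wp[2] for wp in work_packages)
print(f"Control account: SV = {ca_bcwp - ca_bcws:+.1f}")   # 0.0 -- the slip is invisible
```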

Furthermore, the individual reviewing EVM and schedule will want to ensure that the Performance Measurement Baseline (PMB) and the Integrated Master Schedule (IMS) were not constructed in isolation from one another. There needs to be evidence that the work planned under the cost plan matches the work in time.

Regarding cumulative against current dollar submission, the issue is one of accuracy. First, consecutive cumulative submissions require that the latest figure be subtracted from the last, which introduces rounding errors–errors that are exacerbated if reporting is restricted to the control account level. NDIA IPMD had a long discussion on the intrinsic cumulative-to-cumulative error at a meeting last year, which was raised by Gary Humphreys of Humphreys & Associates. Second, cumulative submissions often hide retroactive changes. Third, to catch the items in my second point, one must execute cross checks for different types of data, rather than getting a dump from the system of record and rolling up. The more operations and manipulations made to data, the harder it becomes to ensure fidelity and get everyone to agree on one trusted source–that is, everyone reading off the same page.
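To illustrate the first point, here is a minimal sketch (with invented figures) of what happens when current-period values must be derived by differencing cumulative submissions that were rounded before transmittal. A retroactive restatement of a prior period suffers the same fate: it shows up only as noise in the latest period’s difference rather than as a restatement of the period it actually belongs to.

```python
# Illustrative only: deriving current-period costs by differencing rounded cumulative
# submissions introduces error in every period.
true_current = [1234.6, 2567.4, 1890.2, 3105.9]   # actual current-period costs ($K)

cum, reported_cum = 0.0, []
for v in true_current:
    cum += v
    reported_cum.append(round(cum))               # cumulative value rounded before submission

derived_current = [reported_cum[0]] + [
    reported_cum[i] - reported_cum[i - 1] for i in range(1, len(reported_cum))
]
for i, (t, d) in enumerate(zip(true_current, derived_current), start=1):
    print(f"Period {i}: true {t:.1f}  derived {d:.1f}  error {d - t:+.1f}")

# If the supplier restates period 2 upward inside the period 4 cumulative submission,
# the differencing attributes the entire adjustment to period 4 and the retroactive
# change is never visible as such.
```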

When I was asked about my opinion on these issues, my response was twofold. First, as the head of a technology company it doesn’t matter to me. I can handle data in accordance with the standard DoD schema in any way specified. Second, as a former program management type and as an IT professional with an abiding dislike of inefficient systems, the restrictions proposed are based on the limitations of proprietary systems in use by suppliers that, in my opinion, need to be retired. The DoD and A&D market is somewhat isolated from other market pressures, by design. So the DoD must artificially construct incentives and an ecosystem that pushes businesses (and its own organizations) to greater efficiency and innovation. We don’t fly F-4s anymore, so why continue to use IT business systems designed in 1997 that are solely supported by sunk-cost arguments and rent seeking behavior?

Thus, my recommendation was that it was up to the DoD to determine the information required to achieve their statutory and management responsibilities, and it is up to the software solution providers to provide the, you know, solutions that meet them.

I was also asked if I agreed with another solution provider that the software companies should have another go at the schema prior to publication. My position was consistent in that regard: we don’t work the refs. My recommendation to OSD, given that I was in a similar position regarding an earlier initiative along the same lines back when I wore a uniform, is to explore the art of the possible with suppliers. The goals are to reduce data streams, eliminate redundancy, and improve speed. Let the commercial software guys figure out how to make it work.

Current projection is three to four weeks before a final schema is published. We will see if the corresponding documentation will also be provided simultaneously.

DCMA EVAS – Data-driven Assessment and Surveillance

This is a topic about which I cannot write without a conflict of interest, since the company that is my day job is the solution provider, so I will make this short and sweet.

First, it was refreshing to see three Hub leads at the NDIA IPMD meeting. These are the individuals in the field who understand the important connection between government acquisition needs and private industry capabilities in the logistics supply chain.

Second, despite a great deal of behind-the-scenes speculation and drama among competitors in the solution provider market, DCMA affirmed that it had selected its COTS solution and that it was working with that provider to work out any minor issues now that Milestone B has been certified and they are into full implementation.

Third, DCMA announced that the Hubs would be collecting information and that the plan for a central database for EVAS that would combine other DoD data has been put on hold until management can determine the best course for that solution.

Fourth, the Agency announced that the first round of using the automated metrics would take place later this month and that the effort would continue into October.

Fifth, the Agency tamped down some of the fear related to this new process, noting that tripping metrics may simply indicate that additional attention was needed in that area, including those cases where it simply needed to be documented that the supplier’s System Description deviated from the standard indicator. I think this will be a process of familiarization as the Hubs move out with implementation.

DCMA EVAS, in my opinion, is a significant reform of the way the agency does business. It not only drives process and organizational improvement within the agency by eliminating uneven and arbitrary determinations of contract non-compliance (as well as improvements in data management), but opens a dialogue regarding systems improvement, driving similar changes to the supplier base.

NDAA Section 804

There were a couple of public discussions on NDAA Section 804 which, if you are not certain what it is, you can read about at this link. Having kept track of developments in the NDAA for this coming fiscal year, what I can say is that the final language of Section 804 doesn’t say what many thought it said when it was in draft.

What it doesn’t authorize is a broad authority to overrule other statutory requirements for government accountability, oversight, and reporting, including the requirement for earned value management on large programs. This statement is supported by both OSD speakers who addressed the issue at the meeting.

The purpose of Section 804 was to provide the ability to quickly prototype and field new technologies in the wake of 9/11, particularly as it related to identifying, tracking, and preventing terrorist acts. But the rhetoric behind this section, which was widely touted by elected representatives long before the final version of the current NDAA had been approved, implied a broader mandate for more prosaic acquisitions. My opinion, having seen programs like this before (think of the Navy A-12 program), is that if people use this authority too broadly, we will be discussing more significant issues than the minor DCMA program that ends this blog post.

Thus, the message coming from OSD is that there is no carte blanche get-out-of-jail card for covering yourself under Section 804 and deciding that lack of management is a substitute for management, and that failure to obtain timely and necessary program performance information does not mean that it cannot be forensically traced in an audit or investigation, especially if things go south. A word to the wise, while birds of a feather catch cold.

DoD Reorganization

The Department of Defense has been undergoing reorganization and the old Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics (OUSD(AT&L)) has been broken up and reassigned largely to a new Undersecretary of Defense for Acquisition and Sustainment (USD(A&S)).

As a result of this reorganization there were other points indicated:

a. Day-to-day program management will be pushed to the military services. No one really seems to understand what this means. The services already have PMOs in place that do day-to-day management. The policy part of old AT&L, as well as program analysis, will be going intact to A&S. The personnel cuts that are earmarked for some DoD departments were largely avoided in the reorganization, except at the SES level, which I will address below.

b. Other Transaction Authority (OTA) and Section 804 procurements are getting a lot of attention, but they seem ripe for abuse. I was actually a member of a panel on Acquisition Reform at the NDIA Training and Simulation Industry Symposium held this past June in Orlando. I thought the focus would be on the recommendations from the 809 panel but it, instead, turned out to be on OTA and Section 804 acquisitions. What impressed me the most was that even companies that had participated in these types of contracting actions felt that they were unnecessarily loosely composed, which would eventually impede progress upon review and audit of the programs. The consensus in discussions with the audience and other panel members was that the FAR and DFARS already possess sufficient flexibility if Contracting Officers are properly trained to know how to construct such a requirement and still stay between the lines, absent a serious operational need that cannot be met through normal acquisition methods. Furthermore, OTA SME knowledge is virtually non-existent. Needless to say, things like Nunn-McCurdy and the new Congressional reporting requirements in the latest NDAA still need to be met.

c. The emphasis in the department, it was announced, would also shift to a focus on portfolio analysis, but–again–no one could speak to exactly what that means. PARCA and the program analysis personnel on the OSD staffs provide SecDef with information on the entire portfolio of major programs. That is why there is a DoD Central Repository for submission of program data. If the Department is looking to apply some of the principles in DoD that provide flexibility in identifying risks and tradeoffs across the portfolio, then that would be most useful and a powerful tool in managing resources. We’ve seen efforts like Cost as an Independent Variable (CAIV) and other tradeoff methods come and go; it would be nice if the department would reward the identification of programmatic risk early and often in program go/no-go/tradeoff/early production decisions.

d. Against more than $7 trillion of programs under management, PARCA’s expense is $4.5M. The OSD personnel made this point, I think, to emphasize the return on investment of their role regarding oversight, risk identification, and root cause analysis with an eye to efficiency in the management of DoD programs. This is like an insurance policy and a built-in DoD change agent. But from my outside reading, there was a move by Representative Mac Thornberry, who is Chairman of House Armed Services, to hollow out OSD by eliminating PARCA and much of the AT&L staffs. I had discussions with staffs for other Congressional members of the Armed Services Committee when this was going on, and the cause seemed to be a lack of understanding of the extent to which DoD has streamlined its acquisition business systems, of how key PARCA, DCMA, and the analysis and assessment staffs are to the acquisition ecosystem, and of how they foot and tie to the service PEOs and PMOs. Luckily for the taxpayer, it seems that Senate Armed Services members were aware of this and took the language out during markup.

Other OSD Business — Reconciling FAR/DFARS, and Agile

a. DoD is reconciling differences between overlapping FAR and DFARS clauses. Given that DoD is more detailed and specific in identifying reporting and specifying oversight of contracts by dollar threshold, complexity, risk, and contract type, it will be interesting how this plays out over time. The example given by Mr. John McGregor of OSD was the difference between the FAR and DFARS clauses regarding the application of earned value management (EVM). The FAR clause is more expansive and cut-and-dried. The DFARS clause distinguishes the level of EVM reporting and oversight (and surveillance) that should apply based on more specific criteria regarding the nature of the program and the contract characteristics.

b. The issue of Agile, and how it somehow excuses dispensing with estimating, earned value management, risk management, and other proven program management controls, was addressed. This contention is, of course, poppycock, and Glen Alleman on his blog has written extensively about this zombie idea. The 809 Panel seemed to have been bitten by it, though, in that its members were convinced that Agile is a program or project management method, and that there is a dichotomy between Agile and the use of EVM. The prescient point in critiquing this assertion was effectively made by the OSD speakers. They noted that they attend many forums and speak to various groups about Agile, and that there is virtually no consensus about what exactly it is and what characteristics define it, but everyone pretty much recognizes it as an approach to software development. Furthermore, EVM is used today on programs that at least partially use Agile software development methodology, and do so very effectively. It’s not like crossing the streams.

Gary Bliss, PARCA – Fair Winds and Following Seas

The blockbuster announcement at the meeting was the planned retirement of Gary Bliss, who has been and is the Director of PARCA, on 30 September 2018. This was due to the cut in billets at the Senior Executive Service (SES) level. He will be missed.

Mr. Bliss has transformed the way that DoD does business and he has done so by building bridges. I have been attending NDIA IPMD meetings (and under its old PMSC name) for more than 20 years. Over that time, from when I was near the end of my uniformed career in attending the government/joint session, and, later, when I attended full sessions after joining private industry, I have witnessed a change for the better. Mr. Bliss leaves behind an industry that has established collaboration with DoD and federal program management personnel as its legacy for now and into the future.

Before the formation of PARCA all too often there were two camps in the organization, which translated to a similar condition in the field in relation to PMOs and oversight agencies, despite the fact that everyone was on the same team in terms of serving the national defense. The issue, of course, as it always is, was money.

These two camps would sometimes break out in open disagreement and express disparagement of the other. Mr. Bliss brought in a gentleman by the name of Gordon Kranz and together they opened a dialogue in meeting PARCA’s mission. This dialogue has continued with Mr. Kranz’s replacement, John McGregor.

The dialogue has revolved around finding root causes for long delays between development and production in program management and recommending ways of streamlining processes and eliminating impediments; rooting out redundancy, inefficiency, and waste throughout the program and project management supply chain; communicating with industry so that they understand the reasons for particular DoD policies and procedures; obtaining feedback on the effects of those decisions and how they can be implemented to avoid arbitrariness; and providing certainty to those who would seek to provide supplies and services to the national defense–especially innovative ones–in defining the rules of engagement. The focus was on collaborative process improvement–and it has worked. Petty disputes occasionally still arise, but they are the exception to the rule.

Under his watch Mr. Bliss established a common trusted data stream for program management data, and forged policies that drove process improvement from the industrial base through the DoD program office. This was not an easy job. His background as an economist and his long distinguished career in the public service armed him well in this regard. We owe him a debt of gratitude.

We can all hope that the next OSD leadership that assumes that role will be as effective and forward leaning.

Final Thoughts on DCMA report revelations

The interest I received on my last post on the DCMA internal report regarding the IWMS project was very broad, but the comments that I received expressed some confusion on what I took away as the lessons learned in my closing paragraphs. The reason for this was the leaked nature of the reports, which alleged breaches of federal statute and other administrative and professional breaches, some of a reputational nature. They are not the final word and for anyone to draw final conclusions from leaked material of that sort would be premature. But here are some initial lessons learned:

Lesson #1: Do not split requirements and game the system to fall below financial thresholds to avoid oversight and management approval. This is a Contracts 101 issue and everyone should be aware of it.

Lesson #2: Ensure checks and balances in the procurement process are established and maintained. Too much power, under the moniker of acquisition reform and “flexibility”, has been given to CIOs and PMs to make decisions that require collaboration, checks, and internal oversight. In normative public sector acquisition environments the end-user does not get to select the contractor, the contract type, the funding sources, or the acquisition method involving fair and open competition–or a deviation from it. Nor, having directed the procurement, should the same individual(s) be allowed to certify receipt and acceptance. Establishing checks and balances without undermining operational effectiveness requires a subtle hand, in which different specialists working within a matrix organization, with differing chains of command and responsibility, ensure that there is integrity in the process. All members of this team can participate in planning and collaboration for the organization’s needs. It appears, though it is not completely proven, that some of these checks and balances did not exist. We do know from the inspections that Contracting Officer’s Representatives (CORs) and Contracting Officer’s Technical Representatives (COTRs) were not appointed for long-term contracts in many cases.

Lesson #3: Don’t pre-select a solution by a particular supplier. The right approach is to understand the organization’s current and future needs and to express them in a set of salient characteristics, a performance work statement, or a statement of work. This document is then shared with the marketplace through a formalized and documented process of discovery, such as a request for information (RFI).

Lesson #4: I am not certain whether the reports indicate that a legal finding of the appropriate color of money is or is not a sufficient defense, but they seem to. This can be a controversial topic within an organization and oftentimes yields differing opinions. Sometimes the situation can be corrected with the substitution of proper money for that fiscal year by higher authority. Some other examples of Anti-deficiency Act (ADA) violations can be found via this link, published by the Defense Comptroller. I’ve indicated from my own experience how, going from one activity to another as a uniformed Navy Officer, I ran into Comptrollers with different opinions of the appropriate color of money for particular types of supplies and services at the same financial thresholds. They can’t all have been correct. I guess I am fortunate that over 23 years–18 of them as a commissioned Supply Corps Officer* and five before that as an enlisted man–I never ran into an ADA violation in any transaction in which I was involved. The organizations I was assigned to had checks and balances to ensure there was not a statutory violation which, I may add, is a federal crime. Thus, no one should be cavalierly making this assertion as if it were simply an administrative issue. But not everyone in the chain is responsible, unless misconduct or criminal behavior across that chain contributed to the violation. I don’t see that in these reports. Systemic causes require systemic solutions and education.

Note that all of these lessons learned are taught as basic required knowledge in acquisition classes and in regulation. I also note that, in the reports, there are facts of mitigation. It will be interesting to see what eventually comes out of this.

Don’t Stop Thinking About Tomorrow–Post-Workshop Blogging…and some Low Comedy

It’s been a while since I posted to my blog due to meetings and–well–the day job, but some interesting things occurred during the latest Integrated Program Management Division (IPMD) meeting of the National Defense Industrial Association (NDIA) that I think are of interest. (You have to love acronyms to be part of this community.)

Program Management and Integrated Program Management

First off is the initiative by the Program Management Working Group to gain greater participation by program managers with an eye to more clearly define what constitutes integrated program management. As readers of this blog know, this is a topic that I’ve recently written about.

The Systems Engineering discipline is holding its 21st Annual Systems Engineering Conference in Tampa this year from October 22nd to the 25th. The IPMD will collaborate and will present a track dedicated to program management. The organizations have issued a call for papers and topics of interest. (Full disclosure: I volunteered this past week to participate as a member of the PM Working Group.)

My interest in this topic is based on my belief, drawn from years of wide-ranging experience in duties as a warranted government contracting officer, program manager, business manager, CIO, staff officer, and logistics officer, that there is much more to defining IPM than can be captured through the prism of any particular discipline. Furthermore, doing so will require collaboration and cooperation among a number of project management disciplines.

This is a big topic where, I believe, no one group or individual has all of the answers. I’m excited to see where this work goes.

Integrated Digital Environment

Another area of interest that I’ve written about in the past involves two different–but related–initiatives on the part of the Department of Defense to collect information from their suppliers that is necessary to their oversight role, not only to ensure accountability of public expenditures, but also to assist in project cost and schedule control, risk management, and cost estimation, particularly as it relates to risk-sharing cost-type R&D contracted project efforts.

Two major staffs in the Offices of the Undersecretary of Defense have decided to go with a JSON-type schema for, on the one hand, cost estimating data, and on the other, integrated cost performance, schedule, and risk data. Each initiative seeks to replace the existing schemas in place.

Both have been wrapped around the axle on getting industry to move from form-based reporting and data sharing to a data-agnostic solution that meets the goals of reducing redundancy in data transmission, reducing the number of submissions and data streams, and moving toward one version of the truth that allows SMEs on both sides of the table to concentrate on data analysis and interpretation in jointly working toward the goal of successful project completion and end-item deployment.
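As a thought experiment only (the field names below are invented and are not the proposed DoD schema, which has yet to be published), the kind of payload a data-agnostic JSON submission might carry, along with a trivial structural check, looks something like this:

```python
import json

# A hypothetical, much-simplified submission of the kind a JSON schema might carry.
# All field names and values are illustrative assumptions, not the DoD schema.
submission = {
    "contract_id": "HQ0000-00-C-0000",
    "reporting_period": "2018-08",
    "control_accounts": [
        {
            "wbs_id": "1.1.1",
            "work_packages": [
                {"wp_id": "1.1.1-01", "bcws": 120.0, "bcwp": 115.0, "acwp": 130.0},
                {"wp_id": "1.1.1-02", "bcws": 80.0,  "bcwp": 80.0,  "acwp": 76.0},
            ],
        }
    ],
}

def validate(payload):
    """Minimal structural check: every work package carries the three core values."""
    for ca in payload["control_accounts"]:
        for wp in ca["work_packages"]:
            assert all(k in wp for k in ("bcws", "bcwp", "acwp")), wp["wp_id"]

validate(submission)
print(json.dumps(submission, indent=2))
```

The point of a data-agnostic format is exactly this: the same payload can be produced by any compliant system of record, validated mechanically on receipt, and rolled up without anyone re-keying a form.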

As with the first item, I am not a disinterested individual in this topic. Back when I wore a uniform I helped to construct DoD policy to create an integrated digital environment. I’ve written about this experience previously in this blog, so I won’t bore with details, but the need for data sharing on cost-type efforts acknowledges the reality of the linkage between our defense economic and industrial base and the art of the possible in deploying defense-related end items. The same relationship exists for civilian federal agencies with the non-defense portion of the U.S. economy. Needless to say, a good many commercial firms unrelated to defense are going the same way.

The issue here is two-fold, I think, from speaking with individuals working these issues.

The first is, I think, that too much deference is being given to solution providers and some industry stakeholders, influenced by those providers, in “working the refs” through the data. The effect of doing so not only slows down the train and protects entrenched interests, it also gets in the way of innovation, allowing the slowest among the group to hold up the train in favor of–to put it bluntly–learning their jobs on the job at the expense of efficiency and effectiveness. As I expressed in a side conversation with an industry leader, all too often companies–who, after all, are the customer–have allowed themselves to define the possible by the limitations and inflexibility of their solution providers. At some point that dysfunctional relationship must end–and in the case of comments clearly identified as working the refs, they should be ignored. Put your stake in the ground and let innovation and market competition sort it out.

Secondly, cost estimating, which is closely tied to accounting and financial management, is new to this process and considered tangential to other, more mature, performance management systems. My own firm is involved in producing a solution in support of this process: collecting data related to these reports (known collectively in DoD as the 1921 reports), placing that data in a common data lake, and exploring with organizations what it tells us, since we are only now learning its significance. This is classical KDD–Knowledge Discovery in Data–and a worthwhile exercise.

I’ve also advocated going one step further in favor of the collection of financial performance data (known as the Contract Funds Status Report), which is an essential reporting requirement, but am frustrated to find no one willing to take ownership of the guidance regarding data collection. The tragedy here is that cost performance, known broadly as Earned Value Management, is a technique that relates the value of work performed to other financial and project planning measures (a baseline and actuals). But in a business (or any enterprise), the fuel that drives the engine is finance, and two essential measures are margin and cash flow. The CFSR is a report of program cash flow and financial execution. It is an early measure of whether a program will execute its work in any given time-frame, and provides a reality check on the statistical measures of performance against baseline. It is also a necessary logic check for comptrollers and other budget decision-makers.
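As a rough sketch of the kind of logic check I mean (the figures and the 85 percent trigger are invented for illustration), comparing a CFSR-style expenditure profile against the planned spend gives comptrollers the early, common-sense signal that statistical performance measures alone may not:

```python
# Illustrative sketch: use a CFSR-style expenditure profile as a reality check against
# the planned spend. If funds expended lag the plan badly, the statistical performance
# picture deserves a second look. All numbers are invented.
months         = ["Oct", "Nov", "Dec", "Jan"]
planned_spend  = [400.0,  850.0, 1350.0, 1900.0]   # cumulative planned spend ($K)
funds_expended = [380.0,  700.0,  950.0, 1100.0]   # cumulative expenditures per CFSR ($K)

for month, plan, actual in zip(months, planned_spend, funds_expended):
    execution = actual / plan
    flag = "  <-- execution lagging plan" if execution < 0.85 else ""
    print(f"{month}: expended {execution:.0%} of planned spend{flag}")
```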

Thus, as it relates to data, there has been some push-back against a settled schema in favor of having the government accept flat files and convert the data to the appropriate format. I see this as an acceptable transient solution, but not an ultimate one. It is essential to collect both cost estimating and contract funds status information to perform any number of operations that relate to “actionable” intelligence: having the right executable money at the right time, a reality check against statistical and predictive measures, value analysis, and measures of ROI in development, just to name a few.

I look forward to continuing this conversation.

To Be or Not to Be Agile

The Section 809 Panel, which is the latest iteration of acquisition reform panels, has recommended that performance management using earned value not be mandated for efforts using Agile. It goes on, however, to assert that the program executive “should approve appropriate project monitoring and control methods, which may include EVM, that provide faith in the quality of data and, at a minimum, track schedule, cost, and estimate at completion.”

Okay…the panel is then mute on what those monitoring and control measures will be. Significantly, if only subtly, the #NoEstimates crowd took a hit, since the panel recommends and specifies data quality, schedule, cost, and EAC. Sounds a lot like a form of EVM to me.
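For the record, the panel’s stated minimum maps directly onto standard EVM arithmetic. A minimal sketch with invented numbers:

```python
# Track schedule, cost, and estimate at completion: the standard EVM arithmetic.
# Figures are invented for illustration.
bac  = 1000.0   # budget at completion
bcws =  400.0   # planned value to date
bcwp =  360.0   # earned value to date
acwp =  450.0   # actual cost to date

cpi = bcwp / acwp    # cost efficiency
spi = bcwp / bcws    # schedule efficiency
eac = bac / cpi      # simple CPI-based estimate at completion

print(f"CPI {cpi:.2f}  SPI {spi:.2f}  EAC {eac:,.0f}")
```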

I must admit to being a skeptic when it comes to swallowing the Agile doctrine whole. Its micro-economic foundations are weak and much of it sounds like ideology–bad ideology at best and disproved ideology at worst (specifically related to the woo-woo about self-organization…think of the last speculative bubble and the resulting financial crisis and depression along these lines).

When it comes to named methodologies I am somewhat from Missouri. I apply (and have, in previous efforts in the Dark Ages back when I wore a uniform, applied) Kanban, teaming, adaptive development (enhanced greatly today by using modern low-code technology), and short sprints that result in releasable modules. But keep in mind that these things were out there long before they were grouped under a common heading.

Perhaps Agile is now a convenient catch-all for best practices. But if that is the case then software development projects using this redefined version of Agile deserve no special dispensation. But I was schooled a bit by an Agile program manager during a side conversation and am always open to understanding things better and revising my perspectives. It’s just that there was never a Waterfall/Agile dichotomy just as there never really was a Spiral/Waterfall dichotomy. These were simply convenient development models to describe a process that were geared to the technology of the moment.

There are very good people on the job exploring these issues on the Agile Working Group in the IPMD and I look forward to seeing what they continue to come up with.

Rip Van Winkle Speaks!

The only disappointing presentation occurred on the second and last day of the meeting. It seemed we were treated to a voice from somewhere around the year 2003 that, in what can only be described as performance art involving free association, talked about wandering the desert, achieving certification for a piece of software (which virtually all of the software providers in the room have successfully navigated at one time or another), discovering that cost and schedule performance data can be integrated (ignoring the work of the last ten years on the part of, well, a good many people in the room), that there was this process known as the Integrated Baseline Review (which, again, a good many people in the room had collaborated on to both define and make workable), and–lo and behold–that the software industry uses schemas and APIs to capture data (known in Software Development 101 as ETL). He then topped off his meander with an unethical excursion into product endorsement, selected through an opaque process.

For this last, the speaker was either unaware or didn’t care (usually called tone-deafness) that the event’s expenses were sponsored by a software solution provider (not mine). It is also as if the individual speaking was completely unaware of the work behind the many topics that I’ve listed above this subsection, ignoring and undermining the hard work of the other stakeholders who make up our community.

On the whole an entertaining bit of poppycock, which leads me to…

A Word about the Role of Professional Organizations (Somewhat Inside Baseball)

In this blog, and in my interactions with other professionals at–well–professional conferences, I check my self-interest at the door and publicly take a non-commercial stance. It is a position that is expected and, I think, appreciated. For those who follow me on social networking sites like LinkedIn, posts from my WordPress blog originate from a separate source from the commercial announcements that are linked to my page, which originate from my company.

If there are exhibitor areas, as some conferences and workshops do have, that is one thing. That’s where we compete and play; and in private side conversations customers and strategic partners will sometimes use the opportunity as a convenience to discuss future plans and specific issues that are clearly business-related. But these are the exceptions to the general rule, and there are a couple of reasons for this, especially at this venue.

One is that, while it is a large market, it is a small community, and virtually everyone at the regular meetings and conferences I attend already knows that I am the CEO and owner of a small software company. But the IPMD is neutral ground. It is a place where government and industry stakeholders, who in other roles and circumstances are in a contractual or competing relationship, come to work out the best way of hashing out processes and procedures that will hopefully improve the discipline of program and project management. It is also a place of discovery, where policies, new ideas, and technologies can be vetted in an environment of collaboration.

Another reason for taking a neutral stance is simply because it is both the most ethical and productive one. Twenty years ago–and even in some of the intervening years–self-serving behavior was acceptable at the IPMD meetings where both leadership and membership used the venue as a basis for advancing personal agendas or those of their friends, often involving backbiting and character assassination. Some of those people, few in number, still attend these meetings.

I am not unfamiliar with the last–having been a target at one point by a couple of them but, at the end of the day, such assertions turned out to be without merit, undermining the credibility of the individuals involved, rightfully calling into question the quality of their character. Such actions cannot help but undermine the credibility and pollute the atmosphere of the organization in which they associate, as well.

Finally, the companies and organizations that sponsor these meetings–which are not cheap to organize, which I know from having done so in the past–deserve to have the benefit of acknowledgment. It’s just good manners to play nice when someone else is footing the bill–you gotta dance with those that brung you. I know my competitors and respect them (with perhaps one or two exceptions). We even occasionally socialize with each other and continue long-term friendships and friendly associations. Burning bridges is just not my thing.

On the whole, however, the NDIA IPMD meetings–and this one in particular–were productive and positive, focused on the future and on professional development. That’s where, I think, we as a community need to be and need to stay. I always learn something new and get my dose of reality from a broad-based perspective. In getting here the leadership of the organization (and the vast majority of the membership) is to be commended, as well as the recent past and current members of the Department of Defense, especially since the formation of the Performance Assessments and Root Cause Analysis (PARCA) office.

In closing, there were other items of note discussed, along with what can only be described as the best pair of keynote addresses that I’ve heard in one meeting. I’ll have more to say about some of the concepts and ideas that were presented there in future posts.

The Song Remains the Same (But the Paradigm Is Shifting) — Data Driven Assessment and Better Software in Project Management

Probably the biggest news out of the NDIA IPMD meeting this past week was the unofficial announcement by Frank Kendall, who is the Undersecretary of Defense for Acquisition, Technology, and Logistics USD(AT&L), that thresholds would be raised for mandatory detailed surveillance of programs to $100M from the present requirement of $20M.  While earned value management implementation and reporting will still be required on programs based on dollar value, risk, and other key factors, especially the $20M threshold for R&D-type projects, the raising of the threshold for mandatory surveillance reviews was seen as good news all around for reducing some regulatory burden.  The big proviso in this announcement, however, was that it is to go into effect later this summer and that, if the data in reporting submissions show inconsistencies and other anomalies that call into question the validity of performance management data, then all bets are off and the surveillance regime is once again imposed, though by exception.

The Department of Defense–especially under the leadership of SecDef Ashton Carter and Mr. Kendall–has been looking for ways of providing more flexibility in acquisition to allow for new technology to be more easily leveraged into long-term, complex projects.  This is known as the Better Buying Power 3.0 Initiative.  It is true that surveillance and oversight can be restrictive to the point of inhibiting industry from concentrating on the business of handling risk in project management, causing resources to be devoted to procedural and regulatory issues that do not directly impact whether the project will successfully achieve its goals within a reasonable range of cost and schedule targets.  Furthermore, the enforcement of surveillance has oftentimes been inconsistent and–in the worst cases–contrary to the government’s own guidance due to inconsistent expertise and training.  The change maintains a rigorous regulatory environment for the most expensive and highest risk projects, while reducing unnecessary overhead, and allowing for more process flexibility for those below the threshold, given that industry’s best practices are effective in exercising project control.

So the question that lay beneath the discussion of the new policy coming out of the meeting was: why now?  The answer is that technology has reached the point where the ability to effectively use the kind of Big Data required by DoD and other large organizations to detect patterns in data that suggest systems issues has changed both the regulatory and procedural landscape.

For many years as a techie I have heard the mantra that software is a nice reporting and analysis tool (usually looking in the rear view mirror), but that only good systems and procedures will ensure a credible and valid system.  This mantra has withstood the fact that projects have failed at the usual rate despite having the expected artifacts that define an acceptable project management system.  Project organizations’ systems descriptions have been found to be acceptable, work authorization, change control, and control account plans, PMBs, and IMSs have all passed muster and yet projects still fail, oftentimes with little advance warning of the fatal event or series of events.  More galling, the same consultants and EVM “experts” can be found across organizations without changing the arithmetic of project failure.

It is true that there are specific causes for this failure: the inability of project leadership to note changes in framing assumptions, the inability of our systems and procedures to incorporate technical performance into overall indicators of project performance, and the inability of organizations to implement and enforce their own policies.  But in the last case, it is not clear that the failure to follow controls in all cases had any direct impact on the final result; they were contributors to the failure but not the main cause.  It is also true that successful projects have experienced many of the same discrepancies in their systems and procedures.  This is a good indication that something else is afoot: that there are factors not being registered when we note project performance, that we have an issue in defining “done”.

The time has come for systems and procedural assessment to step aside as the main focus of compliance and oversight.  It is not that systems and procedures are unimportant.  It is that data-driven assessment–and only data-driven assessment–is powerful enough to quickly and effectively identify issues within projects that otherwise go unreported.  For example, if we pull detailed data from the performance management systems that track project elements of cost, the roll-up should, theoretically, match the summarized data at the reporting level.  But this is not always the case.

There are two responses to this condition.  The first is that, if the variations are small–that is, within 1% or 2% of the actuals–we must realize that earned value management is a project management system, not a financial management system, and need not be exact.  This is a strong and valid assertion.  The second is that the proprietary systems used for reporting have inherent deficiencies in summarizing reporting.  Should the differences once again not be significant, then this too is a valid assertion.  But there is a point at which these assertions fail.  If the variation from the roll-up is more significant than (I would suggest) about 2%, then there is a systemic issue with the validity of the data that undermines the credibility of the project management systems.
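A sketch of the kind of check I am describing, with invented figures and the roughly 2% tolerance suggested above:

```python
# Roll up detailed elements of cost and compare against the reported summary.
# The data and the 2% tolerance are illustrative assumptions.
TOLERANCE = 0.02

detail_acwp = {"labor": 512.4, "material": 238.1, "subcontract": 301.7, "odc": 44.2}
reported_summary_acwp = 1150.0

rollup = sum(detail_acwp.values())
delta = abs(rollup - reported_summary_acwp) / reported_summary_acwp
status = "systemic data issue, investigate" if delta > TOLERANCE else "within tolerance"
print(f"rollup {rollup:.1f} vs reported {reported_summary_acwp:.1f} "
      f"({delta:.1%}): {status}")
```

Run across every reporting period and every supplier, a check this simple surfaces exactly the data validity problems that a paper review of systems and procedures will never catch.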

Checking off compliance with the EIA-748 criteria will not address such discrepancies, but a robust software solution that has the ability to handle such big data, the analytics to identify such discrepancies, and the flexibility to identify patterns and markers in the data that suggest an early indication of project risk manifestation will address the problem at hand.  The technology is now here to perform this operation and to do so at the level of performance expected in desktop operations.  This type of solution goes far beyond EVM tools or EVM engines.  The present generation of software possesses not only the ability to hardcode solutions out of the box, but also the ability to configure objects, conditional formatting, calculations, and reporting from the same data to introduce leading indicators across a wider array of project management dimensions aside from just cost and schedule.

 

Days of Future Passed — Legacy Data and Project Parametrics

I’ve had a lot of discussions lately on data normalization, including being asked what constitutes normalization when dealing with legacy data, specifically in the field of project management.  A good primer can be found at About.com, but there are also very good older papers out on the web from various university IS departments.  The basic principles of data normalization today consist of finding a common location in the database for each value, reducing redundancy, properly establishing relationships among the data elements, and providing flexibility so that the data can be properly retrieved and further processed into intelligence in such a way that the objects produced possess significance.
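As a toy illustration of those principles (the field names and values are invented), normalizing a flat legacy extract means factoring repeated values into their own tables and making the relationships explicit:

```python
import sqlite3

# Toy normalization of a flat legacy extract: contract data repeated on every row is
# factored into its own table and referenced by key. All identifiers are invented.
flat_rows = [
    ("C-001", "Contractor A", "1.1.1", "2018-07", 120.0),
    ("C-001", "Contractor A", "1.1.2", "2018-07",  95.0),
    ("C-001", "Contractor A", "1.1.1", "2018-08", 130.0),
]

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE contract (contract_id TEXT PRIMARY KEY, contractor TEXT);
    CREATE TABLE cost (
        contract_id TEXT REFERENCES contract(contract_id),
        wbs_id TEXT, period TEXT, acwp REAL,
        PRIMARY KEY (contract_id, wbs_id, period)
    );
""")
db.executemany("INSERT OR IGNORE INTO contract VALUES (?, ?)",
               {(r[0], r[1]) for r in flat_rows})
db.executemany("INSERT INTO cost VALUES (?, ?, ?, ?)",
               [(r[0], r[2], r[3], r[4]) for r in flat_rows])

for row in db.execute("""SELECT c.contractor, k.wbs_id, k.period, k.acwp
                         FROM cost k JOIN contract c USING (contract_id)"""):
    print(row)
```

The real work with legacy project data is, of course, orders of magnitude messier than this, but the principle is the same: one location for each value, redundancy removed, relationships made explicit so the data can be retrieved and processed into intelligence.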

The reason why answering this question is so important is because our legacy data is of such a size and of such complexity that it falls into the broad category of Big Data.  The condition of the data itself provides wide variations in terms of quality and completeness.  Without understanding the context, interrelationships, and significance of the elements of the data, the empirical approach to project management is threatened, since being able to use this data for purposes of establishing trends and parametric analysis is limited.

A good paper that deals with this issue was authored by Alleman and Coonce, though it was limited to Earned Value Management (EVM).  I would argue that EVM, especially in the types of industries in which the discipline is used, is pretty well structured already.  The challenge is in the other areas that are probably of more significance in getting a fuller understanding of what is happening in the project: the areas of schedule, risk, and technical performance measures.

In looking at the Big Data that has been normalized to date–and I have participated with others in putting a significant dent in this area–it is apparent that processes in these other areas lack discipline, consistency, completeness, and veracity.  By normalizing data in sub-specialties that have experienced an erosion in enforcing standards of quality and consistency, technology becomes a driver for process improvement.

A greybeard in IT project management once said to me (and I am not long in joining that category): “Data is like water, the more it flows downstream the cleaner it becomes.”  What he meant is that the more that data is exposed in the organizational stream, the more it is questioned and becomes a part of our closed feedback loop: constantly being queried, verified, utilized in decision making, and validated against reality.  Over time more sophisticated and reliable statistical methods can be applied to the data, especially if we are talking about performance data of one sort or another, that takes periodic volatility into account in trending and provides us with a means for ensuring credibility in using the data.
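As a simple example of what that downstream cleaning can look like in practice (the series and the smoothing factor are invented), even basic exponential smoothing separates the trend worth acting on from the period-to-period noise:

```python
# Simple exponential smoothing of a volatile monthly CPI series; the smoothed trend is
# what goes in front of decision makers, the raw series is what keeps being interrogated.
# The data and the smoothing factor are illustrative assumptions.
monthly_cpi = [0.98, 1.04, 0.91, 1.02, 0.95, 0.93, 0.89, 0.94]
alpha = 0.3   # smoothing factor: higher values react faster, lower values smooth more

smoothed = [monthly_cpi[0]]
for x in monthly_cpi[1:]:
    smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])

for raw, s in zip(monthly_cpi, smoothed):
    print(f"raw {raw:.2f}  smoothed {s:.2f}")
```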

In my last post on Four Trends in Project Management, I posited that the question wasn’t more or less data but utilization of data in a more effective manner, and identifying what is significant and therefore “better” data.  I recently heard this line repeated back to me as a means of arguing against providing data.  This conclusion was a misreading of what I was proposing.  One level of reporting data in today’s environment is no more work than reporting on any other particular level of a project hierarchy.  So cost is no longer a valid point for objecting to data submission (unless, of course, the one taking that position must admit to the deficiencies in their IT systems or the unreliability of their data).

Our projects must be measured against the framing assumptions in which they were first formed, as well as the established measures of effectiveness, measures of performance, and measures of technical achievement.  In order to view these factors one must have access to data originating from a variety of artifacts: the Integrated Master Schedule, the Schedule and Cost Risk Analysis, and the systems engineering/technical performance plan.  I would propose that project financial execution metrics are also essential in getting a complete, integrated, view of our projects.

There may be other supplemental data that is necessary as well.  For example, the NDIA Integrated Program Management Division has a proposed revision to what is known as the Integrated Baseline Review (IBR).  For the uninitiated, this is a process in which both the supplier and government customer project teams can come together, review the essential project artifacts that underlie project planning and execution, and gain a full understanding of the project baseline.  The reporting systems that identify the data that is to be reported against the baseline are identified and verified at this review.  But there are also artifacts submitted here that contain data that is relevant to the project and worthy of continuing assessment, precluding manual assessments and reviews down the line.

We don’t yet know the answer to these data issues and won’t until all of the data is normalized and analyzed.  Then the wheat from the chaff can be separated and a more precise set of data be identified for submittal, normalized and placed in an analytical framework to give us more precise information that is timely so that project stakeholders can make decisions in handling any risks that manifest themselves during the window that they can be handled (or make the determination that they cannot be handled).  As the farmer says in the Chinese proverb:  “We shall see.”

Taking Chances — Elements Needed in a Good Project Manager

Completed the quarterly meeting of the NDIA Integrated Program Management Division, among other commitments over the past couple of weeks (hence sparse blogging except on AITS.org), and was most impressed by a presentation given by Ed Miyashiro, Vice President of the Raytheon Company Evaluation Team (RCET).  I would say that the characteristics he identified as essential in a successful project manager, listed below, are needed regardless of area of expertise, taking into account differences of scale.

Master Strategist – Ensures program survival and future growth

Disciplined Manager – Executes contracts under cost, ahead of schedule with technical excellence

Shrewd Business Person – Maximized financial objectives and minimized risk

Engaged Leader – Leads for success up, down and outside the organization

Relationship Cultivator – Maintains and grows relationships across the broad global customer business communities

The third element describes “minimized risk.”  This is different from risk aversion or risk avoidance.  All human efforts involve risk.  I believe that the key is to take educated risks, knowing the probable opportunities.

 

IMPish Grin — The Connection for Technical Measures (and everything else)

Glen Alleman at his Herding Cats blog has posted his presentation on integrating technical performance measures in a cohesive and logical manner with project schedule and cost measurement.  Many in the DoD and A&D-focused project community are aware of the work a number of us have done in this area (my own paper is posted on the College of Performance Management library page here), but the work of Alleman, Coonce, and Price takes these concepts a step further.  I wrote an earlier post about the white paper, but the presentation demonstrates clearly the flow of logic in constructing a model in which technical performance is incorporated into the project plan through measures of effectiveness that are derived from the statement of work, and then makes the connection to measures of progress and measures of performance, clearly outlining the proper integration of the core elements of project planning, execution, and control.

The key artifact that ties the essential elements together–cost, schedule, and technical performance–is the Integrated Master Plan (IMP).  The National Defense Industrial Association Integrated Program Management Division’s (NDIA IPMD) Planning and Scheduling Excellence Guide (PASEG) (link broken) seems to forget this essential step–the artifact that is necessary to allow for the construction of a valid Integrated Master Schedule (IMS).  I think part of the reason for the omission is the mistaken belief that this is an unnecessary artifact–that it is a “nice to have” if the program sponsor remembers to put it in the contract deliverables.  This makes its construction negotiable and vulnerable as a discretionary cost item, which it clearly is not–or, at least, should not be.  Even the Wikipedia entry is confused by the classification of the IMP, characterizing it first as primarily a DoD-specific artifact, a contractual artifact, and–oh, by the way–a necessary step in civic and urban planning (also known as construction project management).  The PASEG does mention summary schedules, and perhaps in those rare cases, based on the work being performed and the contract type, some stripped-down kind of IMP will do, but regardless of what it is called (and IMP still serves as a good shorthand), the ability to trace measures of effectiveness to measures of progress is still needed in complex project management.

Thus the IMP is this: the fulcrum of integrated project management.  Tying the measures of effectiveness to specific tasks related to the WBS, it is the IMP that provides the roadmap to the working, day-to-day tools that will be used to measure progress–cost, schedule, and technical achievement and assessment against plan, all informed by risk.  For those of us in the technology community, continuing to sell discrete, swim-lane-focused apps that do not support this construction is to be badly out of date.
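To make the point concrete, here is a bare-bones sketch, with invented identifiers, of the traceability the IMP supplies: a measure of effectiveness ties to IMP events, accomplishments, and criteria, which in turn tie to IMS tasks and the WBS/control accounts where cost is collected.

```python
# A bare-bones sketch of IMP traceability. Every identifier below is invented for
# illustration; real IMPs carry many events, accomplishments, and criteria per MOE.
imp_trace = {
    "MOE: 500 nm unrefueled range": {
        "imp_event": "E2 Preliminary Design Review complete",
        "accomplishment": "A2.3 Propulsion design meets range allocation",
        "criteria": ["C2.3.1 Fuel-burn analysis approved"],
        "ims_tasks": [
            {"task_id": "T-1042", "wbs": "1.2.3", "control_account": "CA-1.2.3"},
            {"task_id": "T-1043", "wbs": "1.2.3", "control_account": "CA-1.2.3"},
        ],
    }
}

for moe, trace in imp_trace.items():
    print(moe)
    for task in trace["ims_tasks"]:
        print(f"  {trace['imp_event']} -> {task['task_id']} -> {task['control_account']}")
```

With this thread in place, cost, schedule, and technical achievement are measured against the same plan rather than in separate swim lanes.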

Family Affair — Part I — Managerial Economics of Projects, Microeconomic Foundations, and Macro

A little more than a week ago I had an interesting conversation on a number of topics with colleagues in attending the National Defense Industrial Association Integrated Program Management Division (NDIA IPMD).  A continuation of one of those discussions ended up in the comments section of my post “Mo’Better Risk–Tournaments and Games of Failure Part II” by Mark Phillips.  I think it is worthwhile to read Mark’s comments because within them lie the crux of the discussion that is going on not only in our own community, but in the country as a whole, particularly in the economics profession, that will eventually influence and become public policy.*

The intent of my posts on tournaments and games of failure consisted of outlining a unique market variation (tournaments) and the high cost of failure (and entry), that results from this type of distortion.  I don’t want to misinterpret his remarks but he seems to agree with me on the critique but can’t think of an alternative, emphasizing that competitive markets seem to be the best system that we have come up with.  About this same time a good friend and colleague of mine spent much energy bemoaning the $17 trillion debt in light of the relatively small amounts of money that we seek to save in managing projects.  Both contentions fail on common logical fallacies, but I don’t want to end the conversation there, because I understand that they are speaking in shorthand in lieu of a formal syllogism.  Their remarks are worthy of further thought and elaboration, especially since they are held by a good many people who come from scientific, mathematical, engineering, and technical backgrounds.

I will take the last one first, which concerns macroeconomics, because you can’t understand micro without knowing and understanding the environment in which you operate.  Much of this understanding is mixed in with ideology, propaganda, and wishful thinking.  Bill Gross at Pimco is just the latest example of someone who decided to listen to the polemicists at CNBC and elsewhere–and put his investors’ money where his mouth was.  You have to admit this about the lords of finance–what they lack in knowledge they make up for in bluster, especially when handing out bad advice.

Along these lines, a lot of energy gets expended about federal debt.  There have been cases–unique and well studied–where countries can hold too much debt, especially if their economic fundamentals show significant weakness, but a little common sense places things in perspective.  Of the $17 trillion in debt, a little over $12 trillion is held by the public in the form of bonds and $5+ trillion is held by other government agencies, particularly the trust funds for things like Social Security.  The bonds are assets like any other: assets in an investment portfolio, assets for any institution that holds them.  U.S. treasury bills are very safe investments and so are traded worldwide, even by other countries.  Some of this is often spun as a negative, but given that the U.S. dollar and U.S. securities are deemed safe, it turns out that–short of us doing something stupid like defaulting on our obligations–the U.S. is a stabilizing force in the world economy.

So $12 trillion in debt held by the public is about 74% of Gross Domestic Product, that is, the value of goods and services produced in the United States in a given year.  This is about the same level at which the debt stood relative to GDP around 1950.  In 2007 it stood at about 37% of GDP, but then we had this thing called the Great Recession (though for those of us who run a business it was hard to tell the difference between it and a depression).  Regardless, the country didn't go bankrupt in 1950 (or in the 1940s, when the publicly held debt-to-GDP ratio was over 100%).  Great Britain didn't go bankrupt during its period of hegemony with debt-to-GDP ratios much higher.  When making the comparison of government finances to households, folks like the Peterson-funded Fix the Debt crowd and the Washington Post speak like Victorian moralists about how no responsible household would have garnered such debt.  Well, household debt in 2014 stands at $11.63 trillion.  Average credit card debt is about $15K and average mortgage debt is $153,500.  Then there are other types of debt, such as student loans, on top of that, which average about $35K per household.  Given that median U.S. household income is a little over $51K, the debt-to-income ratio of households is about 400% of annual income.  Given that comparison our national finances are anything but profligate.** One could make a very good case that we are underinvesting in capital improvements that would contribute to greater economic growth and opportunity down the line.
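For readers who want to check the arithmetic, the household ratio above follows directly from the round numbers quoted; this back-of-the-envelope sketch uses only the approximate 2014 figures cited in the text, not authoritative data.

```python
# Back-of-the-envelope check of the household ratio cited above.
# Figures are the approximate 2014 values quoted in the text.
mortgage, credit_card, student_loans = 153_500, 15_000, 35_000
median_income = 51_000

debt_to_income = (mortgage + credit_card + student_loans) / median_income
print(f"Household debt-to-income ratio: {debt_to_income:.0%}")  # ~399%, roughly 400%
```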

But there is a good reason for the spike in national debt that we saw beginning in 2008.  In case you missed it, the housing bubble burst in 2007.  This caused an entire unregulated field of securities to become virtually worthless overnight.  The banking and insurance assets that backed them lost a great deal of value, homeowners lost equity in their homes, investors lost value in their funds, and the construction industry and its dependencies tanked, since a large part of the bubble consisted not only of overvalued real property but of extremely high vacancy rates brought on by overbuilding.  Bank lending seized up, businesses found themselves without liquidity, and, seeing the carnage around them, people who had jobs tightened their belts while the savings rate spiked; those who lost jobs tapped into savings and retirement funds, which had lost a large part of their value.  The total effect was that the economy took a nose dive.  In all, about $8 trillion of wealth was wiped out almost overnight.  Yes, those Wall Street, banking, and real estate self-proclaimed geniuses reading their Drucker, Mankiw, and Chicago School books (when not leafing through The Fountainhead for leisure) managed to use other peoples' money and not only lose a good part of it, but sink the world economy in the process.  Millions of people were thrown out of work, and businesses–many of which were household names–closed for good.  People not only lost their jobs but also their homes, through no direct fault or negligence of their own.  Most of those who found new jobs were forced to work for much less money.  Though the value of their homes (the asset) fell significantly, the obligations for that asset under their mortgages remained unchanged.  So much for shared moral hazard in real estate finance.

Economic stabilizers that have been in place for quite some time (unemployment insurance, food stamps, etc.) came into play at the same time that tax collections fell precipitously, since fewer people were making money.  The combination of the social insurance stabilizers and President Obama's mix of new spending and tax cuts to provide a jolt of temporary stimulus–despite polemicists to the contrary–was just enough to stop the fall, but not enough to quickly reverse it.  Even with the additional spending, the ratio of debt to GDP would not have been so marked if the economy had not lost so much value.  But that is the point of stabilizers and stimulus.  At that point it just doesn't matter, as long as what you do stops the death spiral.

A lot of energy then gets expended at this point about private vs. public expenditures, who received or deserved a bailout, and so on.  Rick Santelli–another of the geniuses at CNBC who have been consistently wrong about just about everything–had his famous rant about bailing out "losers" (after the bankers and investors got their money) without blushing.  This is because, I think, people–even those who are well educated–have been convinced that political economy is a "soft" science where preferences are akin to belief systems along the lines of astrology, numerology, and other forms of magical thinking.  You pick your side, like shirts vs. skins.

This kind of thinking could not be more wrong.  It is wrong not only because our scientific methods have come along pretty far, but also because the "ideological" thinking that has been sold in regard to political economy undermines the ability of citizens in a democratic republic to understand their role in it.  "A nation is as great, and only as great, as her rank and file," said historian and later president Woodrow Wilson.  This proposition is as true today as it was a hundred years ago.  More urgently, it is wrong because after the world's experience with disastrous semi-religious ideologies and cults of personality in the 20th century, capped off by Radical Islam at the start of our own century, the last thing we need is more bigotry and stupidity that demands sacrifice and revolution, harming millions in the process, in the name of some far-off Utopian future or to regain some non-existent idealized past.

On the everyday level, it is important to end the magical thinking in this area because that is the only way that those of us who are not masters of the universe–with a billion or so in the bank and politicians and media clowns willing to backstop our bad decisions–can survive.

Fallacy Number One: our economic structure is based on “private enterprise” with public action an imposition on that system

I begin refuting this fallacy with this image:

U.S. dollar

Notice that our currency is issued by a central bank.  This central bank is the Federal Reserve.  George Washington, our first President, is on the $1 bill.  Other dead presidents are on our other denominations.  On the front of the dollar bill it clearly states "The United States of America."  That is because U.S. currency, and the economy on which it is built, is a construction of the U.S. government.  The rules that govern market behavior are established within guidelines prescribed by the government of the United States or the various states.  The People of the United States, as in "We the People of the United States…" that begins the Preamble to the Constitution, establish the currency and the good faith and credit of the country.  J.P. Morgan Chase, Bill Gates, the Koch brothers, Bitcoin, and every other participant in the economy are subject to the will of the sovereign–in this case the People of the United States–that establishes this currency.

This is important to know when talking about things like the debt.  For example, once the economy is back to growing at trend, should we still consider the debt level too high, we can marginally raise taxes to balance the budget and even pay down the debt, as we were doing just 13 years ago.  It is also important because it allows us to use our critical thinking skills, even though we may not be professional economists, when faced with specious claims, like the debunked study by economists Reinhart and Rogoff that purportedly showed a link to economic stagnation when countries exceeded a 90% debt-to-GDP ratio.  For example, since the Federal Reserve is the central bank, should we find that there is a magic point at which we are concerned about debt as a percent of GDP, the Fed can buy back bonds at low prices that were previously sold at higher prices, thereby reducing the debt-to-GDP ratio simply by swapping paper.  Of course this would be ridiculous.  The important metrics are whether we can make the annual payments and what the interest costs are as a percentage of GDP.

Understanding this essential nature of the economy in which we operate, combined with our critical thinking skills, can also inform us regarding why the Fed bought securities to provide cash to the economy–what is known as quantitative easing (QE).  Paul Krugman, the Nobel economist and New York Times columnist, has also posted some useful slides here.  This was done in several stages by the Federal Reserve because the economy was in what is called a liquidity trap, with short-term interest rates near zero.  Thus, there were not enough liquid assets (cash) to keep businesses and banking going.  The housing market and other businesses dependent on liquidity and low interest rates were also deeply depressed.  In project management, start-up costs must oftentimes be financed.  Businesses don't have a vault of money sitting around just in case they get that big contract.  The bank of last resort–the Fed–used its authority to buy up existing bonds to prime the pump.  Rather than some unheard-of government intervention in the "private" economy, the Fed did its job.

As to political economy, Thomas Paine, the publicist of the American Revolution, put it best in Agrarian Justice, and the common sense that he expressed over two hundred years ago is still true today:  "Personal property is the effect of society; and it is as impossible for an individual to acquire personal property without the aid of society, as it is for him to make land originally…Separate an individual from society, and give him an island or a continent to possess, and he cannot acquire personal property. He cannot be rich."  Private property, as Paine and his contemporary Adam Smith defined it, is real property or the means of production, not personal possessions.  We are long past the agrarian economies observed by Paine and Smith, where wealth was defined by land holdings.  But Smith did observe in one mention in The Wealth of Nations that the systems he observed–those markets that existed in his day–acted as if controlled by an "invisible hand."  Much cult-like malarkey has been made of this line, but what he was describing is what we now know of as systems theory or systems engineering.

Given this understanding, one can then make two essential observations.

First, that our problems where bogeyman numbers are used ($17 trillion!) aren’t so scary when one considers that we are a very large and very rich country.  We can handle this without going off the rails.  Is there a point where annual deficits are “bad” from a systems perspective?  Yes, and we can measure these effects and establish models to inform our decision making.  Wow–this sounds a lot like project management but on a national scale, with lawyers, lobbyists, and politicians involved to muck things up and muddy the waters.

The other observation is that we can also see how government policy–not some natural order dictated by "globalization"–was responsible for exposing manufacturing workers to foreign competition and undercutting the power of domestic unions.  It demonstrates how the shrinking of the middle class since 1980 was also the result of government action, and how the slow recovery, with its lack of emphasis on either job creation or wage protection, was also the result of government action.

Those who throw up their hands and say that there is nothing to be done because the rich always find a way to avoid responsibility and accountability are not only wrong from an historical and legal perspective, they also commit the sin of omission, which is unforgivable: letting something bad happen through cynicism, indecisiveness, or apathy.

This then leads us to Fallacy Number Two:  Capitalism Is Necessarily Complementary to Competition and Democracy

Mark Phillips’ on-line comment paraphrased Churchill’s observation that our system is one that, while imperfect, is the best we’ve found.  But Churchill was not speaking of free market fundamentalism or our current version of capitalism.  The actual quote is: “Democracy is the worst form of government, except for all those other forms that have been tried from time to time.”  (Churchill, House of Commons, November 11, 1947).  It is hard to believe today that the conflict in Western thought before the Second World War, and the topic on which Churchill was reflecting, was not between Democracy and Totalitarianism and its ilk.  For most Europeans, as documented extensively in Tony Judt’s magisterial work Postwar: A History of Europe Since 1945 (2005), the choice was seen as being between Fascism and Communism, liberal democracy being viewed as largely ineffective.

Furthermore, the democracy to which Churchill was referring, in both the United States and the United Kingdom at the time, was much more closely organized around social democracy, where the economic system is subject to the goals of democratic principles.  This was apart from the differences in the forms under which each democracy was organized:  the U.S. as a constitutional, representative system with a bicameral legislature, a presidency, and checks and balances; the U.K. as a parliamentary, constitutional monarchy.  The challenge in 1947 was rebuilding the Western European countries, and those on the Asian periphery, to be self-sustaining and independent, based on democratic principles and republican virtues, as a counter to Soviet (and later Chinese) domination.  The discussion–and choice–had thus shifted from systems that assumed people were largely economic actors, where the economic system dictated the terms of the political system, to one where a political system based on natural rights and self-government dictated the terms of the economic system.

Mark comments that competition is the best system that we have found in terms of economics, and I don't necessarily disagree.  But my level of agreement or disagreement depends on how we define competition and where we find it, whether it approximates what we call capitalism, and how that squares against democratic processes and republican institutions.

For example, the statement as it is usually posited also assumes that the “market picks winners” in some sort of natural selection.  Aside from committing the logical fallacy of the appeal to nature, it is also a misunderstanding of the nature of markets, how they work, and what they do.  As a government contract negotiator, contracting officer, and specialist for a good part of my career, understanding markets is the basis of acquisition strategy.

Markets set prices and, sometimes, they can also reflect consumer preferences.  In cases where competition works effectively, prices are driven to a level that promotes efficiency and the drive for newer products that, consequently, produce lower prices or greater value to the consumer.  Thus, competition, where it exists, provides the greatest value to the greatest number of people.  But we know this is not always the case in practice.  Also, just to be clear, what markets do not do is naturally elect someone, reward merit, define “makers” or “takers,” nor select the best ideas or the most valid ones.  It often doesn’t even select the best product among a field of competitors.  Markets focus on price and value, and they don’t even do that perfectly.

The reason for this is that there are no perfect markets.  Under classical economics it is assumed that the consumer has all of the information he or she needs in order to make a rational decision and that no supplier or buyer can dictate the terms of the market.  But no one has complete information.  Most often they don't even have sufficient information to make a completely rational selection.  This was von Hayek's insight in his argument against central planning, before we discovered its tangible evils under both Soviet and Chinese communism.  This same insight speaks against monopoly and domination of a market by private entities.  The field that studies these conditions is called information economics.

Given that there is no such thing as a perfectly competitive market (since we do not live in a universe that allows for perfection), information economics has documented that the relationship among the independent players in a market is asymmetrical.  Some people have more information than others and will try to deny that information to others.  Technology is changing the fundamentals of information economics.

Compounding this reality is that there are also different levels of competition defining the various markets in the United States, depending on vertical, product, industry, niche, and so on.  Some approximate competitive environments, some are monopolistic, and others oligopolistic.  There can also be monopolistic competition among a few large firms.  Markets where competition is deemed destructive to the public interest (predatory competition), or where there is effectively a single buyer for public goods (monopsony), are usually highly regulated.  How these markets develop is documented in systems theory.  For example, many markets start out competitive, but a single market actor or a limited set of actors is able to drive out competitors and then use its market power to dictate terms.  Since we operate in a political economy, rent-seeking behavior (seeking government protection through patent, intellectual property, copyright, and other monopolies, as well as subsidies) is common and is probably one of the most corrupting influences in our political system today.

Thus, there is a natural conflict between our democratic principles and an economic system–itself established by the political system–that metes out economic rewards hierarchically through imperfect markets and rent-seeking.  This is why capitalism can morph itself and coexist in Leninist China and autocratic Russia.  Given this natural conflict, our institutions have passed laws that modify and regulate markets to make them behave in a manner that serves the public and ensures the positive benefits of competitive markets.  They have also passed laws that play into rent-seeking behavior and encourage wealth concentration.

A good example of both types of law, and of the conflict outlined above, is the U.S. health care system and the Affordable Care Act (which I've previously written about), also known as Obamacare.  There are several aspects of the new law, and sections of the omnibus bill that passed under its rubric read almost as if they were separate laws with conflicting goals.

For example, the ACA established the healthcare exchanges on which plans could be purchased from private insurance providers.  This aspect of the law set up a competitive marketplace with information about each plan clearly provided to the consumer.  In addition, the ACA passed what had previously been known as the Patient's Bill of Rights, which established minimal levels of service and outlawed some previously predatory and unethical practices.  This structure is the real-world analogue of a competitive market that is regulated to outlaw abusive practices.

At the same time, other portions of the bill prohibited the federal government from using its purchasing power to get the best price for prescription drugs, and also prohibited competition from Canadian drugs.  This is the analogue of rent-seeking.  When combined with laws that establish drug patent monopolies, which allow companies to keep prices 300 to 400 percent above marginal cost, it is no wonder that per capita expenditures on healthcare are almost twice those of any other developed nation, though the cost seems to be coming down as a result of the competitive market reforms from the healthcare exchanges.

Revisiting our earlier discussion of debt: were our healthcare expenditures in line with those of other developed countries, we would be seeing budget surpluses well into the future.  The main driver of deficits is largely centered in Medicare.  Aside from cost cutting, another approach would be to expand, instead of shrink, the pool of middle-class workers who make up the broadest and largest source of revenue, with wages and salaries at least keeping pace with productivity gains.

In sum, competition is a useful tool, and delegation of economic decisions is largely in line with our republican virtues.  But, I think, it is clear that there are hideous market distortions and imperfections that, in the end, undermine competition.  Many of these distortions and imperfections come about from competitive markets themselves, which are then undermined once a market entity has gained control or undue influence.  Systems theory informs us about how markets behave and how we can regulate them to maximize the benefits of competition.

But the obscene fortunes that are held by a very small percentage of individuals–and the power that attends to them–represent in very real terms a danger to the institutions that we value.  So whether we can think of a better system is not the issue; taking incremental steps to reestablish republican virtues and democratic values is imperative.

Fallacy Number Three:  Microfoundations Determine Macro

Given the multiplicity of markets–and the mathematics and modeling that attend systems theory–it is clear that aggregation of microeconomic dynamics will not explain macroeconomic behavior–at least not as it is presently understood and accepted by the academic field.  Economics is a field that need not be "soft," yet it failed miserably to anticipate the housing bubble and its bursting, and the blind alleys that some economists advocated in the wake of the crisis misled policy makers and exacerbated human suffering.  Europe is still under the thumb of German self-interest and an "expansionary austerity" ideology that resists empirical evidence.  Apparently 20% unemployment is just the corrective that peripheral countries need, despite Germany's own experience of the consequences of such policies in the 1930s–and extremist parties are rising across Europe as a result.

Because the economy is a complex adaptive system, macroeconomic policy changes the behavior, structure, and terms of a market.  For example, ignoring antitrust legislation and goals to allow airlines and cable companies to consolidate encourages rent-seeking.  "Deregulating" such industries establishes oligopolistic and monopolistic markets.  These markets dictate the behavior of the entities that operate in them, not the other way around.  The closed-loop behavior of this system then becomes apparent: successful rent-seeking encourages additional rent-seeking.  The consumer is nowhere in sight in this scenario.

Thus, we can trace the macro behavior of each system and then summarize them to understand the economy as a whole.  But this is a far cry from basing these systems on the behavior and planning of the individual entities at the microeconomic level.  Our insights from physics and tracing other complex systems, including climate, inform our ability to see that macro behavior can be summarized given the right set of variables, traced at the proper level of detail.

This then leads us to the fundamentals of the Managerial Economics of Projects, which I will summarize in my next post.

 

*I usually try to steer clear of these types of posts, especially since those that skirt politics and polemics tend to be contentious, but the topic is too important in understanding the area of managerial economics that involves projects and systems dynamics.  A blog, after all, is a public discussion, not a submission of an academic paper.  For those unfamiliar, my educational background is in political science, economics, and business (undergraduate work and degree), and my graduate work and degrees focused on world and American history, business, and organizational behavior.  This is apart from my professional and other activities in software engineering, systems engineering, project management, group psychology, and the sciences, including marine biology.

**So the reader will not be scared of the big number for household debt-to-income ratios, keep in mind that the largest of these liabilities (and assets) is long-term.  For most mortgages this is 30 years.

Frame by Frame: Framing Assumptions and Project Success or Failure

When we wake up in the morning we enter the day with a set of assumptions about ourselves, our environment, and the world around us.  So too when we undertake projects.  I've just returned from the latest NDIA IPMD meeting in Washington, D.C., and the most intriguing presentation at the meeting was given by Irv Blickstein regarding a RAND root cause analysis of major program breaches.  In short, under the Nunn-McCurdy amendment, first passed in 1982, a major defense program is in breach when its cost exceeds its projected baseline by more than 15%.
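The breach test itself is simple arithmetic.  Here is a minimal sketch of the 15% test as described in the text; the function name and example figures are illustrative assumptions, and Nunn-McCurdy in practice defines additional thresholds beyond the one cited here.

```python
# Minimal sketch of the breach test described above: flag a program whose
# current cost estimate exceeds its projected baseline by more than 15%.
# Only the 15% figure cited in the text is illustrated here.
def nunn_mccurdy_breach(baseline_cost: float, current_estimate: float,
                        threshold: float = 0.15) -> bool:
    growth = (current_estimate - baseline_cost) / baseline_cost
    return growth > threshold

# Example: a $10B baseline program now estimated at $11.8B has breached (18% growth).
print(nunn_mccurdy_breach(10_000_000_000, 11_800_000_000))  # True
```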

The issue of what constitutes programmatic success and failure has generated a fair amount of discussion among the readers of this blog.  The report, which is linked above, is full of useful information regarding Major Defense Acquisition Program (MDAP) breaches under Nunn-McCurdy, but for purposes of this post readers should turn to page 83.  In setting up a project (or program), project/program managers must make a set of assumptions regarding the "uncertain elements of program execution" centered around cost, technical performance, and schedule.  These assumptions are what are referred to as "framing assumptions."

A framing assumption is one for which there are signposts along the way to determine whether an assumption regarding the project/program has changed over time.  Thus, according to the authors, the precise definition of a framing assumption is "any explicit or implicit assumption that is central in shaping cost, schedule, or performance expectations."  An interesting aspect of their perspective and study is that the three-legged stool of program performance relegates risk to serving as a method that informs the three key elements of program execution, not as one of the three elements.  I have engaged in several conversations over the last two weeks regarding this issue.  Oftentimes the question goes: can't we incorporate technical performance as an element of risk?  Short answer:  No, you can't (or shouldn't).  Long answer: risk is a set of methods for overcoming the implicit invalidity of single-point estimates found in too many of the systems being used (like estimates-at-complete, estimates-to-complete, and the various indices found in earned value management), as well as a means of incorporating qualitative environmental factors not otherwise categorizable; it is not an element essential to defining the end item being developed and produced.  Looked at another way, if you are writing a performance specification, then performance is a key determinant of program success.
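To show what I mean by single-point estimates, here is a sketch of the standard EVM indices mentioned above.  The formulas are the conventional ones (and EAC = BAC / CPI is only one of several common EAC formulas); the sample numbers are illustrative.

```python
# Sketch of the single-point EVM estimates mentioned above (standard formulas).
def evm_indices(bcws: float, bcwp: float, acwp: float, bac: float) -> dict:
    cpi = bcwp / acwp          # cost performance index
    spi = bcwp / bcws          # schedule performance index
    eac = bac / cpi            # estimate at complete (one common formula)
    etc = eac - acwp           # estimate to complete
    return {"CPI": cpi, "SPI": spi, "EAC": eac, "ETC": etc}

# The point in the text: each result is a single number with no expression of
# uncertainty, which is why risk methods (ranges, simulation) must sit alongside
# these indices rather than being folded into "performance" itself.
print(evm_indices(bcws=120.0, bcwp=100.0, acwp=130.0, bac=1000.0))
```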

Additional criteria for a framing assumption are also provided in the RAND study.  The assumptions must be determinative, that is, the consequences of the assumption being wrong significantly affect the program in an essential way.  They must also be unmitigable, that is, the consequences of the assumption being wrong are unavoidable.  They must be uncertain, that is, whether they will turn out to be right or wrong cannot be determined in advance.  They must be independent, not dependent on another event or series of events.  Finally, they must be distinctive, setting the program apart from other efforts.
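Treated as a screening checklist, the five criteria are easy to apply mechanically.  Below is a minimal sketch; the field names mirror the five tests just described, while the class name and example content are hypothetical.

```python
# Minimal sketch of the RAND criteria as a screening checklist.
from dataclasses import dataclass

@dataclass
class FramingAssumptionCandidate:
    statement: str
    determinative: bool   # being wrong significantly affects the program
    unmitigable: bool     # consequences of being wrong are unavoidable
    uncertain: bool       # cannot be resolved right/wrong in advance
    independent: bool     # not dependent on another event or assumption
    distinctive: bool     # sets this program apart from other efforts

    def qualifies(self) -> bool:
        return all((self.determinative, self.unmitigable, self.uncertain,
                    self.independent, self.distinctive))

# Hypothetical example: track it and retest it at each program review.
candidate = FramingAssumptionCandidate(
    statement="The engine can be derived from an existing design",
    determinative=True, unmitigable=True, uncertain=True,
    independent=True, distinctive=True)
print(candidate.qualifies())  # True
```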

RAND then applied the framing assumption methodology to a number of programs.  The latest NDIA meeting was an opportunity to provide an update of conclusions based on the work first done in 2013.  What the researchers found was that framing assumptions should be kept at a high level, developed early in a program's life cycle, and reviewed on a regular basis to determine their continued validity.  They also found that programs breached the threshold when a framing assumption became invalid.  Project and program managers, as well as requirements personnel, have at least intuitively known this for quite some time.  Over the years, this is the reason given for requirements changes and contract modifications over the course of development that result in cost, performance, and schedule impacts.

What is different about the RAND study is that it outlines a practical process for making these determinations early enough for a project/program to be adjusted to changing circumstances.  For example, the framing assumptions for each MDAP in the study could be boiled down to four or five, which are easily tested against reality during the milestone and other reviews held over the course of a program.  This is particularly important given the lengthened time-frames of major acquisitions from development to production.

Looking at these results, my own observation is that this is a useful tool for identifying course corrections that are needed before they manifest into cost and schedule impacts, particularly given that leadership at PARCA has been stressing agile acquisition strategies.  The goal here, it seems, is to allow for course corrections before the inertia of the effort leads to failure or–more likely–the development and deployment of an end item that does not entirely meet the needs of the Defense Department.  (That such “disappointments” often far outstrip the capabilities of our adversaries is a topic for a different post).

I think the jury is still out on whether course corrections, given the inertia of work and effort already expended at the point that a framing assumption would be tested as invalid, can ever truly be offsetting to the point of avoiding a breach, unless we then rebrand the existing effort as a new program once it has modified its structure to account for new framing assumptions.  Study after study has shown that project performance is pretty well baked in at the 20% mark.  For MDAPs, much of the front-loaded effort in technology selection and application has been made by that point.  After all, systems require inputs, and to change a system requires more inputs, not fewer, to overcome the inertia of all of the previous effort, not to mention work in progress.  This is basic physics, whether we are dealing with physical systems or complex adaptive (economic) systems.

Certainly, more efficient technology that affects the units of measurement within program performance can result in cost savings or avoidance, but that is usually not the case.  There is a bit of magical thinking here: that commercial technologies will provide a breakthrough to allow for such a positive effect.  This is an ideological idea not borne out by reality.  The fact is that most of the significant technological breakthroughs we have seen over the last 70 years–from the microchip to the internet and now to drones–have resulted from public investments, sometimes in public-private ventures, sometimes in seeded technologies that are then released into the public domain.  The purpose of most developmental programs is to invest in R&D to organically develop technologies (utilizing the talents of the quasi-private A&D industry) or provide economic incentives to incorporate technologies that do not currently exist.

Regardless, the RAND study has identified an important concept in determining the root causes of overruns.  It seems to me that a formalized process of identifying framing assumptions should be applied at the inception of the program.  The majority of the assessments to test the framing assumptions should then be made prior to the 20% mark as measured by program schedule and effort.  It is easier and more realistic to overcome the bow-wave of effort at that point than further down the line.

Note: I have modified the post to clarify my analysis of the “three-legged stool” of program performance in regard to where risk resides.

I Can See Clearly Now (The Risk Is Gone) — Managing and Denying Risk in PM

Just returned from attending the National Defense Industrial Association’s Integrated Program Management Division (NDIA IPMD) quarterly meeting.  This is an opportunity for both industry and government to share common concerns and issues regarding program management, as well as share expertise and lessons learned.

This is one among a number of such forums that distinguishes the culture in aerospace and defense from other industry verticals.  For example, in the oil and gas industry the rule of thumb is not to share such expertise across the industry, except in very general terms through venues such as the Project Management Institute, since the information is considered proprietary and competition sensitive.  I think, as a result, that the PM discipline suffers for this lack of cross-pollination of ideas.  The result, in IT infrastructure, is an approach built on customization and stovepipes, where solutions tend to be expensive, marked by technological dead ends and single points of failure, and accompanied by a high rate of IT project failure.

Among a very distinguished group of project management specialists, one of the presentations that really impressed me with its refreshingly candid approach was given by Dave Burgess of the U.S. Navy Naval Air Systems Command (NAVAIR), entitled "Integrated Project Management: 'A View from the Front Line'."  The charts from his presentation will be posted on the site (link in the text on the first line).  Among the main points that I took from his presentation are:

a.  The time from development to production of an aircraft has increased significantly since the 1990s.  The reason for this condition is implicit in the way that PM is executed.  More on this below in items d and e.

b.  FY 2015 promises an extremely tight budget outlook for DoD.  From my view of his chart, it is almost as if 2015 is the budgetary year that Congress forgot.  Supplemental budgets somewhat make up for the shortfalls prior to and after FY 2015, but the next FY is the year that the austerity deficit-hawk pigeons come home to roost.  From a PM perspective this represents a challenge to program continuity and sustainability.  It forces choices within the program that may leave the program manager choosing the lesser of two evils.

c.  Beyond the standard metrics provided by earned value management, it is necessary for program and project managers to identify risks, which requires leading indicators to inform future progress.

This is especially important given the external factors in items a and b above.  Among his specific examples, Mr. Burgess demonstrated the need for integration of schedule and cost in the development of leading indicators.  Note that I put schedule ahead of cost in interpreting his data; in looking at his specific examples there was an undeniable emphasis on the way in which schedule drives performance, given that it is a measure of the work that needs to be accomplished with (hopefully) an assessment of the resources necessary to accomplish the tasks in that work.  For example, Mr. Burgess used bow waves to illustrate that the cumulative scope of the effort, as the program ramps up over time, will overcome the plan if execution is poor.  This is as much a law of physics as any mathematical proof.  No sky-hooks exist in real life.
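The bow-wave dynamic is easy to illustrate numerically.  The sketch below uses made-up monthly figures (not program data) to show how scope deferred each period piles up in front of a shrinking amount of remaining schedule.

```python
# Sketch of the "bow wave" described above: when accomplishment lags the plan
# each period, unfinished scope accumulates in front of the remaining schedule.
# The monthly figures are illustrative assumptions only.
planned_per_month = [10, 15, 20, 25, 30, 30]      # planned scope by period
accomplished_per_month = [8, 11, 14, 16, 18, 19]  # actual accomplishment

backlog = 0
for month, (plan, done) in enumerate(zip(planned_per_month, accomplished_per_month), 1):
    backlog += plan - done
    remaining_months = len(planned_per_month) - month
    print(f"Month {month}: deferred scope so far = {backlog}, "
          f"months left to absorb it = {remaining_months}")
# The deferred scope grows every period while the time left to absorb it
# shrinks--that widening gap is the bow wave that overcomes the plan.
```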

From my perspective in PM, cost is a function of schedule.  All too often I have seen cases where the performance measurement baseline (PMB) is developed apart from, and poorly informed by, the integrated master schedule (IMS).  This is not only foolhardy, it is wrong.  The illogic of doing so should be self-evident, but the practice persists.  It mostly exists because of the technological constraints imposed by stovepiped PM IT systems, which then drive both practice and the perception of what industry views as possible.

Thus, this is not an argument in favor of the status quo; it is, instead, an argument to dump the IT tool vendors who refuse to update their products and whose only interest is to protect market share and keep their proprietary solutions sticky.  The concepts of sunk costs vs. prospective costs are useful in this discussion.  Given the reality of the tight fiscal environment in place and the greater constraints to come, the program and project manager faces a choice between paying recurring expenses for outdated technologies to support their management systems, or selecting and deploying new ones that will reduce their overhead and provide better and quicker information.  This allows them to keep people who, despite the economic legend that robots are taking our jobs, still need to make decisions in effectively managing the program and project.  It takes a long time and a lot of money to develop an individual with the skills necessary to manage a complex project of the size discussed by Mr. Burgess, while software technology generations average two years.  I'd go with keeping the people and seeking new, innovative technologies on a regular basis, since the former will always be hard and expensive (if done right) and the latter, for the foreseeable future, will continue on a downward cost slope.  I'll expand on this in a later post.

d.  There is a self-reinforcing, dysfunctional, systemic problem that contributes to the condition described in item "a": the disconnect between most-likely estimates of the cost of a system and the penchant of the acquisition system to award based on a technically acceptable approach that is the lowest bid.  This encourages unrealistic expectations in forming the plan once the contract is awarded; the plan is then eventually modified, through various change rationales, in ways that tend to bring the total scope back to the original internal most-likely estimate.  Thus, Procuring Contracting Officers (PCOs) are allowing contractors to buy in, a condition contrary to contracting guidance, and it is adversely affecting both budget and program planning.

e.  That all too often program managers spend time denying risk in lieu of managing risk.  By denying risk, program and project managers focus on a few elements of performance that they believe give them an indication of how their efforts are performing.  This perception is reinforced by the limited scope of the information looked at by senior personnel in the organization in their reports.  It is then no surprise that there are “surprises” when reality catches up with the manager.

It is useful to note the difference between program and project management in the context of the A&D vertical.  Quite simply, in this context, a program manager is responsible for all of the elements of the system being deployed.  For the U.S. Navy this includes the entire life-cycle of the system, including logistics and sustainment after deployment.  Project management in this case covers one element of the system; for example, development and production of a radar, though there are other elements of the program in addition to the radar.  My earlier posts on the ACA program–as opposed to the healthcare.gov site–are another apt example of these concepts in practice.

Thus, program managers, in particular, need information on all of the risks before them.  This would include not only cost and schedule risk, which I would view as project management level indicators, but also financial and technical risk at the program level.  Given the discussions this past week, it is apparent that our more familiar indicators, while useful, require a more holistic set of views that both expand and extend our horizon, while keeping that information “actionable.”  This means that our IT systems used to manage our business systems require more flexibility and interoperability in supporting the needs of the community.
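As a closing illustration of that more holistic view, here is a minimal sketch of a program-level snapshot that carries cost and schedule indicators alongside financial and technical risk.  The field names, thresholds, and example values are illustrative assumptions, not a prescribed reporting format.

```python
# Minimal sketch of a program-level view spanning cost, schedule, financial,
# and technical risk. Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProgramRiskSnapshot:
    cpi: float                 # cost indicator (project level)
    spi: float                 # schedule indicator (project level)
    funding_gap_pct: float     # financial risk: unfunded share of the next FY
    tpm_shortfall_pct: float   # technical risk: shortfall vs. key performance measure

    def flags(self) -> list[str]:
        out = []
        if self.cpi < 0.95: out.append("cost")
        if self.spi < 0.95: out.append("schedule")
        if self.funding_gap_pct > 0.05: out.append("financial")
        if self.tpm_shortfall_pct > 0.10: out.append("technical")
        return out

# Example: cost and technical measures look fine, but schedule and funding do not.
print(ProgramRiskSnapshot(cpi=0.97, spi=0.92, funding_gap_pct=0.08,
                          tpm_shortfall_pct=0.02).flags())  # ['schedule', 'financial']
```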