Ch- Ch- Changes–What I Learned at the NDIA IPMD Meeting and Last Thoughts on POGO DCMA

Hot Topics at the National Defense Industrial Association’s Integrated Program Management Division (NDIA-IPMD)

For those of you who did not attend, or who have a passing interest in what is happening in the public sphere of DoD acquisition, the NDIA IPMD meeting held last week was of great importance. Here are the highlights.

Electronic Submission of Program Management Information under the New Proposed DoD Schema

Those who have attended meetings in the past, and who read this blog, know where I stand on this issue, which is to capture all of the necessary information that provides a full picture of program and project performance among all of its systems and subsystems, but to do so in an economically feasible manner that reduces redundancy, reduces data streams, and improves timeliness of submission. Basic information economics states that a TB of data is only incrementally more expensive than a MB of data, the difference being largely the cost of the electricity involved. Basic experience in IT management demonstrates that automating a process that eliminates touch labor in data production and validation improves productivity and speed.

Furthermore, if a supplier in complex program and project management is properly managing–and has sufficient systems in place–then providing the data necessary for the DoD to establish accountability and good stewardship, to ensure that adequate progress is being made under the terms of the contract, to ensure that contractually required systems that establish competency are reliable and accurate, and to utilize in future defense acquisition planning, should not be a problem. We live in a world of 0s and 1s. What we expect of our information systems is to do the grunt work of handling ever larger volumes of data in providing information. In this scenario the machine is the dumb one and the person assessing the significance and context of what is processed into intelligence is the smart one.

The most recent discussions and controversies surrounded the old canard regarding submission at Control Account as opposed to the Work Package level of the WBS. Yes, let’s party like it’s 1997. The other issue was whether cumulative or current data should be submitted. I have issues with both of these items, which continue to arise like bad zombie ideas. You put a stake in them, but they just won’t die.

To frame the first issue, there are some organizations/project teams that link budget to the control account, and others to the work package. So practice is not the determinant, but it speaks to how earned value management (EVM) is applied. The receiving organization is going to want the lowest level of reporting where there is foot-and-tie not only to budget, but to other systems. This is the rub.

I participated in a still-unpublished study for DoD which indicated that if one uses earned value management (EVM) exclusively to manage, it doesn’t matter. You get a bit more fidelity and early warning at the work package level, but not much.

But note my conditional.

No one exclusively uses EVM to manage projects and programs. That would be foolish and seems to be the basis of the specious attack on the methodology when I come upon it, especially by baby PMs. The discriminator is the schedule, and the early warning is found there. The place where you foot-and-tie schedule to the WBS is at the work package level. If you are restricted to the control account for reporting you have a guessing game–and gaming of the system–given that there will be many schedule activities to one control account.
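To make the fidelity argument concrete, here is a minimal, hypothetical sketch in Python. The WBS identifiers and values are invented for illustration; the point is simply that offsetting variances at the work package level can net to near zero when rolled up to the control account, which is where the early warning gets lost.

```python
# Hypothetical roll-up: several work packages (and their schedule activities)
# map to a single control account. Offsetting variances cancel at the higher level.

work_packages = {
    "CA-1.2": [  # one control account with three work packages beneath it
        {"wp": "WP-1.2.1", "bcws": 100.0, "bcwp": 60.0},   # badly behind plan
        {"wp": "WP-1.2.2", "bcws": 100.0, "bcwp": 135.0},  # running ahead
        {"wp": "WP-1.2.3", "bcws": 100.0, "bcwp": 105.0},
    ]
}

for ca, wps in work_packages.items():
    for wp in wps:
        sv = wp["bcwp"] - wp["bcws"]  # schedule variance at the work package level
        print(f'{wp["wp"]}: SV = {sv:+.1f}')
    ca_sv = sum(wp["bcwp"] for wp in wps) - sum(wp["bcws"] for wp in wps)
    print(f"{ca}: SV = {ca_sv:+.1f}  <- the lagging work package is invisible here")
```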

Furthermore, the individual reviewing EVM and schedule will want to ensure that the Performance Measurement Baseline (PMB) and the Integrated Master Schedule (IMS) were not constructed in isolation from one another. There needs to be evidence that the work planned under the cost plan matches the work in time.

Regarding cumulative against current dollar submission, the issue is one of accuracy. First, consecutive cumulative submissions require that the prior figure be subtracted from the latest to derive current-period values, which introduces rounding errors, errors that are exacerbated if reporting is restricted to the control account level. NDIA IPMD had a long discussion on the intrinsic cumulative-to-cumulative error at a meeting last year, which was raised by Gary Humphreys of Humphreys & Associates. Second, cumulative submissions often hide retroactive changes. Third, to catch items in my second point, one must execute cross checks for different types of data, rather than getting a dump from the system of record and rolling up. The more operations and manipulations made to data, the harder it becomes to ensure fidelity and get everyone to agree on one trusted source, that is, reading off of the same page.
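A short sketch of the arithmetic involved, using invented figures: when only rounded cumulative values are submitted, the derived current-period values absorb both the rounding error and any retroactive restatement of a prior period, which is exactly the fidelity problem described above.

```python
# Invented cumulative BCWP figures as held in a supplier's system of record ($K),
# and the same figures rounded to whole thousands for monthly submission.
actuals_cum = [1234.56, 2456.78, 3789.01, 5123.45]
reported_cum = [round(v) for v in actuals_cum]

# Deriving current-period values by differencing consecutive cumulative reports.
true_current = [b - a for a, b in zip([0] + actuals_cum, actuals_cum)]
derived_current = [b - a for a, b in zip([0] + reported_cum, reported_cum)]

for month, (t, d) in enumerate(zip(true_current, derived_current), start=1):
    print(f"month {month}: true={t:8.2f}  derived={d:8.2f}  error={d - t:+.2f}")

# A retroactive change to an earlier period (e.g., a reclassified charge) never
# appears as such: it simply shifts into the next period's derived delta.
reported_cum_adjusted = reported_cum[:]
reported_cum_adjusted[1] -= 100   # quietly restate month 2 downward
delta_month3 = reported_cum_adjusted[2] - reported_cum_adjusted[1]
print(f"month 3 delta after restatement: {delta_month3}  (was {reported_cum[2] - reported_cum[1]})")
```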

When I was asked about my opinion on these issues, my response was twofold. First, as the head of a technology company it doesn’t matter to me. I can handle data in accordance with the standard DoD schema in any way specified. Second, as a former program management type and as an IT professional with an abiding dislike of inefficient systems, the restrictions proposed are based on the limitations of proprietary systems in use by suppliers that, in my opinion, need to be retired. The DoD and A&D market is somewhat isolated from other market pressures, by design. So the DoD must artificially construct incentives and an ecosystem that pushes businesses (and its own organizations) to greater efficiency and innovation. We don’t fly F-4s anymore, so why continue to use IT business systems designed in 1997 that are solely supported by sunk-cost arguments and rent seeking behavior?

Thus, my recommendation was that it was up to the DoD to determine the information required to achieve their statutory and management responsibilities, and it is up to the software solution providers to provide the, you know, solutions that meet them.

I was also asked if I agreed with another solution provider that the software companies should have another go at the schema prior to publication. My position was consistent in that regard: we don’t work the refs. My recommendation to OSD, given that I was in a similar position regarding an earlier initiative along the same lines back when I wore a uniform, is to explore the art of the possible with suppliers. The goals are to reduce data streams, eliminate redundancy, and improve speed. Let the commercial software guys figure out how to make it work.

Current projection is three to four weeks before a final schema is published. We will see if the corresponding documentation will also be provided simultaneously.

DCMA EVAS – Data-driven Assessment and Surveillance

This is a topic on which I cannot write without a conflict of interest, since the company where I hold my day job is the solution provider, so I will make this short and sweet.

First, it was refreshing to see three Hub leads at the NDIA IPMD meeting. These are the individuals in the field who understand the important connection between government acquisition needs and private industry capabilities in the logistics supply chain.

Second, despite a great deal of behind-the-scenes speculation and drama among competitors in the solution provider market, DCMA affirmed that it had selected its COTS solution and that it was working with that provider to work out any minor issues now that Milestone B has been certified and they are into full implementation.

Third, DCMA announced that the Hubs would be collecting information and that the plan for a central database for EVAS that would combine other DoD data has been put on hold until management can determine the best course for that solution.

Fourth, the Agency announced that the first round of using the automated metrics would begin later this month and that the effort would continue into October.

Fifth, the Agency tamped down some of the fear related to this new process, noting that tripping metrics may simply indicate that additional attention was needed in that area, including those cases where it simply needed to be documented that the supplier’s System Description deviated from the standard indicator. I think this will be a process of familiarization as the Hubs move out with implementation.

DCMA EVAS, in my opinion, is a significant reform of the way the agency does business. It not only drives process and organizational improvement within the agency by eliminating uneven and arbitrary determinations of contract non-compliance (as well as improvements in data management), but opens a dialogue regarding systems improvement, driving similar changes to the supplier base.

NDAA Section 804

There were a couple of public discussions on NDAA Section 804 which, if you are not certain what it is, you can read about at this link. Having kept track of developments in the NDAA for this coming fiscal year, what I can say is that the final language of Section 804 doesn’t say what many thought it said when it was in draft.

What it doesn’t authorize is a broad authority to overrule other statutory requirements for government accountability, oversight, and reporting, including the requirement for earned value management on large programs. This statement is supported by both OSD speakers who addressed the issue in the meeting.

The purpose of Section 804 was to provide the ability to quickly prototype and field new technologies in the wake of 9/11, particularly as it related to identifying, tracking, and preventing terrorist acts. But the rhetoric behind this section, which was widely touted by elected representatives long before the final version of the current NDAA had been approved, implied a broader mandate for more prosaic acquisitions. My opinion, having seen programs like this before (think of the Navy A-12 program), is that if people use this authority too broadly, we will be discussing more significant issues than the minor DCMA program that ends this blog post.

Thus, the message coming from OSD is that there is no carte blanche get-out-of-jail card for covering yourself under Section 804 and deciding that lack of management is a substitute for management, and that failure to obtain timely and necessary program performance information does not mean that it cannot be forensically traced in an audit or investigation, especially if things go south. A word to the wise, while birds of a feather catch cold.

DoD Reorganization

The Department of Defense has been undergoing reorganization and the old Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics (OUSD(AT&L)) has been broken up and reassigned largely to a new Undersecretary of Defense for Acquisition and Sustainment (USD(A&S)).

As a result of this reorganization there were other points indicated:

a. Day-to-day program management will be pushed to the military services. No one really seems to understand what this means. The services already have PMOs in place that do day-to-day management. The policy part of the old AT&L will be going intact to A&S, as will program analysis. The personnel cuts that are earmarked for some DoD departments were largely avoided in the reorganization, except at the SES level, which I will address below.

b. Other Transaction Authority (OTA) and Section 804 procurements are getting a lot of attention, but they seem ripe for abuse. I was actually a member of a panel regarding Acquisition Reform at the NDIA Training and Simulation Industry Symposium held this past June in Orlando. I thought the focus would be on the recommendations from the 809 panel but, instead, it turned out to be on OTA and Section 804 acquisitions. What impressed me the most was that even companies that had participated in these types of contracting actions felt that they were unnecessarily loosely composed, which would eventually impede progress upon review and audit of the programs. The consensus in discussions with the audience and other panel members was that the FAR and DFARS already possess sufficient flexibility if Contracting Officers are properly trained to know how to construct such a requirement and still stay between the lines, absent a serious operational need that cannot be met through normal acquisition methods. Furthermore, OTA SME knowledge is virtually non-existent. Needless to say, things like Nunn-McCurdy and new Congressional reporting requirements in the latest NDAA still need to be met.

c. The emphasis in the department, it was announced, would also shift to a focus on portfolio analysis, but–again–no one could speak to exactly what that means. PARCA and the program analysis personnel on the OSD staffs provide the SecDef with information on the entire portfolio of major programs. That is why there is a DoD Central Repository for submission of program data. If the Department is looking to apply some of the principles that provide flexibility in identifying risks and tradeoffs across the portfolio, then that would be most useful and a powerful tool in managing resources. We’ve seen efforts like Cost as an Independent Variable (CAIV) and other tradeoff methods come and go; it would be nice if the department would reward the identification of programmatic risk early and often in program go/no-go/tradeoff/early production decisions.

d. To manage oversight of over $7 trillion in programs, PARCA’s expense is $4.5M. The OSD personnel made this point, I think, to emphasize the return on investment of their role regarding oversight, risk identification, and root cause analysis, with an eye to efficiency in the management of DoD programs. This is like an insurance policy and a built-in DoD change agent. But from my outside reading, there was a move by Representative Mac Thornberry, who is Chairman of the House Armed Services Committee, to hollow out OSD by eliminating PARCA and much of the AT&L staffs. I had discussions with the staffs of other Congressional members of the Armed Services Committee when this was going on, and the cause seemed to be a lack of understanding of the extent to which DoD has streamlined its acquisition business systems, how key PARCA, DCMA, and the analysis and assessment staffs are to the acquisition ecosystem, and how they foot and tie to the service PEOs and PMOs. Luckily for the taxpayer, it seems that Senate Armed Services members were aware of this and took the language out during markup.

Other OSD Business — Reconciling FAR/DFARS, and Agile

a. DoD is reconciling differences between overlapping FAR and DFARS clauses. Given that DoD is more detailed and specific in identifying reporting and specifying oversight of contracts by dollar threshold, complexity, risk, and contract type, it will be interesting to see how this plays out over time. The example given by Mr. John McGregor of OSD was the difference between the FAR and DFARS clauses regarding the application of earned value management (EVM). The FAR clause is more expansive and cut-and-dried. The DFARS clause distinguishes the level of EVM reporting and oversight (and surveillance) that should apply based on more specific criteria regarding the nature of the program and the contract characteristics.

b. The issue of Agile, and how it somehow excuses dispensing with estimating, earned value management, risk management, and other proven program management controls, was addressed. This contention is, of course, poppycock, and Glen Alleman on his blog has written extensively about this zombie idea. The 809 Panel seemed to have been bitten by it, though, where its members were convinced that Agile is a program or project management method, and that there is a dichotomy between Agile and the use of EVM. The salient point in critiquing this assertion was effectively made by the OSD speakers. They noted that they attend many forums and speak to various groups about Agile, and that there is virtually no consensus about what exactly it is and what characteristics define it, but everyone pretty much recognizes it as an approach to software development. Furthermore, EVM is used today on programs that at least partially use Agile software development methodology, and it is used very effectively. It’s not like crossing the streams.

Gary Bliss, PARCA – Fair Winds and Following Seas

The blockbuster announcement at the meeting was the planned retirement of Gary Bliss, who has been and is the Director of PARCA, on 30 September 2018. This was due to the cut in billets at the Senior Executive Service (SES) level. He will be missed.

Mr. Bliss has transformed the way that DoD does business and he has done so by building bridges. I have been attending NDIA IPMD meetings (and under its old PMSC name) for more than 20 years. Over that time, from when I was near the end of my uniformed career in attending the government/joint session, and, later, when I attended full sessions after joining private industry, I have witnessed a change for the better. Mr. Bliss leaves behind an industry that has established collaboration with DoD and federal program management personnel as its legacy for now and into the future.

Before the formation of PARCA all too often there were two camps in the organization, which translated to a similar condition in the field in relation to PMOs and oversight agencies, despite the fact that everyone was on the same team in terms of serving the national defense. The issue, of course, as it always is, was money.

These two camps would sometimes break out into open disagreement and expressed disparagement of each other. Mr. Bliss brought in a gentleman by the name of Gordon Kranz and together they opened a dialogue in meeting PARCA’s mission. This dialogue has continued with Mr. Kranz’s replacement, John McGregor.

The dialogue has revolved around finding root causes for long delays between development and production in program management, recommending ways of streamlining processes and eliminating impediments, rooting out redundancy, inefficiency, and waste throughout the program and project management supply chain, communicating with industry so that it understands the reasons for particular DoD policies and procedures, obtaining feedback on the effects of those decisions and how they can be implemented to avoid arbitrariness, and providing certainty to those who would seek to provide supplies and services to the national defense–especially innovative ones–in defining the rules of engagement. The focus was on collaborative process improvement–and it has worked. Petty disputes occasionally still arise, but they are the exception to the rule.

Under his watch Mr. Bliss established a common trusted data stream for program management data, and forged policies that drove process improvement from the industrial base through the DoD program office. This was not an easy job. His background as an economist and his long distinguished career in the public service armed him well in this regard. We owe him a debt of gratitude.

We can all hope that the next OSD leadership that assumes that role will be as effective and forward leaning.

Final Thoughts on DCMA report revelations

The interest in my last post on the DCMA internal report regarding the IWMS project was very broad, but the comments I received expressed some confusion about what I took away as the lessons learned in my closing paragraphs. The reason for this was the leaked nature of the reports, which alleged breaches of federal statute and other administrative and professional breaches, some of a reputational nature. They are not the final word, and for anyone to draw final conclusions from leaked material of that sort would be premature. But here are some initial lessons learned:

Lesson #1: Do not split requirements and game the system to fall below financial thresholds to avoid oversight and management approval. This is a Contracts 101 issue and everyone should be aware of it.

Lesson #2: Ensure checks and balances in the procurement process are established and maintained. Too much authority, under the moniker of acquisition reform and “flexibility”, has been given to CIOs and PMs to make decisions that require collaboration, checks, and internal oversight. In normative public sector acquisition environments the end-user does not get to select the contractor, the contract type, the funding sources, or the acquisition method involving fair and open competition–or a deviation from it. Nor, having directed the procurement, should the same individual(s) be allowed to certify receipt and acceptance. Establishing checks and balances without undermining operational effectiveness requires a subtle hand, in which different specialists working within a matrix organization, with differing chains of command and responsibility, ensure that there is integrity in the process. All members of this team can participate in planning and collaboration for the organization’s needs. It appears, though it is not completely proven, that some of these checks and balances did not exist. We do know from the inspections that Contracting Officer’s Representatives (CORs) and Contracting Officer’s Technical Representatives (COTRs) were not appointed for long-term contracts in many cases.

Lesson #3: Don’t pre-select a solution from a particular supplier. Instead, understand the organization’s current and future needs and put that expression into a set of salient characteristics, a performance work statement, or a statement of work. This document is then shared with the marketplace through a formalized and documented process of discovery, such as a request for information (RFI).

Lesson #4: I am not certain whether the reports indicate that a legal finding of the appropriate color of money is or is not a sufficient defense, but they seem to. This can be a controversial topic within an organization and oftentimes yields differing opinions. Sometimes the situation can be corrected with the substitution of proper money for that fiscal year by higher authority. Some other examples of Anti-Deficiency Act (ADA) violations can be found via this link, published by the Defense Comptroller. I’ve indicated from my own experience how, going from one activity to another as a uniformed Navy Officer, I ran into Comptrollers with different opinions of the appropriate color of money for particular types of supplies and services at the same financial thresholds. They can’t all have been correct. I guess I am fortunate that over 23 years–18 of them as a commissioned Supply Corps Officer and five before that as an enlisted man–I never ran into an ADA violation in any transaction in which I was involved. The organizations I was assigned to had checks and balances to ensure there was not a statutory violation which, I may add, is a federal crime. Thus, no one should cavalierly make this assertion as if it were simply an administrative issue. But not everyone in the chain is responsible, unless misconduct or criminal behavior across that chain contributed to the violation. I don’t see that in these reports. Systemic causes require systemic solutions and education.

Note that all of these lessons learned are taught as basic required knowledge in acquisition classes and in regulation. I also note that, in the reports, there are facts of mitigation. It will be interesting to see what eventually comes out of this.

Back to School Daze Blogging–DCMA Investigation on POGO, DDSTOP, $600 Ashtrays, and Epistemic Sunk Costs

Family summer visits and trips are in the rear view–as well as the simultaneous demands of balancing the responsibilities of a, you know, day job–and so it is time to take up blogging once again.

I will return to my running topic of Integrated Program and Project Management in short order, but a topic of more immediate interest concerns the article that appeared on the website for pogo.org last week entitled “Pentagon’s Contracting Gurus Mismanaged Their Own Contracts.” Such provocative headlines are part and parcel of organizations like POGO, which have an agenda that seems to cross the line between reasonable concern and unhinged outrage, with a tinge of conspiracy mongering. But the content of the article itself is accurate and well written, if also somewhat rife with overstatement, so I think it useful to unpack what it says and what it means.

POGO and Its Sources

The article draws on three sources regarding an internal Defense Contract Management Agency (DCMA) IT project known as the Integrated Workflow Management System (IWMS). These consist of a September 2017 preliminary investigative report, an April 2018 internal memo, and a draft of the final report.

POGO begins the article by stating that DCMA administers over $5 trillion in contracts for the Department of Defense. The article erroneously asserts that it also negotiates these contracts, apparently not understanding the process of contract oversight and administration. The cost of IWMS was apparently $46.6M and the investigation into the management and administration of the program was initiated by the then-Commander of DCMA, Lieutenant General Wendy Masiello, shortly before she retired from the government in May 2017.

The implication here, given the headline, seems to be that if there is a problem in internal management within the agency, then that would translate into questioning its administration of the $5 trillion in contract value. I view it differently, given that I understand that there are separate lines of responsibility in the agency that do not overlap, particularly in IT. Of the $46.6M there is a question of whether $17M in value was properly funded. More on this below, but note that, to put things in perspective, $46.6M is 0.000932% of DCMA’s oversight responsibility. This is aside from the fact that the comparison is not quite correct, given that the CIO had his own budget, which was somewhat smaller and unrelated to the $5 trillion figure. But I think it important to note that POGO’s headline and the introduction of figures, while sounding authoritative, are irrelevant to the findings of the internal investigation and draft report. This is a scare story using scare numbers, particularly given the lack of context. I had some direct experience in my military career with issues inspired by POGO’s founders’ agenda that I will cover below.

In addition to the internal investigation on IWMS, there was also an inspector general (IG) investigation of thirteen IT services contracts that resulted in what can only be described as pedestrian procedural discrepancies that are easily correctable, despite the typically overblown language found in most IG reports. Thus, I will concentrate in this post on the more serious findings of the internal investigation.

My Own Experience with DCMA

A note at this point on full disclosure: I have done business with and continue to do business with DCMA, both as a paid supplier of software solutions, and have interacted with DCMA personnel at publicly attended professional forums and workshops. I have no direct connection, as far as I am aware, to the IWMS program, though given that the assessment relates to the IT organization, it is possible that there was an indirect relationship. I have met Lieutenant General Masiello and dealt with some of her subordinates not only during her time at DCMA, but also in some of her previous assignments in the Air Force. I always found her to be an honest and diligent officer and respect her judgment. Her distinguished career speaks for itself. I have talked on the telephone to some of the individuals mentioned in the article on unrelated matters, and was aware of their oversight of some of my own efforts. My familiarity with all of them was both businesslike and brief.

As a supplier to DCMA my own contracts and the personnel that administer them were, from time-to-time, affected by the fallout from what I now know to have occurred. Rumors have swirled in our industry regarding the alleged mismanagement of an IT program in DCMA, but until the POGO article, the reasons for things such as a temporary freeze and review of existing IT programs and other actions were viewed as part and parcel of managing a large organization. I guess the explanation is now clear.

The Findings of the Investigation

The issue at hand largely surrounds the method of source selection, which may have constituted a conflict of interest, and the type of money that was used to fund the program. In reading the report I was reminded of what Glen Alleman recently wrote in his blog post entitled “DDSTOP: The Saga Continues.” The acronym DDSTOP means: Don’t Do Stupid Things On Purpose.

There is actually an economic behavioral principle for DDSTOP that explains why people make and double down on bad decisions and irrational beliefs. It is called epistemic sunk cost. It is what causes people to double down in gambling (to the great benefit of the house), to persist in mistaken beliefs, and, as stated in the link above, to “persist with the option which they have already invested in and resist changing to another option that might be more suitable regarding the future requirements of the situation.” The findings seem to document a situation that fits this last description.

In going over the findings of the report, it appears that the IWMS program violated the following:

a. Contractual efforts in the program that were appropriate for the use of Research, Development, Test and Evaluation (RDT&E) funds were instead funded with O&M (Operations and Maintenance) funds. This distinction is what the U.S. Department of Defense calls “color of money.”

b. Amounts were expended on contract that exceeded the authorized funding documents, a finding largely based on the determinations regarding the appropriate color of money. This would constitute a serious violation known as an Anti-Deficiency Act violation which, in layman’s terms, is designed to punish public employees for the misappropriation of government funds.

c. Expended amounts of O&M that exceeded the authorized levels.

d. Poor or non-existent program management and cost performance management.

e. Inappropriate contracting vehicles that, taken together, sidestepped more stringent oversight, aside from the award of a software solutions contract to the same company that defined the agency’s requirements.

Some of these findings are procedural, and some, particularly the Anti-Deficiency Act (ADA) violations, are serious. In the Contracting Officer’s rulebook, you can withstand pedestrian procedural and administrative findings that are part and parcel of running an intensive contracting organization that acquires a multitude of supplies and services under deadline. But an ADA violation is the deadly one, since it is a violation of statute.

As a result of these findings, the recommendation is for DCMA to lose acquisition authority above the DoD micro-purchase level ($10,000). Organizationally and procedurally, this is a significant and mission-disruptive recommendation.

The Role and Importance of DCMA

DCMA performs an important role in contract compliance and oversight to ensure that public monies are spent properly and for the intended purpose. They perform this role mostly on contracts that are negotiated and entered into by other agencies and the military services within the Department of Defense, where they are assigned contract administration duties. Thus, the fact that DCMA’s internal IT acquisition systems and procedures were problematic is embarrassing.

But some perspective is necessary because there is a drive by some more extreme elements in Congress and elsewhere that would like to see the elimination of the agency. I believe that this would be a grave mistake. As John F. Kennedy is quoted as having said: “You don’t tear your fences down unless you know why they were put up.”

For those of you who were not around prior to the formation of DCMA or its predecessor organization, the Defense Contract Management Command (DCMC), it is important to note that the formation of the agency is a result of acquisition reform. Prior to 1989 the contract administration services (CAS) capabilities of the military services and various DoD offices varied greatly in capability, experience, and oversight effectiveness. Some of these duties had been assigned to what is now the Defense Logistics Agency (DLA), but major acquisition contracts remained with the Services.

For example, when I was on active duty as a young Navy Supply Corps Officer as part of the first class that was to be the Navy Acquisition Corps, I was taught cradle-to-grave contracting. That is, I learned to perform customer requirements development, economic analysis, contract planning, development of a negotiating position, contract negotiation, and contract administration–soup to nuts. The expense involved in developing and maintaining the skill set required of personnel to maintain such a broad-based expertise is unsustainable. For analogy, it is as if every member of a baseball club must be able to play all nine positions at the same level of expertise; it is impossible.

Furthermore, for contract administration a defense contractor would have contractual obligations for oversight in San Diego, where I was stationed, that were different from contracts awarded in Long Beach or Norfolk or any of the other locations where a contracting office was located. In addition, the military services, having their own organizational cultures, provided additional variations that created a plethora of unique requirements that added cost, duplication, inconsistency, and inter-organizational conflict.

This assertion is more than anecdotal. A series of studies were commissioned in the 1980s (the findings of which were subsequently affirmed) to eliminate duplication and inconsistency in the administration of contracts, particularly major acquisition programs. Thus, DCMC was first established under DLA and subsequently became its own agency. Having inherited many of the contracting field offices, the agency has struggled to consolidate operations so that CAS is administered in a consistent manner across contracts. Because contract negotiation and program management still reside in the military services, there is a natural point of conflict between the services and the agency.

In my view, this conflict is a healthy one, as all power in the hands of a single individual, such as a program manager, would lead to more fraud, waste, and abuse, not less. Internal checks and balances are necessary in proper public administration, where some efficiency is sacrificed to accountability. It is not just the goal of government to “make the trains run on time”, but to perform oversight of the public’s money so that there is accountability in its expenditure, and integrity in systems and procedures. In the case of CAS, it is to ensure that what is being procured actually gets delivered in conformance to the contract terms and conditions designed to reduce the inherent risk in complex acquisition programs.

In order to do its job effectively, DCMA requires innovative digital systems to allow it to perform its CAS function. As a result, the agency must also possess an acquisition capability. Given the size of the task at hand in performing CAS on over $5 trillion of contract effort, the data involved is quite large and the personnel performing the function are geographically distributed. The inevitable comparisons to private industry will arise, but few companies in the world have to perform this level of oversight on such a large economic scale, which includes contracts comprising every major supplier to the U.S. Department of Defense, involving detailed knowledge of the management control systems of those companies that receive the taxpayer’s money. Thus, this is a uniquely difficult job. When one understands that in private industry the standard failure rate of IT projects is more than 70 percent, then one cannot help but be unimpressed by these findings, given the challenge.

Assessing the Findings and Recommendations

There is a reason why internal oversight documents of this sort stay confidential–it is because these are preliminary/draft findings and there are two sides to every story, which may lead to revisions. In addition, reading these findings without the appropriate supporting documentation can lead one to the wrong impressions and conclusions. But it is important to note that this was an internally generated investigation. The checks and balances of management oversight that should occur did occur. But let’s take a close look at what the reports indicate so that we can draw some lessons. I also need to mention here that POGO’s conflation of the specific issues in this program as a “poster child” for cost overruns and schedule slippage displays a vast ignorance of DoD procurement systems on the part of the article’s author.

Money, Money, Money

The core issue in the findings revolves around the proper color of money, which seems to hinge on the definition of Commercial-Off-The-Shelf (COTS) software and the effort that was expended using the two main types of money that apply to the core contract: RDT&E and O&M.

Let’s take the last point first. It appears that the IWMS effort consisted of a combination of COTS and custom software. This would require acquisition, software familiarization, and development work. It appears that the CIO was essentially running a proof-of-concept to see what would work, and then incrementally transitioned to developing the solution.

What is interesting is that there is currently an initiative in the Department of Defense to do exactly what the DCMA CIO did as part of his own initiative in introducing a new technological approach to create IWMS. It is called Other Transaction Authority (OTA). The authority in its current form was not authorized until the 2016 NDAA and is given specific statutory authority under 10 U.S.C. 2371b. This doesn’t excuse the actions that led to the findings, but it is interesting that the CIO, in taking an incremental approach to finding a solution, also did exactly what was recommended in the 2016 GAO report that POGO references in their article.

Furthermore, as a career Navy Supply Corps Officer, I have often gotten into esoteric discussions in contracts regarding the proper color of money. Despite the assertion of the investigation, there is a lot of room for interpretation in the DoD guidance, not to mention a stark contrast in interpreting the proper role of RDT&E and O&M in the procurement of business software solutions.

When I was on the NAVAIR staff and at OSD I ran into the difference in military service culture where what Air Force financial managers often specified for RDT&E would never be approved by Navy financial managers where, in the latter case, they specified that only O&M dollars applied, despite whether development took place. Given that there was an Air Force flavor to the internal investigation, I would be interested to know whether the opinion of the investigators in making an ADA determination would withstand objective scrutiny among a panel of government comptrollers.

I am certain that, given the differing mix of military and civil service cultures at DCMA–and the mixed colors of money that applied to the effort–legal review was sought to resolve the issue. One of the principles of law is that when you rely upon legal advice to take an action, you have a defense, unless your state of mind and the corollary actions that you took indicate that you manipulated the system to obtain a result that shows you intended to violate the law. I just do not see that here, based on what has been presented in the materials.

It may well be that an inadvertent ADA violation occurred by default because of an improper interpretation of the use of the monies involved. This does not rise to the level of a scandal. But going back to the confusion that I have faced from my own experiences on active duty, I certainly hope that this investigation is not used as a precedent to review all contracts under the approach of accepting a post-hoc alternative interpretation by another individual who just happens to be an inspector long after a reasonable legal determination was made, regardless of how erroneous the new expert finds the opinion. This is not an argument against accountability, but absent corruption or criminal intent, a legal finding is a valid defense and should stand as the final determination for that case.

In addition, this interpretation of RDT&E vs. O&M relies upon an interpretation of COTS. I daresay that even those who throw that term around, and who are familiar with the FAR, do not fully understand what constitutes COTS when the line between adaptability and point solutions is being blurred by new technology.

Where the criticism is very much warranted are those areas where the budget authority would have been exceeded in any event–and it is here that the ADA determination is most damning. It is one thing to disagree on the color of money that applies to different contract line items, but it is another to completely lack financial control.

Part of the reason for the lack of financial control was the absence of good contracting practices and of disciplined program management.

Contracts 101

While I note that the CIO took an incremental approach to IWMS–what a prudent manager would seem to do–what was lacking was a cohesive vision and a well-informed culture of compliance to acquisition policy that would avoid even the appearance of impropriety and favoritism. Under the OTA authority that I reference above as a new aspect of acquisition reform, the successful implementation of a proof-of-concept does not guarantee the incumbent provider continued business–salient characteristics for the solution are publicized and the opportunity advertised under free and open competition.

After all, everyone has their favorite applications and, even inadvertently, an individual can act improperly because of selection bias. The procurement procedures are established to prevent abuse and favoritism. As a solution provider I have fumed quite often where a selection was made without competition based on market surveys or use of a non-mandatory GSA contract, which usually turn out to be a smokescreen for pre-selection.

There are two areas of fault on IWMS from the perspective of acquisition practice, and another in relation to program management.

These are the initial selection of Apprio, which had laid out the initial requirements and whose solution subsequently lacked the required integration functionality, and then the selection of Discover Technologies under a non-mandatory GSA Blanket Purchase Agreement (BPA) contract in a sole source action. Furthermore, the contract type was not appropriate to the task at hand, and the arbitrary selection of Discover precluded the agency from finding a better solution more fit to its needs.

The use of the GSA BPA, however, allowed managers to essentially split the requirements to stay below more stringent management guidelines–an obvious violation of acquisition regulation that will get you removed from your position. This leads us to what I think is the root cause of all of these clearly avoidable errors in judgment.

Program Management 101

Personnel in the agency familiar with the requirements to replace the aging procurement management system understood from the outset that the total cost would probably fall somewhere between $20M and $40M. Yet every effort was made to stay below oversight thresholds by splitting requirements, and no programmatic approach was applied to a clearly complex undertaking.

This would have required the agency to take the steps to establish an acquisition strategy, open the requirement based on a clear performance work statement to free and open competition, and then to establish a program management office to manage the effort and to allow oversight of progress and assessment of risks in a formalized environment.

The establishment of a program management organization would have prevented the lack of financial control, and would have put in place sufficient oversight by senior management to ensure progress and achievement of organizational goals. In a word, a good deal of the decision-making was based on doing stupid things on purpose.

The Recommendations

In reviewing the recommendations of the internal investigation, I think my own personal involvement in a very similar issue from 1985 will establish a baseline for comparison.

As I indicated earlier, in the early 1980s, as a young Navy commissioned officer, I was part of the first class of what was to be the Navy Acquisition Corps, stationed at the Supply Center in San Diego, California. I had served as a contracting intern and, after extensive education through the University of Virginia Darden School of Business, the extended Federal Acquisition Regulation (FAR) courses that were given at the time at Fort Lee, Virginia, and coursework provided by other federal acquisition organizations and colleges, I attained my warrant as a contracting officer. I also worked on acquisition reform issues, some of which were eventually adopted by the Navy and DoD.

During this time NAS Miramar was the home of Top Gun. In 1984 Congressman Duncan Hunter (the elder, not the currently indicted junior of the same name, though from the same San Diego district), inspired by news of a $7,600 coffee maker and a $435 hammer publicized by the founders of POGO, was given documents by a disgruntled employee at the base regarding the acquisition of replacement E-2C ashtrays that cost $300 each. He presented them to the Base Commander, which launched an investigation.

I served on the JAG investigation under the authority of the Wing Commander regarding the acquisitions and then, upon the firing of virtually the entire chain of command at NAS Miramar, which included the Wing Commander himself, became the Officer-in-Charge of Supply Center San Diego Detachment NAS Miramar. Under Navy Secretary Lehman’s direction I was charged with determining the root cause of the acquisition abuses and given 60-90 days to take immediate corrective action and clear all possible discrepancies.

I am not certain who initiated the firings of the chain of command. From talking with contemporaneous senior personnel at the time it appeared to have been instigated in a fit of pique by the sometimes volcanic Secretary of Defense Caspar Weinberger. While I am sure that Secretary Weinberger experienced some emotional release through that action, placed in perspective, his blanket firing of the chain of command, in my opinion, was poorly advised and counterproductive. It was also grossly unfair, given what my team and I found as the root cause.

First of all, the ashtray was misrepresented in the press as a $600 ashtray because, during the JAG investigation, I had sent a sample ashtray to the Navy industrial activity at North Island with a request to tell me what the fabrication of one ashtray would cost and to provide the industrial production curve that would reduce the unit price to a reasonable level. The figure of $600 was the cost to fabricate one. A “whistleblower” at North Island took this slice of information out of context and leaked it to the press. So the $300 ashtray, which was bad enough, became the $600 ashtray.

Second, the disgruntled employee who gave the files to Congressman Hunter had been laterally assigned out of her position as a contracting officer by the Supply Officer because of the very reason that the pricing of the ashtray was not reasonable, among other unsatisfactory performance measures that indicated that she was not fit to perform those duties.

Third, there was a systemic issue in the acquisition of odd parts. For some reason there was an ashtray in the cockpit of the E-2C. These aircraft were able to stay in the air an extended period of time. A pilot had actually decided to light up during a local mission and, his attention diverted, lost control of the aircraft and crashed. Secretary Lehman ordered corrective action. The corrective action taken by the squadron at NAS Miramar was to remove the ashtray from the cockpit and store them in a hangar locker.

Fourth, there was an issue of fraud. During an inspection the spare ashtrays were removed and deposited in the scrap metal dumpster on base. The tech rep for the DoD supplier on base retrieved the ashtrays and sold them back to the government for the price to fabricate one, given that the supply system had not experienced enough demand to keep them in stock.

Fifth, back to the systemic issue. When an aircraft is to be readied for deployment there can be no holes representing missing items in the cockpit. A deploying aircraft with this condition is then grounded and a high priority “casualty report” or CASREP is generated. The CASREP was referred to purchasing, which then paid $300 for each ashtray. The contracting officer, however, feeling under pressure from the high priority requisition, did not do due diligence in questioning the supplier on the cost of the ashtray. In addition, given that several aircraft deploy, there were a number of these requisitions, which should have led the contracting officer to look into the matter more closely to determine price reasonableness.

Furthermore, I found that buying personnel were not properly trained, that systems and procedures were not established or enforced, that the knowledge of the FAR was spotty, and that procurements did not go through multiple stages of review to ensure compliance with acquisition law, proper documentation, and administrative procedure.

Note that in the end this “scandal” was born of a combination of systemic issues, poor decision-making, lack of training, employee discontent, and incompetence.

I successfully corrected the issues at NAS Miramar during the prescribed time set by the Secretary of the Navy, worked with the media to instill public confidence in the system, built up morale, established better customer service, reduced procurement acquisition lead times (PALT), recommended necessary disciplinary action where it seemed appropriate, particularly in relation to the problematic employee, recovered monies from the supplier, referred the fraud issues to Navy legal, and turned over duties to a new chain of command.

NAS Miramar procurement continued to do its necessary job and is still there.

What the higher chain of command did not do was to take away the procurement authority of NAS Miramar. It did not eliminate or reduce the organization. It did not close NAS Miramar.

It requires leadership and focus to take effective corrective action to not only fix a broken system, but to make it better while the corrective actions are being taken. As I outlined above, DCMA performs an essential mission. As it transitions to a data-driven approach and works to reduce redundancy and inefficiency in its systems, it will require more powerful technologies to support its CAS function, and the ability to acquire those technologies to support that function.

Take Me To The River, Part 2, Schedule Elements–A Digital Inventory of Integrated Program Management Elements

Recent attendance at various forums to speak has interrupted the flow of this series on IPM elements. Among these venues I was engaged in discussions regarding this topic, as well as the effects of acquisition reform on the IT, program, and project management communities in the DoD and A&D marketplace.

For this post I will restrict the topic to what are often called schedule elements, though that is a nebulous term. Also, one should not conclude that because I am dealing with this topic after cost elements, it is somehow inferior in importance to those elements. On the contrary, planning and scheduling are integral to applying resources and costs and to tracking cost performance, and in our systemic analysis the schedule’s activities, artifacts, and elements are antecedent to cost element considerations.

The Relative Position of Schedule

But the takeaway here is this: under no circumstances should any program or project manager believe that cost and schedule systems represent a dichotomy, nor a hierarchy, of disciplines. They are interdependent and the behavior noted in one will be manifested in the other.

This is important to keep in mind, because the software industry, more than any other, has been responsible for reinforcing and solidifying this (erroneous) perspective. During the first generation of desktop application development, software solutions were built to automate the work of traditional line and staff functions. This made a great deal of sense.

From a sales and revenue perspective, it is easier to sell a limited niche software “tool” to an established customer base that will ensure both quick acceptance and immediate realization of productivity and labor savings. The connection from the purchase to ROI was easily traceable in the time span and at the level of the person performing their workaday tasks.

Thus, solutions were built to satisfy the needs of cost analysts, schedule analysts, systems engineers, cost estimators, and others. Where specific solutions left gaps, spreadsheet applications such as Microsoft Excel were employed to fill them. It was in no one’s interest to go beyond their core competency. Once a dominant incumbent or set of dominant incumbents (a monopoly or oligopoly) inhabited a niche, they employed the usual strategies for “stickiness” to defend territory and raise barriers to new entries.

What was not anticipated by many organizations was the fact that once you automate a function, the nature of the system, if one is to implement the most effective organizational structure, is transformed to conform to the most efficient flow and use of data–and its resulting transformation into information and intelligence. Oftentimes the skill set to use that intelligence does not exist, because the insights and synergy that come from larger, more comprehensive, more credible, and more accurate datasets were not anticipated in adjusting the organizational structure.

This is changing and must change, because the old way of using limited sets of data in the age of big(ger) data that provide a more comprehensive view of business conditions is not tenable. At least, not if a company or organization wants to stay relevant or profitable.

Characteristics and Basic Elements of the Project Schedule

If you were to perform a Google search of project schedule while reading this post, you would find a number of definitions, some of which overlap. For example, the PMBOK defines a schedule as, quite simply, “the planned dates for performing activities and the planned dates for meeting milestones.”

Thus our elements include planned dates, activities, and milestones. But is that all? Under this definition, any kind of plan, from a minor household renovation or upgrade to building an aircraft carrier would contain only these elements.

I don’t think so.

For complex projects and programs, which are the focus of this blog, our definition of a project schedule is a bit more comprehensive. If you go to A Guide for DoD Program Managers mentioned in my last post, you will find even less specificity.

The reason for this is that what we define as a project schedule is part and parcel of the planning phase of a project, which is then further detailed in the time-phased planning elements for execution of the project through its lifespan into production. It is the schedule that ties together all of the disciplines in putting together a project–acquisition, systems engineering, cost estimating, and project performance management.

In attending schedule-focused conferences over the years and in talking to program management colleagues, the constant refrain is that:

a. It is hard to find a good scheduler, and

b. Constructing a schedule is more of an art than a science.

I can only say that this cedes the field to a small cadre of personnel who perform an essential function, but who do so with few objective tests of effectiveness or accountability–until it is too late.

But the reality is quite different from the fuzzy perception of schedule that is often assumed. All critical path method (CPM) schedules describe the same phenomena, though the lexicon will vary based on the specific proprietary application employed.

In government-focused and large commercial projects, the schedule is the heart of planning and execution. In the DoD world it is known as the Integrated Master Schedule (IMS), which utilizes the inherent bottom-up relationships of elements to determine the critical path. The main sources regarding the IMS have a great deal of overlap, but tend either to be aspirational (and unfortunately not prescriptive in defining the basic characteristics of an IMS) or to reflect the “art over science” approach. For those following along, these are the DoD Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide of 21 October 2005, the NAVAIR Integrated Master Schedule (IMS) Guidebook of February 2010, and the NDIA Planning and Scheduling Excellence Guide (PASEG) of 9 March 2016 (unfortunately no current direct link).

The key elements that comprise an IMS, in addition to what we identified under the PMBOK, are that it is a networked schedule consisting of specific durations assigned to specific work tasks that must be accomplished in discrete work packages. In most cases these durations will be derived either by some kind of fixed, manual method or through the optimization algorithm applied by the CPM application. More on this below. These work packages are discrete, meaning that they represent the full scope of the work that must be accomplished during the specified duration for the creation of an end product. Discrete work is distinguished from level of effort (LOE) work, the latter being effort that is always expended, such as administrative and management tasks, that is not directly tied to the accomplishment of an end product.

These work packages are tied together to illustrate antecedent and progressive work through predecessor and successor relationships. Long-term planning activities, which cannot be fleshed out until more immediate work is completed, are set aside as placeholders called planning packages. Each of the elements tracked in the IMS is based on the presentation of established criteria that define completion, events, and specific accomplishments.

The most comprehensive IMSs consist of detailed planning that include resources and elements of cost.

Detailed Elements of the IMS

Given these general elements, the best source of identifying the key elements of detailed schedules is also found in Department of Defense documents. The core document in this case is the Data Item Description for the IMS numbered as DI-MGMT-81650. The latest one is dated March 30, 2005. There are a minimum of 32 data elements, some of these already mentioned and which I will not repeat in this post since they are pretty well listed and identified in the source document.

For those not familiar with these documents, Data Item Descriptions (or DiDs–gotta love acronyms) represent the detailed technical documents for artifacts involved in the management of DoD-related operations. Thus, this provides us with a pretty good inventory of elements to source. But there are others that are implied.

For example, the 81650 DiD identifies an element known as “methodology.” What this means is that each scheduling application has an optimization engine where the true differences in schedule construction and intellectual property reside. Elements that affect these calculations are time-based, duration-based, float, and slack, and those related to resources.

These time-based elements consist of early start, early finish, late start, late finish. Duration-based elements consist of shortest time, longest time, greatest rank weight. An additional element related to schedule float identifies minimum slack. Resources are further delineated by the greatest work content and the greatest cumulative resource content.

I would note that the NDIA PASEG adds some sub-elements to this list that are based on the algorithmic result of the schedule engines and, thus, tends to ignore the antecedent salient elements of validating the optimization engine found above. These additional sub-elements are total float, free float, soft constraints, hard constraints, and–also found in the aforementioned DiD–program, task, and resource calendars.
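To make these scheduling elements less abstract, here is a minimal sketch, in Python, of the forward and backward passes a CPM engine performs to derive early start, early finish, late start, late finish, and total float for a small network. The task names, durations, and relationships are hypothetical, and real scheduling engines layer calendars, constraints, lags, and resource logic on top of this basic calculation.

```python
# Minimal CPM forward/backward pass sketch (illustrative network only).
# Durations are in working days; relationships are finish-to-start with no lag.
tasks = {
    "A": {"duration": 5, "predecessors": []},
    "B": {"duration": 3, "predecessors": ["A"]},
    "C": {"duration": 7, "predecessors": ["A"]},
    "D": {"duration": 2, "predecessors": ["B", "C"]},
}

# Forward pass: early start (ES) and early finish (EF).
es, ef = {}, {}
for name in tasks:  # dict insertion order happens to be topological here
    preds = tasks[name]["predecessors"]
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + tasks[name]["duration"]

project_finish = max(ef.values())

# Backward pass: late finish (LF), late start (LS), then total float.
lf, ls = {}, {}
for name in reversed(list(tasks)):  # reverse topological order for this network
    succs = [s for s in tasks if name in tasks[s]["predecessors"]]
    lf[name] = min((ls[s] for s in succs), default=project_finish)
    ls[name] = lf[name] - tasks[name]["duration"]

for name in tasks:
    total_float = ls[name] - es[name]
    critical = " <- on the critical path" if total_float == 0 else ""
    print(f"{name}: ES={es[name]} EF={ef[name]} LS={ls[name]} "
          f"LF={lf[name]} float={total_float}{critical}")
```

Running the sketch identifies A-C-D as the critical path, with task B carrying four days of float, which is exactly the kind of early warning discussed above.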

Normally, this is where a survey would end–with schedule-specific data elements focused on the details of the schedule. But we’re going to challenge our assumptions a bit more.

Framing Assumptions of Schedules and Programs

The essential document that provides a definition of the term “framing assumption” was published by RAND Corporation in 2014, entitled Identifying Acquisition Framing Assumptions Through Structured Deliberation by Mark V. Arena and Lauren A. Mayer. The definition of a framing assumption is “any explicit or implicit assumption that is central in shaping cost, schedule, or performance expectations.”

As I have explored in my prior post, the use of the term “cost” is a fuzzy one. To some it means earned value management, which measures a small part of the costs of development and ownership of a system. To others it means total cost of ownership. Schedule is an implicit part of this definition, and then we have performance expectations, which I will deal with in a separate post.

But we can apply the concept of framing assumptions in two ways.

The first applies to the assumed purpose of the schedule. Why do we construct one? This goes back to my earlier statement that “…the schedule…ties together all of the disciplines in putting together a project–acquisition, systems engineering, cost estimating, and project performance management.”

For the NDIA PASEG the IMS is a “tool, not just a report” that “provides an ever-changing window into the progress (or lack of it) of current work effort. The strategic mission of the schedule is to point out future risks and opportunities.”

For the NAVAIR IMS Guide the IMS “At a top level…contain(ing) the networked, detailed tasks necessary to ensure successful program execution…” that “capture(s) project tasks and task relationships”, “show(s) the magnitude and how long each task will take”, “show(s) resources, durations, and constraints for each task” and “show(s) the critical path.”

For the DiD 81650 “The Integrated Master Schedule (IMS) is an integrated schedule containing the networked, detailed tasks necessary to ensure successful program execution.”

But the most comprehensive definition that goes to the core of the purpose of an IMS can be found in paragraph 1.2 of the DoD Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide (IMP/IMS Guide). The elements of this purpose are worth transcribing, because if we have a requirement and cannot answer the “So What?” question, that is, if we cannot effectively determine why something must be done, then it probably does not need to be done (or we need to apply rigor in the development of our expertise).

For what the IMP/IMS Guide does is clearly tie the schedule to the programmatic framing assumptions (used in the context in which RAND meant it) from initial acquisition through planning. Thus, the Integrated Master Plan (IMP) is firmly established as an antecedent and intermediate planning process (not merely an artifact or tool), that results in the program R&D execution process.

Taken as a whole, these processes and their resulting artifacts:

a. Provide offerors and acquiring activities with detailed execution planning, organization, and scheduling information that sets realistic expectations for the resulting contract action.

b. Serve as the execution plan for how the supplier will meet the contract’s performance requirements within cost and schedule constraints.

c. Provide a basis for integrating all of the functions involved in development and deployment of the system being acquired and, after award, set the framing assumptions of the program.

d. Provide the basis for determining and assessing progress, identifying risks, determining the basis for contractual award fees and penalties, assessing progress on Key Performance Parameters (KPPs) and Technical Performance Measures (TPMs), determining alternative paths to project completion, and identifying opportunities for innovation and new acquisitions not apparent at the time of the award.

What all of this means is that the Integrated Master Schedule is too important to be left to the master scheduler. Yes, the schedule is a “tool” to those at the most basic tactical level in work execution. Yes, it is also an artifact and record.

But, more importantly, it is the comprehensive notional representation of the project’s or program’s scope, effort, progress, and assessment.

Private and Government-focused Industry Practice

A word has to be said here about the difference between purely private industry practice in managing large projects and programs and the practice of industries focused on public sector acquisition, which skews the discussion in posts like this one.

Among the schedule elements listed earlier is the reference to resources and elements of cost, yet this is an area where standard practice diverges. In private industry the application of resource assignments to specific work is standard practice and found in the IMS.

In companies focused on the public sector and DoD, the practice is to establish a different set of data outside of the schedule to manage resources. Needless to say this creates problems of validation of data across disparate systems related to the lowest level of planning and execution of a project or program. The basis for it, I think, relates to viewing the schedule as a “tool” and not the basis for project execution. This “tool” mindset also allows for separate “earned value engines” that oftentimes do not synchronize with the execution of the schedule, not only undermining the practical value of both, but also creating systems complexity and inefficiency where none need exist.

Another gap found in many areas of public acquisition concerns the development of an integrated master plan antecedent to the integrated master schedule. The cause here, once again, I believe, is viewing the discipline of systems engineering as separate, somehow walled off from the continuing assessment of program execution, though that assumption is not supported by program phasing and milestone planning and achievement.

From the perspective of Integrated Program/Project Management, these considerations cannot be ignored, and so our inventory of essential data elements must include elements from these practices.

But Wait! There’s More!

Most discussions at conferences and professional meetings will usually stop at this point–viewing cost and schedule integration as the essence of IPM–with “cost” limited to EVM. Some will add some “oh by the ways” such as technical performance and risk. I will address these in the next post as well.

But there are also other systems and processes that are relevant to our inventory. But what I have covered thus far in this series should challenge you if you have been paying attention.

I tackled cost first because of the assumptions implicit in equating it with EVM, and then went on to demonstrate that there are other elements of cost that provide a more comprehensive view. This is not to denigrate the value of EVM, since it is an essential process in project management, but to demonstrate that its analytics are not comprehensive and, as with any complex system, require the contribution of additional information, depending on the level and type of work performance and progress being recorded and assessed.

In this post I have tackled the IMS, and have demonstrated that it is not a supplementary process, but central to all other processes and actions being taken in the execution of the project or program. Many times people enter the schedule from an assessment of cost performance–tracing cost drivers to specific schedule activities and then tasks. But this has it backwards, based on the best technology available sometime in the late 1990s.

It is the schedule that brings together all relevant information from our execution and control processes and systems. It seems to me that the first place one goes is the schedule, that the first elements to trace are those related to schedule slippage and unexpected resource consumption, and that only then should these be traced to contract cost impact.

But, of course, there is more–and these other elements may turn out to be of greater consequence than just cost and schedule considerations. More on these in my next post.

In Closing: Battle Rhythm and the Plans of the Day and Week

When I was on active duty in the Navy we planned our days and weeks around a Plan of the Day or Plan of the Week. This is a posted agenda so that the entire ship or command understands the major events that affect its operations. It establishes focus on the main events at hand and fosters communication both laterally and vertically within the chain of command.

As one rises in rank and responsibility it is important to understand the operational tempo of the unit or ship, its systems, and subsystems. This is important in avoiding crisis management. This is known as Battle Rhythm.

Baked into the schedule (assuming proper construction and effective integrated product teaming) are the major events, milestones, and expected achievements of the program or project. Thus, there are events that should be planned around and anticipated on a daily, weekly, biweekly, monthly, quarterly, and major milestone basis.

Given an effective battle rhythm, a PM should never complain about performance and progress indicators “looking into the rear view mirror.” If that is the case, then perhaps the PM should look at the effectiveness and timeliness of the underlying project and program systems. Thus, when a PMO complains of information and intelligence being too late to be actionable, it is actually describing a condition of ineffective, latent, and disjointed information and intelligence systems.

Thus, our next step in our next post is to identify more salient IPM elements that cut to the heart of the matter.

Take Me to the River, Part 1, Cost Elements – A Digital Inventory of Integrated Program Management Elements

In a previous post I recommended a venue focused on program managers to define what constitutes integrated program management. Since that time I have been engaged with thought leaders and influencers in both government and industry, many of whom came to a similar conclusion independently, agree with this proposition, and are working to bring it about.

My own interest in this discussion is from the perspective of maximizing the information ecosystem that underlies and describes the systems known as projects and programs. But what do I mean by this? This is more than a gratuitous question, because oftentimes the information essential to defining project and program performance and behavior is intermixed with, and therefore diluted and obfuscated by, that of the overall enterprise.

Project vs. Program

What I mean by the term project in this context is an organization that is established around a defined effort of fixed duration (a defined beginning and projected end) that is specifically planned and organized for the development and deployment of a particular end item, state, or result, with an identified set of resources assigned and allocated to achieve its goals.

A program is defined as a set of interrelated projects and sub-projects, also of fixed duration, that is specifically planned and organized not only for development and deployment but also for continuing that role through sustainment (including configuration control) of a particular end item, state, or result, with an identified set of resources assigned and allocated to achieve its goals. As such, the program management team is also the first-level life-cycle manager of the end item, state, or result, and participates with other levels of the organization in these activities. (More on life-cycle costs below.)

Note the difference in scope and perspective, though oftentimes we use these terms interchangeably.

For shorthand, a small project of short duration operates at the tactical level of planning. A larger project, which because of size, complexity, duration, and risk approaches the definition of a program, operates at the operational level, as do most programs. Larger and more complex programs that will affect the core framing assumptions of the enterprise align their goals to the strategic level of planning. Thus, there are differences in scale, complexity and, hence, data points that can be captured at these various levels.

Another aspect of the question of establishing an integrated digital project and program management environment is sufficiency of data, which relates directly to scale. Sufficiency in this regard is defined as whether there is enough data to establish a valid correlation and, hopefully, to infer causation. Micro-economic foundations–and models–often fail because of insufficient data. This is important to keep in mind as we inventory the type of data available to us and its significance. Oftentimes additional data points can make up for those cases where there is insufficiency in the depth and quality of a more limited set of data points. Doing so will also mitigate subjectivity, especially in smaller efforts.
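As a rough illustration of the sufficiency point, the sketch below (which assumes Python 3.10 or later for statistics.correlation) draws samples of different sizes from a simple simulated relationship. The correlation estimate from a handful of points swings widely, while larger samples converge on the underlying value; the simulated relationship and sample sizes are invented for illustration.

```python
# Illustration: correlation estimates from small samples are unstable.
import random
from statistics import correlation  # available in Python 3.10+

random.seed(1)

def noisy_sample(n):
    """Simulate two related measures with substantial noise (true correlation ~0.45)."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [0.5 * x + random.gauss(0, 1) for x in xs]
    return xs, ys

for n in (5, 50, 5000):
    xs, ys = noisy_sample(n)
    print(f"n={n:>5}  estimated correlation={correlation(xs, ys):+.2f}")
```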

Thus, in constructing a project or program, regardless of its level of planning, we often begin by monitoring the most basic elements. These are usually described as cost, schedule, performance, and risk, though I will discuss and identify other contributors that can be indexed.

This first post will concentrate on the first set of elements–those that constitute cost. In looking at these, however, we will find that the elements within this category are a bit broader than what is currently used in determining project and program performance.

Contract Costs

When we refer to costs in project and program management we oftentimes are referring to those direct and indirect costs expended by the supplier over the course of the effort, particularly in cost-plus contractual efforts. The breakout of cost from a data perspective places it into subcategories:

Note that these are costs within the contract itself, as a cohesive, self-identifying entity. But there are other costs associated with our contracts which feed into program and project management. These are necessary to identify and capture if we are to take a holistic approach to these disciplines.

The costs that are anticipated by the contract are based on cost estimates, which need to be funded. These funded costs will be allocated to particular lines in the contract (CLINs), whether these be supporting contract efforts or deliverables. Thus, additional elements of our digital inventory include these items but lead us to our next categories.

Cost Estimates, Colors of Money, and Cash Flow

Cost estimates are the basis for determining the entire contract effort, and eventually make it into the project and program cost plan. Once cost estimates are applied and progress is tracked through the collection of actual costs, these elements are further traced to project and program activities, products, commodities, and other business categories, such as the indirect costs identified on the right hand side of the chart above.

Our cost plans need to be financed, as with any business entity. Though the most complex projects often are financed by some government entity because of their scale and impact, private industry–even among the largest companies–must obtain financing for the efforts at hand, whether these come from internal or external sources.

Thus two more elements present themselves: “colors” of money, that is, money that is provided for a specific purpose within the project and program cost plan which could also be made available for only some limited period of time, and the availability of that money sufficient to execute particular portions of the project or program, that is, cash flow.

The phase of the project or program will determine the type of money that is made available. These are also contained in the costs identified in the next section, but include, from a government financing perspective, Research, Development, Test and Evaluation (RDT&E) money, Procurement, Operations and Maintenance (O&M), and Military Construction (MILCON) dollars. By Congressional appropriation and authorization, each of these types of money may be provided for particular programs, and each type of authorization has a specific period in which it can be committed, obligated, and expended before it expires. The type of money provided also aligns with the phase of the project or program: whether it be in development, production, deployment and acquisition, sustainment, or retirement.
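As an illustration of how “colors” of money and their expiration might be represented as data, the sketch below uses commonly cited obligation windows for each appropriation type. These durations are assumptions included for illustration only and should be confirmed against current appropriations law and agency guidance.

```python
# Hypothetical sketch of appropriation "colors of money" and their obligation windows.
# The periods below are commonly cited values used here as assumptions for illustration;
# confirm against current appropriations law before relying on them.
OBLIGATION_YEARS = {
    "RDT&E": 2,        # research, development, test and evaluation
    "Procurement": 3,
    "O&M": 1,          # operations and maintenance
    "MILCON": 5,       # military construction
}

def can_obligate(appropriation: str, appropriated_fy: int, current_fy: int) -> bool:
    """Return True if funds of this color, appropriated in appropriated_fy,
    are still available for new obligations in current_fy."""
    window = OBLIGATION_YEARS[appropriation]
    return appropriated_fy <= current_fy < appropriated_fy + window

# Example: three-year procurement dollars from FY2024 remain available through FY2026,
# while one-year O&M dollars from FY2024 have expired by FY2025.
print(can_obligate("Procurement", 2024, 2026))  # True
print(can_obligate("O&M", 2024, 2025))          # False
```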

These costs will be reflected in reporting of actual and projected rates of expenditure, which will be tied to procurement, material management, and resource management systems.

Additional Relative Costs

As with all efforts, the supplier is not the only entity to incur costs on a development project or program. The customer also incurs costs, which must be taken into account in determining the total cost of the effort.

Anyone who has undergone any kind of major effort on their home, or even had to get other workaday things done, like deciding when to change the tires on the car or when to go to the dentist, implicitly understands that there is more effort in timing and determining the completion of these items than the cost of new kitchen cabinets, tires, or a filling. One must decide to take time off from work. One must look to their own cash flow to see if they have sufficient funds not only for the merchant, but for all of the sundry and associated tasks that must be done in preparation for and after the task’s completion. To choose to do one thing is to choose not to do another–an opportunity cost. Other people may be involved in the decision. Perhaps children are in the household and a babysitter is required. Perhaps home life is so disrupted that another temporary abode is necessary on a short-term basis.

All of these are costs that one must take into account, and at the individual level we do these calculations and plan these activities as a matter of fact.

In customer-supplier relationships the former incurs costs above the contract costs, which must be taken into account by the customer project or program executive. In the Department of Defense an associated element is called program management administration (PMA). For private entities this falls into allocated G&A and Overhead costs, aside from direct labor and material costs, but in all cases these are costs that have come about due to the decision to undertake the specific effort.

Other elements of cost on the customer side are contractually furnished facilities, property, material or equipment, and testing and evaluation costs.

Contract Cost Performance: Earned Value Management

I will discuss EVM in more detail in a later installment of this element inventory, but mention must be made of it here, since to exclude it would be grossly remiss.

At its core, EVM is a financial measure of the value of work physically achieved against a performance management baseline (PMB), tying actual costs and the completion of work together through a work breakdown structure (WBS). It is focused on the contract level of performance, which in some cases may constitute the entire project, though not necessarily the entire effort for the program.

Linkages to the other cost elements I have delineated elsewhere in this post range from strong to non-existent. Thus, while EVM is an essential means of linking contractual achievement to work accomplishment that, at various levels of fidelity, is tied to actual technical achievement, it does not capture all of the costs in our data inventory.
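Because EVM reduces to a handful of arithmetic relationships against the baseline, a short sketch may help underline the point that it is a contract-level cost and schedule measure rather than a comprehensive picture of cost. All figures below are invented for illustration, and the EAC shown is only one of several accepted formulas.

```python
# Basic earned value arithmetic against a performance measurement baseline.
# All values are illustrative; in practice they roll up through the WBS
# from control accounts or work packages.
bcws = 1_000_000.0   # budgeted cost of work scheduled (planned value)
bcwp = 900_000.0     # budgeted cost of work performed (earned value)
acwp = 950_000.0     # actual cost of work performed
bac = 5_000_000.0    # budget at completion

cv = bcwp - acwp     # cost variance (negative = overrun)
sv = bcwp - bcws     # schedule variance, expressed in dollars
cpi = bcwp / acwp    # cost performance index
spi = bcwp / bcws    # schedule performance index
eac = bac / cpi      # one common estimate-at-completion formula; others exist
vac = bac - eac      # variance at completion

print(f"CV={cv:,.0f}  SV={sv:,.0f}  CPI={cpi:.2f}  SPI={spi:.2f}")
print(f"EAC={eac:,.0f}  VAC={vac:,.0f}")
```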

An essential overview in understanding what it does capture is best summed up in the following diagram taken from the Defense Acquisition University (DAU) site:

Commercial EVM elements, while not necessarily using the same terminology or highly structured process, possess a similar structure in allocating costs and achievement against baseline costs in developmental efforts to work packages (oftentimes schedule tasks in resource-loaded schedules) under an integrated WBS structure with Management Reserve not included as part of the baseline.

Also note that commercial efforts often include their internal costs as part of the overall contractual effort in assessing earned value against actual work achievement, while government contracting efforts tend to exclude these inherent costs. That being said, it is not that there is no cost control over these elements, since strict ceilings often apply to PMA and other such costs; it is that contract cost performance does not take these costs, among others, into account.

Furthermore, the chart above provides us with additional sub-elements in our inventory that are essential in capturing data at the appropriate level of our project and program hierarchy.

Thus, for IPM, EVM is one of many elements that are part of our digital inventory–and one that provides a linkage to other non-cost elements (the WBS). But in no way should it be viewed as capturing all essential costs associated with a contractual effort, let alone the more expansive project or program effort.

Portfolio Management and Life-Cycle Costs

There is another level of management that is essential in thinking about project and program management, and that is the program executive level. In the U.S. military services these are called Program Executive Officers (PEOs). In private industry they are often product managers, CIOs, and other positions that often represent the link between the program management teams and the business operations side of the organization. Thus, this is also the level of management organized to oversee a number of individual projects and programs that are interrelated based on mission, commodity, or purpose. As such, this level of management often concentrates on issues across the portfolio of projects and programs.

The main purpose of the portfolio management level is to ensure that project and program efforts are aligned with the strategic goals of the organization, which includes an understanding of the total cost of ownership.

In performing this purpose one of the functions of portfolio management is to identify risks that may manifest within projects and programs, and to determine the most productive use of limited resources across them, since they are essentially competing for the same dollars. This includes cost estimates and re-allocations to address ontological, aleatory, and epistemic risk.

Furthermore, the portfolio level is also concerned with the life-cycle factors of the item under development, so that there is effective hand-off at the production and sustainment phases. The key here is to ensure that each project or program, which is focused on the more immediate goals of project and program execution, continues to meet the goals of the organization in terms of life-cycle costs, and its effectiveness in meeting the established goals essential to the project or program’s framing assumptions.

But here we are focusing on cost, and so the costs involved are trade-off costs and opportunities, assessments of return on investment, and the aforementioned total cost of ownership of the end item or system. The costs that contribute to the total cost of ownership include all of the development costs, external and internal program management costs, procurement costs, operations and support costs, maintenance and life extension costs, and system retirement costs.
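A trivial sketch can make the proportion concrete: if we roll up hypothetical life-cycle cost categories, the development effort visible to contract-level cost performance reporting is only one slice of the total cost of ownership. All figures are invented for illustration.

```python
# Illustrative life-cycle cost roll-up (all figures invented, in $M).
# Only a subset of these categories is visible in contract-level EVM reporting.
life_cycle_costs = {
    "development": 120.0,                    # includes the contracted R&D effort
    "program_management": 15.0,              # customer-side PMA plus supplier G&A/overhead
    "procurement": 300.0,
    "operations_and_support": 450.0,
    "maintenance_and_life_extension": 90.0,
    "retirement": 25.0,
}

total_ownership_cost = sum(life_cycle_costs.values())
contract_visible = life_cycle_costs["development"]  # roughly what contract EVM covers

print(f"Total cost of ownership: ${total_ownership_cost:.0f}M")
print(f"Share visible to contract cost performance: "
      f"{contract_visible / total_ownership_cost:.0%}")
```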

Conclusion

I believe that the survey of cost elements presented in this initial post illustrates that present digital project and program management systems are limited and immature–capturing and evaluating only a small portion of the total amount of available data.

These gaps make it impossible, for example, to determine the relative significance of any one element–and the analytics that can be derived from it–over another; not to mention the inability to provide the linkage among these absent elements that would garner insights into cause-and-effect and predictive behavior in time for us to influence the outcome.

It is also clear that, when we strive to define what constitutes integrated project and program management, we must learn what is of most importance to the PM in performing those duties that are viewed as essential to success, and which are not yet captured in our analytical and predictive systems.

Only when our systems reach the level of cohesiveness and comprehensiveness in providing organizational insight and intelligence essential to project or program management will PMs ignore them at their own risk. In getting there we must first identify what can be captured from the activities that contribute to our efforts.

My next post will identify essential elements related to planning and scheduling.

 

Note: I am indebted to Defense Acquisition University’s resources in my research across many of my postings and link to them for the edification of the reader. For more insight into many of the points raised in this post I would recommend that readers familiarize themselves with A Guide for DoD Program Managers.

 

Don’t Stop Thinking About Tomorrow–Post-Workshop Blogging…and some Low Comedy

It’s been a while since I posted to my blog due to meetings and–well–my day job, but some interesting things occurred during the latest meeting of the Integrated Program Management Division (IPMD) of the National Defense Industrial Association (NDIA) that I think are of interest. (You have to love acronyms to be part of this community.)

Program Management and Integrated Program Management

First off is the initiative by the Program Management Working Group to gain greater participation by program managers with an eye to more clearly define what constitutes integrated program management. As readers of this blog know, this is a topic that I’ve recently written about.

The Systems Engineering discipline is holding their 21st Annual Systems Engineering Conference in Tampa this year from October 22nd to the 25th. IPMD will collaborate and will be giving a track dedicated to program management. The organizations have issued a call for papers and topics of interest. (Full disclosure: I volunteered this past week to participate as a member of the PM Working Group).

My interest in this topic is based on my belief, drawn from years of wide-ranging experience in duties as a warranted government contracting officer, program manager, business manager, CIO, staff officer, and logistics officer, that there is much more to defining IPM than viewing it through the prism of any particular discipline. Furthermore, doing so will require collaboration and cooperation among a number of project management disciplines.

This is a big topic where, I believe, no one group or individual has all of the answers. I’m excited to see where this work goes.

Integrated Digital Environment

Another area of interest that I’ve written about in the past involved two different–but related–initiatives on the part of the Department of Defense to collect information from their suppliers that is necessary in their oversight role, not only to ensure accountability of public expenditures, but also to assist in project cost and schedule control, risk management, and cost estimation, particularly as it relates to risk-sharing, cost-type R&D contracted project efforts.

Two major staffs in the Offices of the Undersecretary of Defense have decided to go with a JSON-type schema for, on the one hand, cost estimating data, and on the other, integrated cost performance, schedule, and risk data. Each initiative seeks to replace the existing schemas in place.

Both have been wrapped around the axle on getting industry to move from form-based reporting and data sharing to a data-agnostic solution that meets the goals of reducing redundancy in data transmission, reducing the number of submissions and data streams, and moving toward one version of the truth that allows SMEs on both sides of the table to concentrate on data analysis and interpretation in jointly working toward the goal of successful project completion and end-item deployment.
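I have no insight into the final field names or structure of the proposed schemas, so the snippet below is a purely hypothetical illustration of the general idea: performance data submitted as structured, data-agnostic records rather than as a formatted report.

```python
# Hypothetical example only: not the actual DoD schema.
# Illustrates the idea of submitting performance data as structured records
# rather than as a form-based report.
import json

submission = {
    "contract_number": "EXAMPLE-0001",   # placeholder identifier
    "reporting_period": "2018-06",
    "wbs_elements": [
        {
            "wbs_id": "1.1.1",
            "bcws": 125000.0,
            "bcwp": 118000.0,
            "acwp": 131000.0,
            "schedule_task_ids": ["T-1040", "T-1041"],  # links back to IMS tasks
        },
    ],
}

print(json.dumps(submission, indent=2))
```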

As with the first item, I am not a disinterested individual in this topic. Back when I wore a uniform I helped to construct DoD policy to create an integrated digital environment. I’ve written about this experience previously in this blog, so I won’t bore you with the details, but the need for data sharing on cost-type efforts acknowledges the reality of the linkage between our defense economic and industrial base and the art of the possible in deploying defense-related end items. The same relationship exists for civilian federal agencies with the non-defense portion of the U.S. economy. Needless to say, a good many commercial firms unrelated to defense are going the same way.

The issue here is two-fold, I think, from speaking with individuals working these issues.

The first is, I think, that too much deference is being given to solution providers and to some industry stakeholders, influenced by those providers, in “working the refs” through the data. The effect of doing so not only slows down the train and protects entrenched interests, it also gets in the way of innovation, allowing the slowest among the group to hold up the train in favor of–to put it bluntly–learning their jobs on the job at the expense of efficiency and effectiveness. As I expressed in a side conversation with an industry leader, all too often companies–who, after all, are the customer–have allowed their view of the possible to be defined by the limitations and inflexibility of their solution providers. At some point that dysfunctional relationship must end–and in the case of comments clearly identified as working the refs, they should be ignored. Put your stake in the ground and let innovation and market competition sort it out.

Secondly, cost estimating, which is closely tied to accounting and financial management, is new to this process and considered tangential to other, more mature performance management systems. My own firm is involved in producing a solution in support of this process, collecting data related to these reports (known collectively in DoD as the 1921 reports), placing that data in a common data lake, and exploring with organizations what it tells us, since we are only now learning what it can tell us. This is classical KDD–Knowledge Discovery in Data–and a worthwhile exercise.

I’ve also advocated going one step further in favor of the collection of financial performance data (known as the Contract Funds Status Report), which is an essential reporting requirement, but am frustrated to find no one willing to take ownership of the guidance regarding data collection. The tragedy here is that cost performance, known broadly as Earned Value Management, is a technique that relates the value of work performed to other financial and project planning measures (a baseline and actuals). But in a business (or any enterprise), the fuel that drives the engine is finance, and two essential measures are margin and cash flow. The CFSR is a report of program cash flow and financial execution. It is an early measure of whether a program will execute its work in any given time frame, and provides a reality check on the statistical measures of performance against baseline. It is also a necessary logic check for comptrollers and other budget decision-makers.

Thus, as it relates to data, there has been some push-back against a settled schema, where the government accepts flat files and converts the data to the appropriate format. I see this as an acceptable transient solution, but not an ultimate one. It is essential to collect both cost estimating and contract funds status information to perform any number of operations that relate to “actionable” intelligence: having the right executable money at the right time, a reality check against statistical and predictive measures, value analysis, and measures of ROI in development, just to name a few.

I look forward to continuing this conversation.

To Be or Not to Be Agile

The Section 809 Panel, which is the latest iteration of acquisition reform panels, has recommended that performance management using earned value not be mandated for efforts using Agile. It goes on, however, to assert that the program executive “should approve appropriate project monitoring and control methods, which may include EVM, that provide faith in the quality of data and, at a minimum, track schedule, cost, and estimate at completion.”

Okay…the panel is then mute on what those monitoring and control measures will be. Significantly, if only subtly, the #NoEstimates crowd took a hit, since the panel recommends and specifies data quality, schedule, cost, and EAC. Sounds a lot like a form of EVM to me.

I must admit to being a skeptic when it comes to swallowing the Agile doctrine whole. Its micro-economic foundations are weak and much of it sounds like ideology–bad ideology at best and disproved ideology at worst (specifically the woo-woo about self-organization…think of the last speculative bubble and the resulting financial crisis and depression along these lines).

When it comes to named methodologies I am somewhat from Missouri. I apply (and have applied, in previous efforts in the Dark Ages back when I wore a uniform) Kanban, teaming, adaptive development (enhanced greatly today by using modern low-code technology), and short sprints that result in releasable modules. But keep in mind that these things were out there long before they were grouped under a common heading.

Perhaps Agile is now a convenient catch-all for best practices. But if that is the case, then software development projects using this redefined version of Agile deserve no special dispensation. That said, I was schooled a bit by an Agile program manager during a side conversation, and I am always open to understanding things better and revising my perspectives. It’s just that there was never a Waterfall/Agile dichotomy, just as there never really was a Spiral/Waterfall dichotomy. These were simply convenient development models to describe processes geared to the technology of the moment.

There are very good people on the job exploring these issues on the Agile Working Group in the IPMD and I look forward to seeing what they continue to come up with.

Rip Van Winkle Speaks!

The only disappointing presentation occurred on the second and last day of the meeting. It seemed we were treated to a voice from somewhere around the year 2003 that, in what can only be described as performance art involving free association, talked about wandering the desert, achieving certification for a piece of software (which virtually all of the software providers in the room have successfully navigated at one time or another), discovering that cost and schedule performance data can be integrated (ignoring the work of the last ten years on the part of, well, a good many people in the room), that there was this process known as the Integrated Baseline Review (which, again, a good many people in the room had collaborated on to both define and make workable), and–lo and behold–that the software industry uses schemas and APIs to capture data (known in Software Development 101 as ETL). He then topped off his meander with an unethical excursion into product endorsement, selected through an opaque process.

For this last, the speaker was either unaware or didn’t care (usually called tone-deafness) that the event’s expenses were sponsored by a software solution provider (not mine). But it is also as if the individual speaking was completely unaware of the work behind the many topics I’ve listed above in this subsection, ignoring and undermining the hard work of the other stakeholders that make up our community.

On the whole an entertaining bit of poppycock, which leads me to…

A Word about the Role of Professional Organizations (Somewhat Inside Baseball)

In this blog, and in my interactions with other professionals at–well–professional conferences, I check my self-interest at the door and publicly take a non-commercial stance. It is a position that is expected and, I think, appreciated. For those who follow me on social networking sites like LinkedIn, posts from my WordPress blog originate from a separate source from the commercial announcements that are linked to my page and originate from my company.

If there are exhibitor areas, as some conferences and workshops do have, that is one thing. That’s where we compete and play; and in private side conversations customers and strategic partners will sometimes use the opportunity as a convenience to discuss future plans and specific issues that are clearly business-related. But these are the exceptions to the general rule, and there are a couple of reasons for this, especially at this venue.

One is that, while it is a large market, it is a small community, and virtually everyone at the regular meetings and conferences I attend already knows that I am the CEO and owner of a small software company. But the IPMD is neutral ground. It is a place where government and industry stakeholders, who in other roles and circumstances are in a contractual or competing relationship, come to work out the best way of hashing out processes and procedures that will hopefully improve the discipline of program and project management. It is also a place of discovery, where policies, new ideas, and technologies can be vetted in an environment of collaboration.

Another reason for taking a neutral stance is simply because it is both the most ethical and productive one. Twenty years ago–and even in some of the intervening years–self-serving behavior was acceptable at the IPMD meetings where both leadership and membership used the venue as a basis for advancing personal agendas or those of their friends, often involving backbiting and character assassination. Some of those people, few in number, still attend these meetings.

I am not unfamiliar with the last–having been a target at one point by a couple of them but, at the end of the day, such assertions turned out to be without merit, undermining the credibility of the individuals involved, rightfully calling into question the quality of their character. Such actions cannot help but undermine the credibility and pollute the atmosphere of the organization in which they associate, as well.

Finally, the companies and organizations that sponsor these meetings–which are not cheap to organize, which I know from having done so in the past–deserve to have the benefit of acknowledgment. It’s just good manners to play nice when someone else is footing the bill–you gotta dance with those that brung you. I know my competitors and respect them (with perhaps one or two exceptions). We even occasionally socialize with each other and continue long-term friendships and friendly associations. Burning bridges is just not my thing.

On the whole, however, the NDIA IPMD meetings–and this one in particular–were productive and positive, focused on the future and on professional development. That’s where, I think, we as a community need to be and need to stay. I always learn something new and get my dose of reality from a broad-based perspective. In getting here the leadership of the organization (and the vast majority of the membership) is to be commended, as well as the recent past and current members of the Department of Defense, especially since the formation of the Performance Assessments and Root Cause Analysis (PARCA) office.

In closing, there were other items of note discussed, along with what can only be described as the best pair of keynote addresses that I’ve heard in one meeting. I’ll have more to say about some of the concepts and ideas that were presented there in future posts.

Here It Is–Integrated Project Management and Its Definition

I was recently at a customer site and, while discussing the topic of this post, I noticed a book displayed prominently on the bookshelf behind my colleague entitled “Project Management Using Earned Value” by Gary Humphreys. It is a book that I have on my shelf as well and is required reading by personnel in my company.

I told my colleague: “One of the problems with our ability to define IPM is the conceit embedded in the title of that book behind you.”

My colleague expressed some surprise at my intentionally provocative comment, but he too felt that EVM had taken on a role that was beyond its intent, and so asked for more clarification. Thus, this post is meant to flesh out some of these ideas.

But before I continue, here was my point: while the awkward wording of the title unintentionally creates a syllogism that can be read as suggesting that applying earned value will result in project management–an invalid conclusion based on a specious assumption–there are practitioners who would lend credence to that idea.

Some History in Full Disclosure

Before I begin, some full disclosure is in order. When I was on active duty in the United States Navy I followed my last mentor to the Pentagon, who felt that my perspective on acquisition and program management would be best served on the staff of the Undersecretary of Defense for Acquisition and Technology, which subsequently also came to include logistics.

The presence of a uniformed member of the Armed Forces was unusual for that staff at the time (1996). My boss, Dan Czelusniak, a senior SES who was a highly respected leader, program manager, engineer, thought leader and, for me, mentor, had first brought me on his staff at the U.S. Navy Naval Air Systems Command in PEO(A), and gave me free rein to largely define my job.

For that assignment I combined previously separate duties: Program Manager of an initiative to develop a methodology for assessing technical performance measurement; Business Manager of the PEO, which led me to the use of earned value management and its integration with other program indicators and systems; development of a risk assessment system for the programs in the PEO to support the establishment of a DoD financial management reserve; support to the program managers and their financial managers in the budget hearing process; and CIO for the programs, identifying and introducing new information technologies in their support.

While there, I had decided to retire from the service after more than 22 years on active duty, but my superiors felt that I had a few more ideas to contribute to the DoD, and did what they could to convince me to stay on a while longer. Having made commitments in my transition, I set my date in the future, but agreed to do the obligatory Pentagon tour of duty to cap my career. Dan had moved over to Undersecretary of Defense for Acquisition and Technology (USD(A&T)) and decided that he wanted me on the OUSD(A&T) staff.

As in PEO(A), Mr. Czelusniak gave me the freedom to define my position with the approval of my immediate superior, Mr. Gary Christle. I chose the title as Lead Action Officer, Integrated Program Management. Mr. Christle, who was a brilliant public servant and thought leader as well, widely heralded in the EVM community, asked me with a bemused expression, “What is integrated program management?” I responded: “I don’t know yet sir but I intend to find out.” Though I did not have a complete definition, I had a seed of an idea.

My initiatives on the staff began with an exploration of data and information. My thinking along these lines early in my career was influenced by a book entitled “Logistics in the National Defense” by retired Admiral Henry E. Eccles, written in 1959. It is a work that still resonates today and established the important concept that “logistics serves as the bridge between a nation’s economy and its forces and defines the operational reach of the joint force commander.” The U.S. Army site referenced for this quote calls him the Clausewitz of logistics.

Furthermore my work as Program Manager of the Technical Performance Management project, and earlier assignments as program manager of IT and IM projects, provided me with insights into the interrelationships of essential data that was being collected as a matter of course in R&D efforts that would provide the basis for a definition of IPM.

In concluding my career on the OSD staff, I produced two main products, among others: a methodology for the integration of technical performance risk in project management performance, and the policy of moving toward what became the DoD-wide policy for an Integrated Digital Environment (IDE). This last initiative was produced with significant contributions from the staff of the Deputy Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (C3I) as well as additional work by my colleague on the A&T staff, Reed White.

Products from IDE included the adoption of the ANSI X12 839 transaction set. Its successors, such as the DCARC UN/CEFACT XML and other similar initiatives are based on that same concept and policy, though, removed by many years, the individuals involved may be only vaguely aware of that early policy–or the controversy that had to be overcome in its publication given its relatively common sense aspects from today’s perspective.

The Present State

Currently there are at least four professional organizations that have attempted to tackle the issue of integrated program and project management. These are the Project Management Institute, NDIA’s Integrated Program Management Division, the College of Performance Management, and the American Association of Cost Engineers. There are also other groups focused in systems engineering, contracting, and cost estimating that contribute to the literature.

PMI is an expansive organization and, oftentimes, the focus of the group is on aspirational goals by those who wish to obtain a credential in the discipline. The other groups tend to emphasize their roots in earned value management or cost engineering as the basis for a definition of IPM. The frustration of many professionals in the A&D and DoD world is that the essential input and participation of the program manager, needed to define the data that constitutes IPM and to bridge the seas of separation between islands of data and expertise, is missing.

Things didn’t used to be this way.

When I served at NAVAIR and the Pentagon the jointly-sponsored fall Integrated Program Management Conference held in Tyson’s Corner, Virginia, would draw more than 600 attendees. Entire contingents from the military systems commands and program offices–as well as U.S. allied countries–would attend, lending the conference a synergy and forward-looking environment not found in other venues. Industries outside of aerospace and defense would also send representatives and contribute to the literature.

As anyone engaged in a scientific or engineering effort can attest, sharing expertise and perspectives among other like professionals from both industry and government is essential to developing a vital, professional, and up-to-date community of knowledge.

During the intervening years, the overreaction of the public and the resulting political reaction to a few isolated embarrassing incidents at other professional conferences, along with constraints on travel and training budgets, have contributed to a noticeable drop in attendance at these essential venues. But I think there is also an internal contributing factor within the organizations themselves. That factor is that each views itself and its discipline as the nexus of IPM. Thus, to PMI, a collection of KPIs is the definition of IPM. To CPM and NDIA IPMD, earned value management is the link to IPM, and to AACEI, Total Cost Management is the basis for IPM.

All of them cannot be correct and none possesses an overwhelming claim.

The present state finds members of each of these groups–all valuable subject matter experts, leaders, and managers in their areas of concentration–essentially talking to themselves and each other, insulated in a bubble. There is little challenge in convincing another EVM SME that EVM is the basis for the integration of other disciplines. What is not being done is making a convincing case to program managers based on the merits.

A Modest Recommendation

Subsequent to the customer meeting that sent me to, once again, contemplate IPM, Gordon Kranz, President of Enlightened Integrated Program Management LLC posted the following question to LinkedIn:

Integrated Program Management – What is it?  Systems Engineering? Earned Value Management? Agile Development? Lean? Quality? Logistics? Building Information Modeling? …

He then goes on to list some basic approaches that may lead to answering that question. Still the question exists.

Mr. Kranz was the Deputy Director for Earned Value Management policy at the Office of the Secretary of Defense from 2011-2015. During his term in that position I witnessed more innovation and improved relations between government and industry, which resulted in process improvements in accountability and transparency, than I had seen come out of that office over the previous ten years. He brings with him a wealth of knowledge concerning program management from both government and private industry. Now a private consultant, Gordon’s question goes to the heart of the debate in addressing the claims of those who claim to be the nexus of IPM.

So what is Integrated Project or Program Management? Am I any closer to answering that question than when Gary Christle first asked it of me over twenty years ago?

I think so, but I abstain from answering the question, if only because in the end it is the community of program managers in their respective verticals that must ultimately answer it. Only the participation and perspectives of practicing program managers and corporate management will determine the definition of IPM and the elements that underlie it. Self-interested software publishers, of which I am one, cannot be allowed to define and frame the definition, as much as it is tempting to do so.

These elements must be specific and must address the most recent misunderstandings that have arisen in the PM discipline, such as that there is a dichotomy between EVM and Agile–a subject fit for a different blog post.

So here is my modest recommendation: that the leaders of the program management community from the acquisition organizations in both industry and government–where the real power to make decisions resides and where the discussions that sparked this blog post began–find a sponsor for an IPM workshop that addresses this topic, with the goal of answering the core question. Make no mistake–despite my deference in this post, I intend to be part of the conversation in defining that term. But, in my opinion, no one individual or small group of specialized SMEs are qualified to do so.

Furthermore, doing so, I believe, is essential to the very survival of these essential areas of expertise, particularly given our ability to deploy more powerful information systems that allow us to process larger sets of data. The paradox of more powerful processing of bigger data results in a level of precision that reveals the need for fewer, not more, predictive indicators and less isolated line-and-staff specialized expertise. Discovery-driven project management is here today, bridging islands of data, and providing intelligence in new and better ways that allow for a more systemic approach to project management.

Thus, in this context, a robust definition of Integrated Project Management is an essential undertaking for the discipline.

(Data) Transformation–Fear and Loathing over ETL in Project Management

ETL stands for data extract, transform, and load. This essential step is the basis for all of the new capabilities that we wish to acquire during the next wave of information technology: business analytics, big(ger) data, interdisciplinary insight into processes that provide insights into improving productivity and efficiency.

I’ve been dealing with a good deal of fear and loathing regarding the introduction of this concept, even though in my day job my organization is a leading practitioner in the field in its vertical. Some of this is due to disinformation by competitors playing upon the fears of the non-technically minded–the expected reaction of those who can’t do, in the last throes of avoiding irrelevance. Better to baffle them with bullshit than with brilliance, I guess.

But, more importantly, part of this is due to the state of ETL and how it is communicated to the project management and business community at large. There is a great deal to be gained here by muddying the waters even by those who know better and have the technology. So let’s begin by clearing things up and making this entire field a bit more coherent.
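To take some of the mystery out of the term, here is a minimal ETL sketch in Python: extract records from a source file, transform them into a normalized structure, and load them into a target table. The file name and column names are made up for illustration; production pipelines add validation, logging, error handling, and incremental loading on top of this skeleton.

```python
# Minimal extract-transform-load sketch (illustrative file and column names).
import csv
import sqlite3

def extract(path):
    """Extract: read raw rows from a source file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: normalize types and field names into a common structure."""
    return [
        (row["task_id"].strip(), row["wbs_id"].strip(), float(row["hours"]))
        for row in rows
        if row.get("hours")  # drop rows with missing values
    ]

def load(records, db_path="performance.db"):
    """Load: write the normalized records into a target table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS task_hours "
                "(task_id TEXT, wbs_id TEXT, hours REAL)")
    con.executemany("INSERT INTO task_hours VALUES (?, ?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("schedule_export.csv")))  # hypothetical source file
```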

Let’s start with the basics. Any organization that contains the interaction of people is a system. For purposes of a project management team, a business enterprise, or a governmental body we deal with a special class of systems known as Complex Adaptive Systems: CAS for short. A CAS is a non-linear learning system that reacts and evolves to its environment. It is complex because of the inter-relationships and interactions of more than two agents in any particular portion of the system.

I was first introduced to the concept of CAS through readings published out of the Santa Fe Institute in New Mexico. Most noteworthy is the work The Quark and the Jaguar by the physicist Murray Gell-Mann. Gell-Mann received the Nobel Prize in Physics in 1969 for his work on elementary particles, such as the quark, and is a co-founder of the Institute. He was also part of the team that first developed simulated Monte Carlo analysis during a period he spent at RAND Corporation. Anyone interested in the basic science of quanta and how the universe works–and how that leads to insights into subjects such as day-to-day probability and risk–should read this book. It is a good popular scientific publication written by a brilliant mind, but very relevant to the subjects we deal with in project management and information science.

Understanding that our organizations are CAS allows us to apply all sorts of tools to better understand them and their relationship to the world at large. From a more practical perspective: what are the risks involved in the enterprise in which we are engaged, and what are the probabilities associated with the range of outcomes that we can label as success? For my purposes, the science of information theory is at the forefront of these tools. In this world an engineer by the name of Claude Shannon, working at Bell Labs, essentially invented the mathematical basis for everything that followed in the world of telecommunications: generating, interpreting, receiving, and understanding intelligence in communication, and the methods of processing information. Needless to say, computing is the main recipient of this theory.

Thus, all CAS process and react to information. The challenge for any entity that needs to survive and adapt in a continually changing universe is to ensure that the information being received is of high and relevant quality, so that the appropriate adaptation can occur. There will be noise in the signals that we receive. What we are looking for, from a practical perspective in information science, are the regularities in the data, so that we can connect the mathematical sense in which a transmitted message is successfully received with the definition of information quality that we find in the humanities. I believe that we will find that mathematical link eventually, but there is still a void there. A good discussion of this difference can be found here in the on-line publication Double Dialogues.
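
For reference, since it anchors the discussion that follows, Shannon’s two best-known results can be stated compactly. These are standard textbook formulas–the entropy of a discrete source and the capacity of a noisy channel–reproduced here only for context.

```latex
% Entropy of a discrete source: the average information content per symbol
H(X) = -\sum_{i} p(x_i)\,\log_2 p(x_i) \quad \text{(bits per symbol)}

% Shannon–Hartley capacity of a channel with bandwidth B and signal-to-noise ratio S/N
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{(bits per second)}
```

The first tells us how much regularity (and therefore compressibility) a data stream contains; the second tells us how much of it can survive a noisy channel. Neither says anything about whether the message is worth receiving–which is exactly the gap described above.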

Regardless of this gap, those of us who engage in the business of ETL must bring to the table the ability not only to ensure that the regularities in the information are identified and transmitted to the intended (or necessary) users, but also to distinguish the quality of the message in terms of the purpose of the organization. Shannon’s equation is where we start, not where we end. Given this background, there are really two basic types of data that we encounter: structured and unstructured.

Structured data is data in which the qualitative information content is either predefined by its nature or by a tag of some sort. For example, schedule planning and performance data, regardless of the idiosyncratic/proprietary syntax used by a software publisher, describes the same phenomena regardless of the software application. There are only so many ways to identify snow–and, no, the Inuit people do not have 100 words to describe it. Qualifiers apply in the humanities, but usually our business processes more closely align with statistical and arithmetic measures. As a result, structured data is oftentimes defined by its position in a hierarchical, time-phased, or interrelated system that contains a series of markers, indexes, and tables that allow it to be interpreted easily through the identification of a Rosetta stone, even when the system, at first blush, appears to be opaque. When you go to a book, its title describes what it is. If its content has a table of contents and/or an index, it is easy to find the information needed to perform the task at hand.

Unstructured data consists of the content of things like letters, e-mails, presentations, and other forms of data disconnected from its source systems and collected together in a flat repository. In this case the data must be mined to recreate what is not there: the title that describes the type of data, a table of contents, and an index.

All data requires initial scrubbing and pre-processing. The difference here is the means used to perform this operation. Let’s take the easy path first.

For project management–and most business systems–we most often encounter structured data. What this means is that, by understanding and interpreting standard industry terminology, schemas, and APIs, the process of aligning data to be transformed and stored in a database for consumption can be reduced to a systemic and repeatable one, without the redundancy of rediscovery applied in every instance. Our business intelligence and business analytics systems can be further developed to anticipate a probable question from a user so that the query is pre-structured to allow for near-immediate response. Further, structuring the user interface in such a way as to make the response to the query meaningful–especially when integrated with and juxtaposed against other types of data–requires subject matter expertise to be incorporated into the solution.
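
To show what a systemic and repeatable alignment looks like in practice, the short sketch below maps two vendors’ proprietary export field names onto one standard schema. The vendor names and field names are hypothetical; the point is that adding a new source means adding a new mapping, not rewriting the pipeline.

```python
# Hypothetical mappings from tool-specific export field names to a standard schema.
FIELD_MAPS = {
    "vendor_a": {"Task ID": "task_id", "Pct Cmp": "pct_complete", "Fin Date": "finish_date"},
    "vendor_b": {"ActivityID": "task_id", "PercentComplete": "pct_complete", "Finish": "finish_date"},
}

def normalize(record: dict, source: str) -> dict:
    """Translate a single exported record into the standard, tool-agnostic schema."""
    mapping = FIELD_MAPS[source]
    return {standard: record[raw] for raw, standard in mapping.items()}

# Usage: the same function handles either source without custom code per file.
print(normalize({"ActivityID": "A100", "PercentComplete": "45", "Finish": "2018-06-01"}, "vendor_b"))
```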

Structured ETL is the place that I most often inhabit as a provider of software solutions. These processes are both economical and relatively fast, particularly in those cases where they are applied to an otherwise inefficient system of best-of-breed applications that require data transfers and cross-validation prior to official reporting. Time, money, and effort are all saved by automating this process, improving not only processing time but also data accuracy and transparency.

In the case of unstructured data, however, the process can be a bit more complicated and there are many ways to skin this cat. The key here is that oftentimes what seems to be unstructured data is only so because of the lack of domain knowledge by the software publisher in its target vertical.

For example, I recently read a white paper published by a large BI/BA publisher regarding their approach to financial and accounting systems. My own experience as a business manager and Navy Supply Corps Officer provides me with the understanding that these systems are highly structured and regulated. Yet this business intelligence publisher treated the data with an unstructured approach to mining it–and blatantly advertised and apparently sold that approach as state of the art.

This approach, which was first developed back in the 1980s when we first encountered the challenge of data that exceeded our expertise at the time, requires a team of data scientists and coders to go through the labor- and time-consuming process of pre-processing and building specialized processes. The most basic form of this approach involves techniques such as frequency analysis, summarization, correlation, and data scrubbing. This last portion also involves labor-intensive techniques at the microeconomic level such as binning and other forms of manipulation.
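
For a feel of what those basic techniques look like in code, here is a small sketch of frequency analysis and binning over a handful of invented free-text notes, using only the Python standard library. The notes and variance figures are made up for illustration.

```python
import re
from collections import Counter

notes = [
    "Schedule slip on task A100 due to late vendor delivery",
    "Vendor delivery late again; task A200 schedule at risk",
    "Cost variance on A100 traced to rework",
]

# Frequency analysis: count word occurrences across the unstructured notes
words = Counter(w for note in notes for w in re.findall(r"[a-z0-9]+", note.lower()))
print(words.most_common(5))

# Binning: group numeric values (hypothetical variance percentages) into ranges
variances = [2.5, 7.8, 12.1, 3.3, 9.9]
bins = {"low (<5%)": 0, "medium (5-10%)": 0, "high (>10%)": 0}
for v in variances:
    if v < 5:
        bins["low (<5%)"] += 1
    elif v <= 10:
        bins["medium (5-10%)"] += 1
    else:
        bins["high (>10%)"] += 1
print(bins)
```

Multiply this by thousands of sources and formats and the labor intensity described above becomes obvious.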

This is where the fear and loathing comes into play. It is not as if information systems do not perform these functions in some manner; it is that with structured data all of this work has already been done and, oftentimes, is handled by the database system. But even here there is a better way.

My colleague, Dave Gordon, who has his own blog, will emphasize that the identification of probable questions and configuration of queries in advance combined with the application of standard APIs will garner good results in most cases. Yet, one must be prepared to receive a certain amount of irrelevant information. For example, the query on Google of “Fun Things To Do” that you may use if you are planning for a weekend will yield all sorts of results, such as “50 Fun Things to Do in an Elevator.”  This result includes making farting sounds. The link provides some others, some of which are pretty funny. In writing this blog post, a simple search on Google for “Google query fails” yields what can only be described as a large number of query fails. Furthermore, this approach relies on the data originator to have marked the data with pointers and tags.

Given these different approaches to unstructured data and the complexity involved, there is a decision process to apply (a minimal sketch in code follows the list):

1. Determine if the data is truly unstructured. If the data is derived from a structured database from an existing application or set of applications, then it is structured and will require domain expertise to inherit the values and information content without expending unnecessary resources and time. A structured, systemic, and repeatable process can then be applied. Oftentimes an industry schema or standard can be leveraged to ensure consistency and fidelity.

2. Determine whether only a portion of the unstructured data is relevant to your business processes, and use it to append and enrich the existing structured data that has been used to integrate and expand your capabilities. In most cases the identification of a Rosetta Stone and standard APIs can be used to achieve this result.

3. For the remainder, determine the value of mining the targeted category of unstructured data and perform a business case analysis.
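
Reduced to a skeleton, that decision process might look like the sketch below. The helper functions stand in for the domain judgment described above; they are placeholders, not real library calls.

```python
def comes_from_structured_source(dataset) -> bool:
    # Placeholder judgment: does the data originate in an application database?
    return dataset.get("origin") == "application_db"

def relevant_portion(dataset):
    # Placeholder judgment: which fields map onto existing business processes?
    return [f for f in dataset.get("fields", []) if f in {"task_id", "cost", "finish_date"}]

def triage(dataset) -> str:
    """Skeleton of the three-step decision process above."""
    if comes_from_structured_source(dataset):
        # 1. Truly structured: apply a repeatable, schema-driven ETL process
        return "structured ETL (leverage an industry schema)"
    if relevant_portion(dataset):
        # 2. Partially relevant: append and enrich existing structured data
        return "enrich structured data via a Rosetta Stone and standard APIs"
    # 3. Remainder: justify the mining effort with a business case
    return "business case analysis before mining"

print(triage({"origin": "email_archive", "fields": ["task_id", "body_text"]}))
```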

Given the rapidly expanding size of the data that we can access using the advancing power of new technology, we must be able to distinguish between doing what is necessary and doing what is impressive. The definition of Big Data has evolved over time because our hardware, storage, and database systems allow us to access increasingly larger datasets that ten years ago would have been unimaginable. What this means is that–initially–as we work through this process of discovery, we will be bombarded with a plethora of irrelevant statistical measures and so-called predictive analytics that will eventually fail the “so-what” test. This process places users in a state of information overload, and we often see this condition today. It also means that what took an army of data scientists and developers to do ten years ago can be performed today by a technologist with a laptop and some domain knowledge. This last can be taught.

The next necessary step, aside from applying the decision process above, is to push our information systems to advance their processing so that they provide more relevant intelligence, visualized and configured to the domain expertise required. In this way we will eventually discover the paradox that effectively accessing larger sets of data yields fewer, more relevant pieces of intelligence that can be translated into action.

At the end of the day the manager and user must understand the data. There is no magic in data transformation or data processing. Even with AI and machine learning it is still incumbent upon the people within the organization to be able to apply expertise, perspective, knowledge, and wisdom in the use of information and intelligence.

Move It On Over — Third and Fourth Generation Software: A Primer

While presenting to organizations regarding business intelligence and project management solutions I often find myself explaining the current state of programming and what current technology brings to the table. Among these discussions is the difference between third and fourth generation software, not just from the perspective of programming–or the Wikipedia definition (which is quite good, see the links below)–but from a practical perspective.

Recently I ran into someone who asserted that their third-generation software solution was advantageous over a fourth generation one because it was “purpose built.” My response was that a fourth generation application provides multiple “purpose built” solutions from one common platform in a more agile and customer-responsive environment. For those unfamiliar with the differences, however, this simply sounded like a war of words rather than the substantive debate that it was.

Most people who have used a software application are not aware of the three basic logical layers that make up the solution: the business logic layer, the application layer, and the database structure. The user interface delivers the result of the interaction of these three layers to the user–what is seen on the screen.

Back during the early advent of the widespread use of PCs and of distributed computing on centralized systems, a group of powerful languages emerged that allowed machine operations to be handled by an operating system, freeing software developers to write code focused on “purpose built” solutions.

Initially these efforts concentrated on automating highly labor-intensive activities to achieve maximum productivity gains in an organization, and on leveraging existing systems to distribute information that would previously have required many hours of manual effort in mathematical and statistical calculation and visualization. The solutions written were based on what were referred to as third generation languages, and they are familiar even to non-technical people: Fortran, COBOL, C, C++, C#, and Java, among others. These languages are highly structured and require a good bit of expertise to program correctly.

In third generation environments, the coder specifies operations that the software must perform based on data structure, application logic, and pre-coded business logic. These three layers are highly integrated, and any change in one of them requires that the programmer trace the impact of that change to ensure that the operations in the other two layers are not affected. Oftentimes the change has a butterfly effect, requiring detailed adjustments to take into account the subtleties in processing. It is this highly structured, interdependent, “purpose built” structure that causes unanticipated software bugs to pop up in most applications. It is also the reason why software development and upgrade configuration control is so highly structured and time-consuming–requiring long lead-times to deliver what most users view as relatively mundane changes and upgrades, like a new chart or graph.

In contrast, fourth generation applications separate the three layers and control the underlying behavior of the operating environment by leveraging a standard framework, such as .NET. The .NET operating environment, for example, provides both a library for interoperability across programming languages (known as the Framework Class Library, or FCL) and a virtual machine that handles exception handling, memory management, and other common functions (known as the Common Language Runtime, or CLR).

With the three layers separated, and many of the more mundane background tasks handled by the .NET framework, the software developer gains a great deal of freedom that translates into real benefits for customers and users.

For example, the database layer is freed from specific coding tied to the application layer, since the operating environment allows libraries of industry-standard APIs to be leveraged, making the solution agnostic to data. Furthermore, the business logic/UI layer allows for table-driven and object-oriented configuration that creates a low code environment, which not only allows for rapid roll-out of new features and functionality (since hard-coding across all three layers is eschewed), but also allows for more precise targeting of functionality based on the needs of user groups (or any particular user).
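
As a toy illustration of what table-driven configuration means, the sketch below keeps report definitions in data rather than code, so a new view for a particular user group is added by editing the table, not the program. The report names and fields are invented.

```python
# Hypothetical table-driven report definitions: the behavior lives in data, not code.
REPORT_DEFINITIONS = {
    "cost_summary":    ["task_id", "budget", "actuals"],
    "schedule_health": ["task_id", "pct_complete", "finish_date"],
}

def run_report(name, rows):
    """Generic engine: one code path serves every report defined in the table."""
    fields = REPORT_DEFINITIONS[name]
    return [{field: row[field] for field in fields} for row in rows]

# Usage: adding a third report is a one-line change to REPORT_DEFINITIONS.
rows = [{"task_id": "A100", "budget": 100.0, "actuals": 80.0,
         "pct_complete": 0.75, "finish_date": "2018-06-01"}]
print(run_report("cost_summary", rows))
```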

This is what is meant in previous posts by new technology putting the SME back in the driver’s seat, since pre-defined reports and objects (GUIs) at the application layer allow for immediate delivery of functionality. Oftentimes data from disparate data sources can be bound together through simple query languages and SQL, particularly if the application layer table and object functionality is built well enough.

When domain knowledge is incorporated into the business logic layer, the distinction between generic BI and COTS is obliterated. Instead, what we have is a hybrid approach that provides the domain specificity of COTS (“purpose built”), with the power of BI that reduces the response time between solution design and delivery. More and better data can also be accessed, establishing an environment of discovery-driven management.

Needless to say, properly designed Fourth Generation applications are perfectly suited to rapid application development and deployment approaches such as Agile. They also provide the integration potential, given the agnosticism to data, that Third Generation “purpose built” applications can only achieve through data transfer and reconciliation across separate applications that never truly achieve integration. Instead, Fourth Generation applications can subsume the specific “purpose built” functionality found in stand-alone applications and deliver it via a single platform that provides one source of truth, still allowing for different interpretations of the data through the application of differing analytical approaches.

So move it on over nice (third generation) dog, a big fat (fourth generation) dog is moving in.

Friday Hot Washup: Daddy Stovepipe sings the Blues, and Net Neutrality brought to you by Burger King

Daddy Stovepipe sings the Blues — Line and Staff Organizations (and how they undermine organizational effectiveness)

In my daily readings across the web I came upon this very well written blog post by Glen Alleman at his Herding Cats blog. The eternal debate in project management surrounds when done is actually done–and what is the best measurement of progress toward the completion of the end item application.

Glen rightly points to the specialization among SMEs in the PM discipline, and the differences between their methods of assessment. These centers of expertise are still aligned along traditional line and staff organizations that separate scheduling, earned value, system engineering, financial management, product engineering, and other specializations.

I’ve written about this issue before: information also follows these stove-piped pathways–multiple data streams with overlapping information that resist effective optimization and synergy because of the barriers between them. These barriers may be social or perceptual, and they then impose themselves upon the information systems that are constructed to support them.

The manner in which we face and interpret the world is the core basis of epistemology. When we develop information systems and analytical methodologies, whether we are consciously aware of it or not, we delve into the difference between justified belief and knowledge. I see the confusion of these positions in daily life and in almost all professions and disciplines. In fact, most of us find ourselves jumping from belief to knowledge effortlessly without being aware of this internal contradiction–and the corresponding reduction in our ability to accurately perceive reality.

The ability to overcome our self-imposed constraints is the key but, I think, our PM organizational structures must be adjusted to allow for the establishment of a learning environment in relation to data. The first step in this evolution must be the mentoring and education of a discipline that combines these domains. What this proposes is that no one individual need know everything about EVM, scheduling, systems engineering, and financial management. But the business environment today is such that, if the business or organization wishes to be prepared for the world ahead, it must train and transition personnel toward a multi-disciplinary project management competency.

I would posit, contrary to Glen’s recommendation, that no one discipline should claim to be the basis for cross-functional integration, only because doing so may be self-defeating. In the book Networks, Crowds, and Markets: Reasoning about a Highly Connected World, David Easley and Jon Kleinberg of Cornell show that our social systems are composed of complex networks, and that negative perceptions develop when the network is no longer considered to be in balance. This subtle and complex interplay of perceptions drives our ability to work together.

It also affects whether we will stay safely within the comfort zone of having our information systems tell us what we need to analyze, or whether we will apply a more expansive view, leveraging new information systems that are able to integrate ever-expanding sets of relevant data to give us a more complete picture of what constitutes “done.”

Hold the Pickle, Hold the Lettuce, Special Orders Don’t Upset Us: Burger King explains Net Neutrality

The original purpose of the internet was the free exchange of ideas and knowledge. Initially, under ARPANET–led by Lawrence Roberts and later Bob Kahn–the focus was on linking academic and research institutions so that knowledge could be shared, resulting in collaboration that would overcome geographical barriers. Later the Department of Defense, NASA, and other government organizations highly dependent on R&D were brought into the new internet community.

To some extent there still are pathways within what is now broadly called the Web to find and share such relevant information among these organizations. With the introduction of commercialization in the early 1990s, however, it has become increasingly hard to perform serious research.

For with the expansion of the internet to the larger world, the larger world’s dysfunctions and destructive influences also entered. Thus, the internet has transitioned from a robust First Amendment free speech machine to a place that also harbors state-sponsored psy-ops and propaganda. It has gone from a safe space for academic freedom and research to a place of organized sabotage, intrusion, theft, and espionage. It has transitioned from a highly organized professional community that hewed to ethical and civil discourse to one that harbors trolls, prejudice, hostility, bullying, and other forms of human dysfunction. Finally and most significantly, it has become dominated by commercial activity: by high tech giants that stifle innovation, and by social networking sites that, applying an extreme laissez-faire attitude, allow, magnify, and spread the more dysfunctional activities found on the web as a whole.

At least for those who still looked to the very positive effects of the internet there was net neutrality–the assurance that blogs like this one, the many others that I read on a regular basis, mainstream news, and scientific journals were still available without being “dollarized,” in the words of the naturalist John Muir.

Unfortunately this is no longer the case–or will no longer be the case, perhaps, when the legal dust settles. Burger King has placed its marker down, and it is a relevant and funny one. Please enjoy and have a great weekend.


Learning the (Data) — Data-Driven Management, HBR Edition

The months of December and January are usually full of reviews of significant events and achievements during the previous twelve months. Harvard Business Review eases the search for some of the best writing on data-driven transformation by occasionally collecting, in its OnPoint magazine, the best articles on a critical subject of interest to professionals in one volume. It is worth making part of your permanent data management library.

The volume begins with a very concise article by Thomas C. Redman with the provocative title “Does Your Company Know What to Do with All Its Data?” He goes on to list seven takeaways for optimizing the use of existing data that include many of the themes I have written about in this blog: better decision-making, innovation, what he calls “informationalize products”, and other significant effects. Most importantly, he refers to the situation of information asymmetry and how it provides companies and organizations with a strategic advantage that directly affects the bottom line–whether that be in negotiations with peers, contractual relationships, or market advantages. Aside from the OnPoint article, he also has some important things to say about corporate data quality. Highly recommended, and a good reason to implement systems that assure the fidelity of internal information systems.

Edd Wilder-James also covers a theme that I have hammered home in a number of blog posts in the article “Breaking Down Data Silos.” The issue here is access to data and the manner in which it is captured and transformed into usable analytics. His recommended approach to an often daunting task is to follow the path of least resistance, finding opportunities to break down silos and maximize data in order to apply advanced analytics. The article provides a necessary balm that counteracts the hype that often accompanies this topic.

Both of these articles are good entry points to the subject and perfectly positioned to prompt both thought and reflection on similar experiences. In my own day job I provide products that specifically address these business needs. Yet executives and management in all too many cases continue to be unaware of the economic advantages of data optimization, or of the manner in which continuing to support data silos limits their ability to effectively manage their organizations. There is no doubt that things are changing, and each day offers a new set of clients who are feeling their way in this new data-driven world, knowing that the promises of almost effort-free goodness and light made by highly publicized data gurus are not the reality of practitioners, who apply the detail work of data normalization and rationalization. At the end it looks like magic, but there is effort that needs to be expended up front to get to that state. In this physical universe, under the Second Law of Thermodynamics, there are no free lunches–energy must be borrowed from elsewhere in order to perform work. We can minimize these efforts through learning and the application of new technology, but managers cannot pretend that they do not have to understand the data they intend to use to make business decisions.

All of the longer form articles are excellent, but I am particularly impressed with the Leandro DalleMule and Thomas H. Davenport article entitled “What’s Your Data Strategy?” from the May-June 2017 issue of HBR. Oftentimes, when addressing big data at professional conferences and in visiting businesses, the conversation runs to the manner of handling the bulk of non-structured data. But as the article notes, less than half of an organization’s relevant structured data is actually used in decision-making. The most useful artifact, which I have permanently plastered at my workplace, is the graphic “The Elements of Data Strategy”, and I strongly recommend that any manager concerned with leveraging new technology to optimize data do the same. The graphic illuminates the defensive and offensive positions inherent in a cohesive data strategy, leading an organization to the state the authors describe: “In our experience, a more flexible and realistic approach to data and information architectures involves both a single source of truth (SSOT) and multiple versions of the truth (MVOTs). The SSOT works at the data level; MVOTs support the management of information.” Elimination of proprietary data silos, elimination of redundant data streams, and warehousing of data accessed through a number of analytical methods together achieve the SSOT state that provides the basis for an environment supporting MVOTs.
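
One minimal way to picture the SSOT/MVOT distinction is with database views: a single governed table at the data level, with multiple functional “versions of the truth” derived from it. The sketch below is purely illustrative; the table and view names are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Single source of truth: one governed table at the data level
con.execute("CREATE TABLE contracts (id TEXT, ceiling REAL, funded REAL, actuals REAL)")
con.execute("INSERT INTO contracts VALUES ('C-001', 1000.0, 800.0, 650.0)")

# Multiple versions of the truth: views tailored to different management questions
con.execute("CREATE VIEW finance_view AS "
            "SELECT id, funded - actuals AS remaining_funds FROM contracts")
con.execute("CREATE VIEW pm_view AS "
            "SELECT id, actuals / ceiling AS pct_of_ceiling_spent FROM contracts")

print(con.execute("SELECT * FROM finance_view").fetchall())  # [('C-001', 150.0)]
print(con.execute("SELECT * FROM pm_view").fetchall())       # [('C-001', 0.65)]
```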

The article “Why IT Fumbles Analytics” by Donald A. Marchand and Joe Peppard, from 2013, still rings true today. As with the article by Wilder-James cited above, the emphasis here is on the work necessary to ensure that new data and analytical capabilities succeed, but the emphasis shifts to “figuring out how to use the information (the new system) generates to make better decisions or gain deeper…insights into key aspects of the business.” The heart of managing this effort is to put into place a project organization, as well as systems and procedures, that will support the organizational transformation that occurs as a result of the explosion of new analytical capability.

The days of simply buying an off-the-shelf, silo-ed “tool” and automating a specific manual function are over, especially for organizations that wish to be effective, competitive, and more profitable in today’s data and analytical environment. A more comprehensive and collaborative approach is necessary. As with the DalleMule and Davenport article, there is a very useful graphic that contrasts traditional IT project approaches with Analytics and Big Data (or perhaps “Bigger” Data) projects. Though the prescriptions in the article assume an earlier concept of Big Data optimization focused on non-structured data, making some of them overkill, an implementation plan is essential in supporting the kind of transformation that will occur, and managers act at their own risk if they fail to take this effect into account.

All of the other articles in this OnPoint issue are of value. The bottom line, as I have written in the past, is to keep the focus on solving business challenges rather than buying the new bright shiny object. Put another way: in today’s business environment, the days when business decision-makers could afford to stay within their silo-ed comfort zone are ending very quickly, so they need to shift their attention to the solutions that address these new realities.

So why do this apart from the fancy term “data optimization”? Well, because there is a direct return-on-investment in transforming organizations and systems to data-driven ones. At the end of the day the economics win out. Thus, our organizations must be prepared to support and have a plan in place to address the core effects of new data-analytics and Big Data technology:

a. The management and organizational transformation that takes place when deploying the new technology, requiring proactive socialization of the changing environment, the teaching of new skill sets, new ways of working, and of doing business.

b. Supporting transformation from a sub-optimized silo-ed “tell me what I need to know” work environment to a learning environment, driven by what the data indicates, supporting the skills cited above that include intellectual curiosity, engaging domain expertise, and building cross-domain competencies.

c. A practical plan that teaches the organization how best to use the new capability through a practical, hands-on approach that focuses on addressing specific business challenges.