Ground Control from Major Tom — Breaking Radio Silence: New Perspectives on Project Management

Since I began this blog I have used it as a means of testing out and sharing ideas about project management and information systems, as well as occasional thoughts about music, the arts, and the meaning of wisdom.

My latest hiatus from writing came about because I was otherwise engaged in a different sort of writing–tech writing–and in exploring some mathematics related to my chosen vocation, aside from running a business and–you know–living life.  There are only so many hours in the day.  Furthermore, when one writes over time about any one topic, one tends to repeat oneself.  I needed to break that cycle so that I could concentrate on bringing something new to the table.  After all, it is not as if this blog attracts a massive audience–and purposely so.  The topics on which I write are highly specialized, and the members of the community who follow this blog and send comments tend to be specialized as well.  I air out thoughts here that are sometimes only vaguely conceived so that they can be further refined.

Now that that is out of the way, radio silence is ending until, well, the next contemplation or massive workload that turns into radio silence.

Over the past couple of months I’ve done quite a bit of traveling, and so I have some new perspectives and noted some trends that I would like to share, and which will in all likelihood form the basis of future, more in-depth posts.  Here is the list I have compiled:

a.  The time of niche analytical “tools” as acceptable solutions among forward-leaning businesses and enterprises is quickly drawing to a close.  Instead, more comprehensive solutions that integrate data across domains are taking the market and disrupting even large players that have not adapted to this new reality.  The economics are too strong to stay with the status quo.  In the past, the barrier to integrating more diverse and larger sets of data was the high cost of traditional BI, with its armies of data engineers and analysts providing marginal value that did not always square with the cost.  Now virtually any data can be accessed and visualized.  The best solutions provide pre-built domain knowledge for targeted verticals, and these will lead and win the day.

b.  Along these same lines, apps and services designed around the bureaucratic end-of-month chart submission process are running into the new paradigm among project management leaders: that this cycle is inadequate, inefficient, and ineffective.  The incentives are changing to reward actual project management in lieu of project administration.  The core fallacy of apps that provide standard charts based solely on users’ perceptions of the data is that they assume the PM domain already knows what it needs to see.  The new paradigm is instead to provide a range of options based on the knowledge that can be derived from the data.  Thus, while the new solutions still provide the standard charts and reports that have always informed management, knowledge discovery in databases (KDD) principles are opening up new perspectives in understanding project dynamics and behavior.

c.  Earned value is *not* the nexus of Integrated Project Management (IPM).  I’m sure many of my colleagues in the community will find this statement to be provocative, only because it is what they are thinking but have been hesitant to voice.  A big part of their hesitation is that the methodology is always under attack by those who wish to avoid accountability for program performance.  Thus, let me make a point about Earned Value Management (EVM) for clarity: it is an essential methodology in assessing project performance and the probability of meeting the constraints of the project budget.  It also contributes data essential to project predictive analytics.  What the data from a series of DoD studies (currently, sadly, unpublished) shows, however, is that it is planning (via an Integrated Master Plan) and scheduling (via an Integrated Master Schedule) that first tie together the essential elements of the project, and that first record the risk baked into the project.  Risk manifested in poorly tying contract requirements, technical performance measures, and milestones to the plan, and then manifested in poor execution, will first be recorded in schedule (time-based) performance.  This is especially true for firms that apply resource loading in their schedules.  By the time this risk translates into and is recorded in EVM metrics, the project management team is already performing risk handling and mitigation to blunt the impact on the performance management baseline (the money).  So this still raises the question: what is IPM?  I have a few ideas and will share them in other posts.
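For readers less familiar with the terminology, the basic EVM arithmetic behind indices like CPI and SPI is simple; note that SPI is denominated in cost, not time, which is exactly why time-based schedule performance registers risk first.  A minimal sketch in Python, with purely illustrative numbers:

```python
# Basic EVM arithmetic: a minimal sketch with illustrative numbers.
# PV/EV/AC correspond to the traditional BCWS/BCWP/ACWP quantities.

def evm_metrics(pv: float, ev: float, ac: float) -> dict:
    """Return the standard earned value variances and indices.

    pv -- planned value (BCWS): budgeted cost of work scheduled
    ev -- earned value (BCWP): budgeted cost of work performed
    ac -- actual cost (ACWP): actual cost of work performed
    """
    return {
        "cost_variance": ev - ac,      # CV > 0 means under cost
        "schedule_variance": ev - pv,  # SV > 0 means ahead of the plan
        "cpi": ev / ac,                # cost efficiency: value per dollar spent
        "spi": ev / pv,                # "schedule" efficiency, in cost terms
    }

# A project that planned $1.0M of work to date, performed $0.8M worth,
# and spent $1.1M doing it:
m = evm_metrics(pv=1_000_000, ev=800_000, ac=1_100_000)
print(round(m["cpi"], 2), round(m["spi"], 2))  # 0.73 0.8
```

By the time these indices dip, the schedule has usually been signaling the slippage for some time.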

d.  Along these lines, there is a need for a Schedule (IMS) Gold Card that provides the essential basis of measurement of programmatic risk during project execution.  I am currently constructing one in collaboration with colleagues and will put out a few ideas.

e.  Finally, there is still room for a lot of improvement in project management.  For all of the gurus, methodologies, consultants, body shops, and tools that are out there, according to PMI more than a third of projects fail to meet project goals, almost half fail to meet budget expectations, fewer than half finish on time, and almost half experience scope creep, which, I suspect, probably causes “failure” to be redefined and under-reported in their figures.  The assessment for IT projects is consistent with this report, with CIO.com reporting that more than half of IT projects fail in terms of meeting performance, cost, and schedule goals.  From my own experience and that of my colleagues, the need to solve the standard 20-30% slippage in schedule and a similar overrun in costs is an old refrain.  So too is the frustration that it takes 23 years to deploy a new aircraft.  A CPI or SPI of 0.5 (to use EVM terminology) is not an indicator of success.  What this indicates, instead, is that there need to be some adjustments and improvements in how we do business.  The first would be to adjust incentives to encourage and reward the identification of risk in project performance.  The second is to deploy solutions that effectively access and provide information to the project team so that they can address risk.  As with all of the points noted in this post, I have some other ideas in this area that I will share in future posts.

Onward and upward.

Days of Future Passed — Legacy Data and Project Parametrics

I’ve had a lot of discussions lately on data normalization, including being asked what constitutes normalization when dealing with legacy data, specifically in the field of project management.  A good primer can be found at About.com, but there are also very good older papers out on the web from various university IS departments.  The basic principles of data normalization today consist of finding a common location in the database for each value, reducing redundancy, properly establishing relationships among the data elements, and providing flexibility so that the data can be properly retrieved and further processed into intelligence in such a way that the objects produced possess significance.
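To make those principles concrete, here is a minimal sketch (in Python rather than SQL, with hypothetical field names and values) of removing redundancy from legacy rows and re-establishing the relationship through a key:

```python
# A minimal illustration of the normalization principles above: each fact
# gets one home, redundancy is removed, and the relationship is made
# explicit through a key. All names and values are hypothetical.

# Denormalized legacy rows: contractor details repeated on every record.
legacy = [
    {"task": "1.1", "contractor": "Acme Corp", "cage": "1ABC2", "bcws": 100},
    {"task": "1.2", "contractor": "Acme Corp", "cage": "1ABC2", "bcws": 250},
]

# Normalized: contractor facts live in one place; tasks reference the key.
contractors = {"1ABC2": {"name": "Acme Corp"}}
tasks = [{"task": r["task"], "cage": r["cage"], "bcws": r["bcws"]} for r in legacy]

# Retrieval re-joins the relationship without duplicating the data.
report = [(t["task"], contractors[t["cage"]]["name"], t["bcws"]) for t in tasks]
print(report)  # [('1.1', 'Acme Corp', 100), ('1.2', 'Acme Corp', 250)]
```

The same report comes back out, but the contractor fact now has exactly one place to be wrong, which is what makes legacy data trustworthy at scale.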

The reason why answering this question is so important is because our legacy data is of such a size and of such complexity that it falls into the broad category of Big Data.  The condition of the data itself provides wide variations in terms of quality and completeness.  Without understanding the context, interrelationships, and significance of the elements of the data, the empirical approach to project management is threatened, since being able to use this data for purposes of establishing trends and parametric analysis is limited.

A good paper that deals with this issue was authored by Alleman and Coonce, though it was limited to Earned Value Management (EVM).  I would argue that EVM, especially in the types of industries in which the discipline is used, is pretty well structured already.  The challenge is in the other areas that are probably of more significance in getting a fuller understanding of what is happening in the project: schedule, risk, and technical performance measures.

In looking at the Big Data that has been normalized to date–and I have participated with others in putting a significant dent in this area–it is apparent that processes in these other areas lack discipline, consistency, completeness, and veracity.  By normalizing data in sub-specialties that have experienced an erosion in enforcing standards of quality and consistency, technology becomes a driver for process improvement.

A greybeard in IT project management once said to me (and I am not long in joining that category): “Data is like water, the more it flows downstream the cleaner it becomes.”  What he meant is that the more that data is exposed in the organizational stream, the more it is questioned and becomes a part of our closed feedback loop: constantly being queried, verified, utilized in decision making, and validated against reality.  Over time more sophisticated and reliable statistical methods can be applied to the data, especially if we are talking about performance data of one sort or another, that takes periodic volatility into account in trending and provides us with a means for ensuring credibility in using the data.
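One simple, commonly used way to trend performance data while damping periodic volatility is an exponentially weighted moving average.  This is only a sketch of the idea, with an invented series; it is not a claim about any particular organization's methods:

```python
# Exponentially weighted moving average: one simple way to trend noisy
# periodic performance data. The alpha value and series are illustrative.

def ewma(series, alpha=0.3):
    """Smooth a series; higher alpha weights recent observations more."""
    smoothed = []
    s = series[0]  # seed with the first observation
    for x in series:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

# A volatile monthly CPI series and its smoothed trend:
cpi = [0.95, 1.10, 0.88, 1.05, 0.90]
trend = ewma(cpi)
print([round(v, 3) for v in trend])
```

The smoothed series answers "where is performance heading?" without over-reacting to any single reporting period.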

In my last post on Four Trends in Project Management, I posited that the question wasn’t more or less data but utilization of data in a more effective manner, and identifying what is significant and therefore “better” data.  I recently heard this line repeated back to me as a means of arguing against providing data.  This conclusion was a misreading of what I was proposing.  One level of reporting data in today’s environment is no more work than reporting on any other particular level of a project hierarchy.  So cost is no longer a valid point for objecting to data submission (unless, of course, the one taking that position must admit to the deficiencies in their IT systems or the unreliability of their data).

Our projects must be measured against the framing assumptions in which they were first formed, as well as the established measures of effectiveness, measures of performance, and measures of technical achievement.  In order to view these factors one must have access to data originating from a variety of artifacts: the Integrated Master Schedule, the Schedule and Cost Risk Analysis, and the systems engineering/technical performance plan.  I would propose that project financial execution metrics are also essential in getting a complete, integrated, view of our projects.

There may be other supplemental data that is necessary as well.  For example, the NDIA Integrated Program Management Division has a proposed revision to what is known as the Integrated Baseline Review (IBR).  For the uninitiated, this is a process in which both the supplier and government customer project teams can come together, review the essential project artifacts that underlie project planning and execution, and gain a full understanding of the project baseline.  The reporting systems that identify the data that is to be reported against the baseline are identified and verified at this review.  But there are also artifacts submitted here that contain data that is relevant to the project and worthy of continuing assessment, precluding manual assessments and reviews down the line.

We don’t yet know the answer to these data issues and won’t until all of the data is normalized and analyzed.  Then the wheat can be separated from the chaff, and a more precise set of data identified for submittal, normalized, and placed in an analytical framework to give us timely information, so that project stakeholders can make decisions in handling any risks that manifest themselves during the window in which they can be handled (or determine that they cannot be handled).  As the farmer says in the Chinese proverb:  “We shall see.”

Synchronicity — What is proper schedule and cost integration?

Much has been said about the achievement of schedule and cost integration (or lack thereof) in the project management community.  Much of it consists of hand waving and magic asterisks that hide the significant reconciliation that goes on behind the scenes.  An intellectually honest approach that does not use the topic as a means of promoting a proprietary solution is the paper authored by Rasdorf and Abudayyeh back in 1991, entitled “Cost and Schedule Control Integration: Issues and Needs.”

It is worthwhile revisiting this paper, I think, because it was authored in a world not yet fully automated, and so is immune to the software tool-specific promotion that oftentimes dominates the discussion.  In their paper they outlined several approaches to breaking down cost and work in project management in order to provide control and track performance.  One of the most promising methods that they identified at the time was the unified approach that had originated in aerospace, in which a work breakdown structure (WBS) is constructed based on discrete work packages in which budget and schedule are unified at a particular level of detail to allow for full control and traceability.

The concept of the WBS and its interrelationship with the organizational breakdown structure (OBS) has become much more sophisticated over the years, but there has been a barrier that has prevented this ideal from being fully achieved.  Ironically, it is the introduction of technology that is the culprit.

During the first phase of digitization that occurred in the project management industry not too long after Rasdorf and Abudayyeh published their paper, there was a boom in dot-coms.  For businesses and organizations the practice was to find a specialty or niche and fill it with an automated solution that took over the laborious tasks of calculation previously performed by human intervention.  (I still have both my slide rule and my first scientific calculator hidden away somewhere, though I have thankfully wiped square root tables from my memory.)

For those of us who worked in project and acquisition management, our lives were built around the 20th century concept of division of labor.  In PM this meant we had cost analysts, schedule analysts, risk analysts, financial analysts and specialists, systems analysts, engineers broken down by subspecialties (electrical, mechanical, systems, aviation) and sub-subspecialties (Naval engineers, aviation, electronics and avionics, specific airframes, software, etc.).  As a result, the first phase of digitization followed the pathway of the existing specialties, finding niches to inhabit, which provided a good, steady, and secure living to software companies and developers.

For project controls, much of this infrastructure remains in place.  There are entire organizations today that will construct a schedule for a project using one set of specialists and the performance management baseline (PMB) using another, and then reconcile the two, not just in the initial phase of the project but across its entire life.  From the standpoint of an integrated structure that brings together cost and schedule, this makes no practical sense.  From a business efficiency perspective it is an unnecessary cost.

As much as it is cited by many authors and speakers, the Coopers & Lybrand with TASC, Inc. paper entitled “The DoD Regulatory Cost Premium” is impossible to find on-line.  Despite its widespread citation, the study demonstrated that by the time one got down to the third “cost” driver attributed to regulatory requirements, the projected “savings” was a fraction of 1% of the total contract cost.  The interesting issue not faced by the study is: were the tables turned, how much would such contracts be reduced if all management controls in the company were reduced or eliminated, since they contribute as elements of overhead and G&A?  More to the point here, if the processes applied by industry were optimized, what would the cost savings be?

A study conducted by the RAND Corporation in 2006 accurately points out that a number of studies had been conducted since 1986, all of which promised significant cost savings by focusing on what were perceived as drivers of unnecessary cost.  The Department of Defense, and the military services in particular, took the Coopers & Lybrand study very seriously because of its methodology, but achieved minimal savings against those promised.  Of course, the various studies do not clearly articulate the cost risk associated with removing the marginal cost of oversight and regulation.  Given our renewed experience with lack of regulation in the mortgage and financial sectors of the economy, which brought about the worst economic and financial collapse since 1929, one may look at these various studies in a new light.

The RAND study outlines the difficulties in the methodologies and conclusions of the studies undertaken, especially the acquisition reforms initiated by DoD and the military services as a result of the Coopers & Lybrand study.  But how, you may ask, does this relate to cost and schedule integration?

The present means that industry uses in many places is a sub-optimized approach to project management, particularly as it applies to cost and schedule integration, which in reality consists of physical cost and schedule reconciliation.  A system that is clearly one entity is split into two separate entities, constructed separately, and then adjusted through manual intervention, which defeats the purpose of automation.  This may be common practice, but it is not best practice.

Government policy, which has pushed compliance to the contractor, oftentimes rewards this sub-optimization and provides little incentive to change the status quo.  Software manufacturers who are wedded to old technologies are all too willing to promote the status quo–appropriating the term “integration” while, in reality, offering interfaces and workarounds after the fact.  Personnel residing in line and staff positions defined by the mid-20th century approach of division of labor are all too happy to continue operating using outmoded methods and tools.  Paradoxically, these are the same personnel in industry who would never advocate using outmoded airframes, jet engines, avionics, or ship types.

So it is time to stop rewarding sub-optimization.  The first step in doing this is through the normalization of data from these niche proprietary applications and “rewiring” them at the proper level of integration so that the systemic faults can be viewed by all stakeholders in the oversight and regulatory chain.  Nothing seems to be more effective in correcting a hidden defect than some sunshine and a fresh set of eyes.

If industry and government are truly serious about reforming acquisition and project management in order to achieve significant cost savings in the face of tight budgets and increasing commitments due to geopolitical instability, then systemic reforms from the bottom up are the means to achieve the goal; not the elimination of controls.  As John Kennedy once said in paraphrasing Chesterton, “Don’t take down a fence unless you know why it was put up.”  The key is not to undermine the strength and integrity of the WBS-based approach to project control and performance measurement (or to eliminate it), but to streamline it so that it achieves its ideal as closely as our inherently faulty tools and methods will allow.

 

Go With the Flow — What is a Better Indicator: Earned Value or Cash Flow?

A lot of ink has been devoted to what constitutes “best practices” in PM but oftentimes these discussions tend to get diverted into overtly commercial activities that promote a set of products or trademarked services that in actuality are well-trod project management techniques given a fancy name or acronym.  We see this often with “road shows” and “seminars” that are blatant marketing events.  This tends to undermine the desire of PM professionals to find out what really gives us good information by both getting in the way of new synergies and by tying “best practices” to proprietary solutions.  All too often “common practice” and “proprietary limitations” pass for “best practice.”

Recently I have been involved in discussions and the formulation of guides on indicators that tell us something important regarding the condition of the project throughout its life cycle.  All too often the conversation settles on earned value with the proposition that all indicators lead back to it.  But this is an error since it is but one method for determining performance, which looks solely at one dimension of the project.

There are, after all, other obvious processes and plans that measure different dimensions of project performance.  The first such example is schedule performance.  A few years ago there was an attempt to more closely tie schedule and cost as an earned value metric, which was and is called “earned schedule.”  In particular, it had many strengths against what was posited as its alternative: schedule variance as calculated by earned value.  But both are misnomers, even when earned schedule is offered as an alternative to earned value while at the same time adhering to its methods.  Neither measures schedule, that is, time-based performance against a plan consisting of activities.  The two artifacts can never be reconciled and reduced to one metric because they measure different things.  The statistical measure that would result would have no basis in reality, adding an unnecessary statistical layer that obfuscates instead of clarifies the underlying condition.  So what do we look at, you may ask?  Well–the schedule.  The schedule itself contains many opportunities to measure its dimension in order to develop useful metrics and indicators.

For example, a number of these indicators have been in place for quite some time: Baseline Execution Index (BEI), Critical Path Length Index (CPLI), early start/late start, early finish/late finish, bow-wave analysis, hit-miss indices, etc.  These all can be found in the literature, such as here and here and here.
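As one concrete example, the Baseline Execution Index is straightforward to compute from schedule data.  The sketch below follows the commonly cited definition (tasks actually completed divided by tasks baselined to finish by the status date); the task records and dates are hypothetical:

```python
# Baseline Execution Index (BEI), per the commonly cited definition:
# completed tasks / tasks baselined to finish on or before the status
# date. Task data below is invented for illustration.
from datetime import date

def baseline_execution_index(tasks, status_date):
    due = [t for t in tasks if t["baseline_finish"] <= status_date]
    done = [t for t in tasks
            if t["actual_finish"] and t["actual_finish"] <= status_date]
    return len(done) / len(due) if due else None

tasks = [
    {"baseline_finish": date(2024, 1, 10), "actual_finish": date(2024, 1, 12)},
    {"baseline_finish": date(2024, 1, 15), "actual_finish": date(2024, 1, 14)},
    {"baseline_finish": date(2024, 1, 20), "actual_finish": None},  # slipped
    {"baseline_finish": date(2024, 1, 25), "actual_finish": date(2024, 1, 30)},
    {"baseline_finish": date(2024, 2, 10), "actual_finish": None},  # not yet due
]
print(baseline_execution_index(tasks, date(2024, 1, 31)))  # 0.75
```

A BEI well below 1.0 says the program is accomplishing baselined work more slowly than planned, which is exactly the time-based signal that cost metrics pick up only later.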

Typically, then, the first step toward integration is tying these different metrics and indicators of the schedule and EVM dimensions at an appropriate level through the WBS or other structures.  The juxtaposition of these differing dimensions, particularly in a grid or Gantt view, gives us the ability to determine whether there is a correlation between the various indicators.  We can then determine–over time–the strength and consistency of the various correlations, and take that one step further to conclude which ones point to causation.  Only then do we get to “best practice.”  This hard work to get to best practice is still in its infancy.
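The correlation step can be as simple as computing a Pearson coefficient between time series of schedule and cost indicators.  The series below are invented for illustration, and, as noted, a strong correlation by itself does not establish causation:

```python
# Sketch of the correlation step: a Pearson coefficient between monthly
# schedule (BEI) and cost (CPI) indicator series. The series are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

bei = [0.95, 0.90, 0.85, 0.78]   # schedule dimension, declining
cpi = [1.02, 0.98, 0.93, 0.88]   # cost dimension, declining in step
print(round(pearson(bei, cpi), 3))  # close to 1: the dimensions move together
```

Tracking such coefficients over many projects and many periods is what turns an anecdotal hunch ("schedule slips show up in cost later") into an evidence-based practice.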

But this is only the first step toward “integrated” performance measurement.  There are other areas of integration that are needed to give us a multidimensional view of what is happening in terms of project performance.  Risk is certainly one additional area–and a commonly heard one–but I want to take this a step further.

Among my various jobs in the past was business management within a project management organization.  This usually translated into financial management, but not traditional financial management that focuses on the needs of the enterprise.  Instead, I am referring to project financial management, which is a horse of a different color, since it is focused at the micro-programmatic level on both schedule and resource management, given that planned activities and the resources assigned to them must be funded.

Thus, having the funding in place to execute the work is the antecedent and, I would argue, the overriding factor to project success.  Outside of construction project management, where the focus on cash-flow is a truism, we see this play out in publicly funded project management through the budget hearing process.  Even when we are dealing with multiyear R&D funding the project goes through this same process.  During each review, financial risk is assessed to ensure that work is being performed and budget (program) is being executed.  Earned value will determine the variance between the financial plan and the value of the execution, but the level of funding–or cash flow–will determine what gets done during any particular period of time.  The burn rate (expenditure) is the proof that things are getting done, even if the value may be less than what is actually expended.
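A toy calculation illustrates the distinction: burn rate tells you how long the available funding will last, while earned value tells you what the spending actually bought.  All figures are hypothetical:

```python
# Burn rate vs. earned value: expenditure proves activity is occurring,
# while earned value prices what that activity accomplished. All figures
# are hypothetical, in $K.

actuals = [120, 135, 150]   # monthly expenditures (the burn)
earned = [100, 110, 120]    # monthly earned value of work performed

burn_rate = sum(actuals) / len(actuals)       # average monthly spend
realization = sum(earned) / sum(actuals)      # value per dollar spent (~CPI)
funds_remaining = 500                         # budget authority still available
runway_months = funds_remaining / burn_rate   # how long the funding lasts

print(f"burn {burn_rate:.0f}K/mo, realization {realization:.2f}, "
      f"runway {runway_months:.1f} months")
# Whatever the baseline says, only about 3.7 months of work can be funded
# at this burn rate: cash flow bounds what gets done.
```

Earned value flags the inefficiency (about $0.81 of value per dollar spent here), but it is the runway that decides what actually happens next period.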

In public funding of projects, especially in A&D, the proper “color” of money (R&D, Operations & Maintenance, etc.) available at the right time is oftentimes a better predictor of project success than metrics and indicators that assume the planned budget, schedule, and resources will be provided to support the baseline.  But things change, including the appropriation and release of funds.  As a result, any “best practice” that confines itself to only one or two of the dimensions of project assessment fails to meet the definition.

In the words of Gus Grissom in The Right Stuff, “No bucks, no Buck Rogers.”

 

I Can’t Get No (Satisfaction) — When Software Tools Go Bad

Another article I came across a couple of weeks ago that my schedule prevented me from highlighting was by Michelle Symonds at PM Hut entitled “5 Tell-Tale Signs That You Need a Better Project Management Tool.”  According to Ms. Symonds, among these signs are:

a.  Additional tools are needed to achieve the intended functionality apart from the core application;

b.  Technical support is poor or nonexistent;

c.  Personnel in the organization still rely on spreadsheets to extend the functionality of the application;

d.  Training on the tool takes more time than training the job;

e.  The software tool adds work instead of augmenting or facilitating the achievement of work.

I have seen situations where all of these conditions are at work but the response, in too many cases, has been “well we put so much money into XYZ tool with workarounds and ‘bolt-ons’ that it will be too expensive/disruptive to change.”  As we have advanced past the first phases of digitization of data, it seems that we are experiencing a period where older systems do not quite match up with current needs, but that software manufacturers are very good at making their products “sticky,” even when their upgrades and enhancements are window dressing at best.

In addition, the project management community, particularly the portion focused on large projects in excess of $20M, is facing the challenge of an increasingly older workforce.  Larger economic forces at play lately have exacerbated this condition.  Weak aggregate demand and, on the public side, austerity ideology combined with sequestration have created a situation where highly qualified people face a job market characterized by relatively high unemployment, flat wages and salaries, depleted private retirement funds, and constant attacks on social insurance related to retirement.  Thus, people are hanging around longer, which limits opportunities for newer workers to grow into the discipline.  Given these conditions, it is very risky to one’s employment prospects to suddenly forge a new path.  People in the industry whom I have known for many years–and who were always the first to engage with new technologies and capabilities–are very hesitant to do so now.  Some of this is well founded through experience and consists of healthy skepticism: we all have come across snake oil salesmen in our dealings at one time or another, and even the best products do not always make it, due to external forces or to the fact that brilliant technical people oftentimes are just not very good at business.

But these conditions also tend to hold back the ability of the enterprise to implement efficiencies and optimization measures that otherwise would be augmented and supported by appropriate technology.  Thus, in addition to those listed by Ms. Symonds, I would include the following criteria to use in making the decision to move to a better technology:

a.  Sunk and prospective costs.  Understand and apply the concepts of sunk cost and prospective cost.  The first is the cost that has been expended in the past, while the latter focuses on the investment necessary for future growth, efficiencies, productivity, and optimization.  Having made investments to improve a product in the past is not an argument for continuing to invest in the product in the future that trumps other factors.  Obviously, if the cash flow is not there an organization is going to be limited in the capital and other improvements it can make but, absent those considerations, sunk cost arguments are invalid.  It is important to invest in those future products that will facilitate the organization achieving its goals in the next five or ten years.

b.  Sustainability.  The effective life of the product must be understood, particularly as it applies to an organization’s needs.  Some of this overlaps the points made by Ms. Symonds in her article, but it is meant to apply in a more strategic way.  Every product, even software, has a limited productive life, but my concept here goes to what Glen Alleman pointed out in his blog as “bounded applicability.”  Will the product require more effort in any form where the additional effort provides a diminishing return?  For example, I have seen cases where software manufacturers, in order to defend market share, make trivial enhancements such as adding a chart or graph in order to placate customer demands.  The reason for this should be, but is not always, obvious.  Oftentimes more substantive changes cannot be made because the product was built on an earlier-generation operating environment or structure.  Thus, in order to replicate the additional functionality found in newer products, the application requires a complete rewrite.  All of us operating in this industry have seen this: a product that has been a mainstay for many years begins to lose market share.  The decision, when it is finally made, is to totally reengineer the solution, but not as an upgrade to the original product, arguing that it is a “new” product.  This is true in terms of the effort necessary to keep the solution viable, but it also completely undermines justifications based on sunk costs.

c.  Flexibility.  As stated previously in this blog, the first generation of digitization mimicked those functions that were previously performed manually.  The applications were also segmented and specialized based on traditional line and staff organizations, and specialties.  Thus, for project management, we have scheduling applications for the scheduling discipline (such as it is), earned value engines for the EV discipline, risk and technical performance applications for risk specialists and systems engineers, analytical software for project and program analysts, and financial management applications that subsumed project and program management financial management professionals.  This led to the deployment of so-called best-of-breed configurations, where a smorgasbord of applications or modules were acquired to meet the requirements of the organization.  Most often these applications had and have no direct compatibility, requiring entire staffs to reconcile data after the fact once that data was imported into a proprietary format in which it could be handled.  Even within so-called ERP environments under one company, direct compatibility at the appropriate level of the data being handled escaped the ability of the software manufacturers, requiring “bolt-ons” and other workarounds and third party solutions.  This condition undermines sustainability, adds a level of complexity that is hard to overcome, and adds a layer of cost to the life-cycle of the solutions being deployed.

The second wave, which sought to address some of these limitations, focused on data flexibility using cubes, hard-coding of relational data and mapping, and data mining solutions: so-called Project Portfolio Management (PPM) and Business Intelligence (BI).  The problem is that, in the first instance, PPM was simply another layer added to address management concerns, while early BI systems froze single points of failure into hard-coded, deployed solutions.

A flexible system is one that leverages new advances in software operating environments to solve more than one problem.  This, of course, undermines the prevailing financial model in software, where the pattern has been to build one solution to address one problem based on a specialty.  Such a system provides internal flexibility, that is, it allows for the application of objects and conditional formatting without hardcoding, pushing what previously had to be accomplished by coders down to the customer’s administrator or user level; and external flexibility, where the same application can address, say, EVM, schedule, risk, financial management, KPIs, technical performance, and stakeholder reporting, all in the same or in multiple deployed environments, without the need for hardcoding.  In this case the operating environment and any augmented code provide a flexible environment to the customer that allows one solution to displace multiple “best-of-breed” applications.
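
As a rough sketch of what internal flexibility looks like in practice (the metric names and thresholds below are hypothetical, invented for illustration), the conditional-formatting rules live in data that an administrator can edit, not in code that must be rewritten:

```python
# Rules an administrator could edit without touching code; the
# metric names and thresholds here are hypothetical, ordered from
# most severe to least severe so the first match per metric wins.
RULES = [
    {"metric": "cpi", "below": 0.90, "flag": "red"},
    {"metric": "cpi", "below": 0.95, "flag": "yellow"},
    {"metric": "spi", "below": 0.90, "flag": "red"},
]

def apply_rules(record, rules):
    """Return a flag per metric: first matching rule, else 'green'."""
    flags = {}
    for rule in rules:
        value = record.get(rule["metric"])
        if value is not None and rule["metric"] not in flags:
            if value < rule["below"]:
                flags[rule["metric"]] = rule["flag"]
    for rule in rules:
        flags.setdefault(rule["metric"], "green")
    return flags

print(apply_rules({"cpi": 0.93, "spi": 1.02}, RULES))
# {'cpi': 'yellow', 'spi': 'green'}
```

Changing a threshold or adding a metric is then an edit to the rule table, not a release cycle, which is exactly the shift from coder to administrator described above.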

This flexibility should apply not only vertically but also horizontally, where data can be hierarchically organized to allow not only for drill-down, but also for roll-up.  Data in this environment is exposed discretely, providing to any particular user that data, aggregated as appropriate, based on their role, responsibility, or need to know.
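
A minimal sketch of the roll-up half of that idea, assuming a hypothetical dotted WBS-style numbering for cost records:

```python
from collections import defaultdict

def roll_up(records):
    """Aggregate leaf-level costs up a dotted WBS-style hierarchy.

    Each record is (wbs_id, cost); '1.2.1' rolls into '1.2' and '1'.
    """
    totals = defaultdict(float)
    for wbs_id, cost in records:
        parts = wbs_id.split(".")
        # Credit the cost to the leaf and to every ancestor level,
        # so any node can be read as a roll-up of its children.
        for depth in range(1, len(parts) + 1):
            totals[".".join(parts[:depth])] += cost
    return dict(totals)

leaves = [("1.1", 100.0), ("1.2.1", 40.0), ("1.2.2", 60.0)]
print(roll_up(leaves))  # '1' -> 200.0, '1.2' -> 100.0, ...
```

Drill-down is then just the inverse read of the same keyed structure, filtered by prefix and exposed according to the user’s role or need to know.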

d.  Interoperability and open compatibility.  A condition of the “best-of-breed” deployment environment is that it allows sub-optimization to trump organizational goals.  The most recent example that I have seen of this is one in which the Integrated Master Schedule (IMS) and Performance Measurement Baseline (PMB) were obviously authored by different teams in different locations who, most likely, were at war with one another when they published these essential, interdependent project management artifacts.

But in terms of sustainability, the absence of interoperability and open compatibility has created untenable situations.  In the example of PMB and IMS information above, in many cases a team of personnel must be engaged every month to reconcile the obvious disconnects between schedule activities and control accounts in order to ensure traceability in project management and performance.  Surely there should be no economic rewards for such behavior; absent those rewards, I believe no business would operate in that manner.
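
To illustrate what that monthly reconciliation amounts to, here is a minimal sketch, with hypothetical IDs and data structures, of cross-checking schedule activities against control accounts:

```python
def reconcile(schedule_activities, control_accounts):
    """Cross-check IMS activities against PMB control accounts.

    `schedule_activities` maps an activity ID to the control-account
    code it claims to charge to; `control_accounts` lists the codes
    the PMB actually defines (both structures are hypothetical).
    """
    ca_codes = set(control_accounts)
    # Activities pointing at a control account the PMB doesn't know.
    orphan_activities = {act for act, ca in schedule_activities.items()
                         if ca not in ca_codes}
    # Control accounts with no supporting schedule activity at all.
    empty_accounts = ca_codes - set(schedule_activities.values())
    return orphan_activities, empty_accounts

ims = {"A100": "CA-01", "A110": "CA-01", "A200": "CA-09"}
pmb = ["CA-01", "CA-02"]
print(reconcile(ims, pmb))  # ({'A200'}, {'CA-02'})
```

The check itself is trivial; the expense comes from doing it by hand, after the fact, across incompatible systems, month after month.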

Thus, interoperability in this case is to be able to deal with data in its native format without proprietary barriers that prevent its full use and exploitation to the needs and demands of the customer organization.  Software that places its customers in a corner and ties their hands in using their own business information has, indeed, worn out its welcome.

The reaction of customer organizations to the software industry’s attempts to bind them to proprietary solutions has been most marked in the public sector, and most prominently in the U.S. Department of Defense.  In the late 1990s the first wave was to ensure that performance management data centered on earned value was submitted in a non-proprietary format known as the ANSI X12 839 transaction set.  Since that time DoD has specified the use of the UN/CEFACT XML D09B standard for cost and schedule information, and it appears that other, previously stove-piped data will be included in that standard in the future.  This solution requires data transfer, but it ensures that the data can be normalized regardless of the underlying source application.  It is especially useful for stakeholder reporting or for data sharing in prime and sub-contractor relationships.
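
As an illustration of the normalization idea only (the element names below are invented for this sketch; the real UN/CEFACT schema is far richer), flattening XML-delivered cost data into source-neutral rows might look like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical element names for illustration only -- not the
# actual UN/CEFACT XML D09B schema.
SAMPLE = """
<CostReport>
  <ControlAccount code="CA-01">
    <BCWS>1200</BCWS><BCWP>1100</BCWP><ACWP>1300</ACWP>
  </ControlAccount>
  <ControlAccount code="CA-02">
    <BCWS>800</BCWS><BCWP>800</BCWP><ACWP>750</ACWP>
  </ControlAccount>
</CostReport>
"""

def normalize(xml_text):
    """Flatten the XML into rows any downstream tool can consume."""
    root = ET.fromstring(xml_text)
    rows = []
    for ca in root.iter("ControlAccount"):
        rows.append({
            "control_account": ca.get("code"),
            "bcws": float(ca.findtext("BCWS")),
            "bcwp": float(ca.findtext("BCWP")),
            "acwp": float(ca.findtext("ACWP")),
        })
    return rows

for row in normalize(SAMPLE):
    print(row["control_account"], row["bcwp"] - row["acwp"])  # cost variance
```

Once the rows are in a neutral form, the identity of the source application no longer matters to anyone downstream, which is the point of the standard.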

It is also useful for pushing for improvement in the disciplines themselves, driving professionalism.  For example, in today’s project management environment, while the underlying architecture of earned value management and risk data is fairly standard, reflecting a cohesiveness of practice among its practitioners, schedule data tends to be disorganized, with much variability in how common elements are kept and reported.  This also reflects much of the state of the scheduling discipline, where an almost “anything goes” mentality seems to be in play, reflecting not so much the realities of scheduling practice–which are pretty well established and uniform–as the lack of knowledge and professionalism on the part of schedulers, who are tied to the limitations and vagaries of their scheduling application of choice.

But, more directly, interoperability also includes the ability to access data (as opposed to application interfacing, data mining, hard-coded cubes, and data transfer) regardless of the underlying database, application, or structured data source.  Early attempts to achieve interoperability and open compatibility utilized ODBC, but newer operating environments now leverage improved OLE DB and other enhanced methods.  This ability, properly designed, also allows for the deployment of transactional environments in which two-way communication is possible.
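
As a sketch of what database-neutral access buys (shown with Python’s DB-API convention and sqlite3 as a stand-in; an ODBC driver such as pyodbc would plug into the same calling code), a small helper keeps the consumer indifferent to the source:

```python
import sqlite3

def fetch_rows(connect, query, params=()):
    """Run a query through any DB-API 2.0 driver.

    `connect` could be the sqlite3 factory below or, say,
    pyodbc.connect with an ODBC connection string -- the calling
    code stays the same, which is the point of neutral access.
    """
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute(query, params)
        cols = [d[0] for d in cur.description]
        return [dict(zip(cols, row)) for row in cur.fetchall()]
    finally:
        conn.close()

def demo_db():
    # In-memory stand-in for whatever database the source
    # application happens to use (table and fields hypothetical).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tasks (id TEXT, pct_complete REAL)")
    conn.executemany("INSERT INTO tasks VALUES (?, ?)",
                     [("A100", 0.5), ("A110", 1.0)])
    return conn

print(fetch_rows(demo_db, "SELECT id FROM tasks WHERE pct_complete < ?",
                 (1.0,)))  # [{'id': 'A100'}]
```

Swapping the backing store then means swapping the connection factory, not rewriting every consumer of the data.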

A new reality.  Given these new capabilities, I think that we are entering a new phase in software design and deployment, in which the role of the coder in controlling the UI is reduced.  In addition, given that the large software companies have continued to support a system that ties customers to proprietary solutions, I do not believe that the future of software lies in open source, as so many prognosticators stated just a few short years ago.  Instead, I propose that the applications that will win are those that behave like open source while rewarding those who innovate and provide maximum value, sustainability, flexibility, and interoperability to the customer.

Note:  This post was edited for clarity and grammatical errors from the original.


Don’t Be Cruel — Contract Withholds and the Failure of Digital Systems

Recent news over at Breaking Defense headlined a $25.7M withhold from Pratt & Whitney for the F135 engine.  This is the engine for the F35 Lightning II aircraft, also known as the Joint Strike Fighter (JSF).  The reason for the withhold in this particular case was an insufficient cost and schedule business system in place at the company to support project management.

The enforcement of withholds for deficiencies in business systems was instituted in August 2011.  These business systems include six areas:

  • Accounting
  • Estimating
  • Purchasing
  • EVMS (Earned Value Management System)
  • MMAS (Material Management and Accounting System)
  • Government Property

As of November 30, 2013, $19 million had been held back from BAE Systems Plc, $5.2 million from Boeing Co., and $1.4 million from Northrop Corporation.  These were on the heels of a massive $221 million held back from Lockheed Martin’s aeronautics unit for its deficient earned value management system.  In total, fourteen companies were impacted by withholds last year.

For those unfamiliar with the issue, these withholds may seem to be a reasonable mechanism for ensuring that sufficient business systems are in place so that there is traceability in the expenditure of government funds by contractors.  After all, given the disastrous state of affairs in Iraq and Afghanistan, where there was a massive loss of accountability by contractors, many senior personnel in DoD felt that contracting officers needed to be given teeth, and what better way to do this than through financial withholds?  The rationale is that if the systems are not adequate, then the information originating from these systems is not credible.

This is probably a good approach for the acquisition of wartime goods and services, but it doesn’t seem to fit the reality of the project management environment in which government contracting operates.  The strongest objections to the rule, I think, came from the legal community, most notably from the Bar Association’s Section of Public Contract Law.  Among these objections was that the amount of the withhold is based on an arbitrary percentage within the DFARS rule.  Another was that the defects in the systems are in most cases disconnected from actual performance, and so redirect attention and resources away from the contractual obligation at hand.

These objections were made prior to the rule’s acceptance.  But now that the rule is being enforced, the more important question is the effect of the withholds on project management.  My own anecdotal experience from having been a business manager on a program management staff is that the key to project success is oftentimes determined by cash flow.  While factors internal to the project, such as the effective construction of the integrated master schedule (IMS) and performance measurement baseline (PMB), risk identification and handling, and performance tracking against these plans, are the primary focus of project integrity, all too often the underlying financial constraints within which the project must operate are treated as a contingent factor.  If the financial constraints on our capabilities are severe, then the best plan in the world will not achieve the desired results, because it fails to be realistic.

The principles that apply to any entrepreneurial enterprise also apply to complex projects.  It is true that large companies do have significant cash reserves and that these reserves have grown significantly since the 2007-2010 depression.  But a major program requires a large infusion of resources that constitutes a significant barrier to entry, and so such reserves contribute to the financial stability necessary to undertake such efforts.  Profit is not realized on every project.  This may sound surprising to those unfamiliar with public administration, but it is the case because it is sometimes worth breaking even or taking a slight loss so as not to lose essential organizational knowledge.  It takes years to develop an engineer who understands the interrelationships of the key factors in a jet fighter: the tradeoffs between speed, maneuverability, weight, and stress from the operational environment, like taking off from and landing on a large metal aircraft carrier that travels on salt water.  This cannot be taught in college, nor can it be replaced if the knowledge is lost due to budget cuts, pay freezes, and sequestration.  Oftentimes, because of their size and complexity, project start-up costs must be financed using short-term loans, adding risk when payments are delayed and work is interrupted.  The withhold rule adds an additional, if not unnecessary, dimension of risk to project success.

Given that most of the artifacts deemed necessary to handle and reduce risk are produced in a collaborative environment by the contractor-government project team through the Integrated Baseline Review (IBR) process and system validation–as well as pre-award certifications–it seems that there is no clear line of demarcation that places the onus for inadequate business systems solely on the contractor.  The reality of the situation, given cost-plus development contracts, is that industry is, in fact, a private extension of the defense infrastructure.

It is true that a line must be drawn in the contractual relationship to provide those necessary checks and balances to guard against fraud, waste, or a race to the lowest common denominator that undermines accountability and execution of the contractual obligation.  But this does not mean that the work of oversight requires another post-hoc layer of surveillance.  If we are not getting quality results from pre-award and post-award processes, then we must identify which ones are failing us and reform them.  Interrupting cash flow for inadequately assessed business systems may simply be counter-productive.

As Deming would argue, quality must be built into the process.  What defines quality must also be consistent.  That our systems are failing in that regard is indicative, I believe, in a failure of imagination on the part of our digital systems, on which most business systems rely.  It was fine in the first wave of microcomputer digitization in the 1980s and 1990s to simply design systems that mimicked the structure of pre-digital information specialization.  Cost performance systems were built to serve the needs of cost analysts, scheduling systems were designed for schedulers, risk systems for a sub-culture of risk specialists, and so on.

To break these stovepipes, the response of the IT industry was twofold, constituting the second wave of digitization of project management business processes.

One was in many ways a step back.  The mainframe culture in IT had been on the defensive since the introduction of the PC and “distributed” processing.  Here was an opening to reclaim the high ground, and so expensive, hard-coded ERP, PPM, and BI systems were introduced.  The lack of security in the quickly deployed systems of the first wave also provided the community with a convenient stalking horse, though even the “new” systems, as we have seen, lack adequate security in the digital arms race.  The ERP and BI systems are expensive and require specialized knowledge and expertise.  Solutions are hard-coded, require detailed modeling, and take a significant amount of time to deploy, supporting a new generation of coders.  The significant financial and personnel resources required to acquire and implement these systems–and the management reputations on the line for the decision to acquire them in the first place–have become a rationale for their continued use, even when they fail at the same high rate as all IT development projects.  Thus, a tradeoff analysis between sunk costs and prospective costs is rarely made in determining their sustainability.

Another response was to knit together the first-wave, specialized systems in “best-of-breed” configurations.  In this case data is imported and reconciled between specialized systems to achieve the integration needed to service the cross-functional nature of project management.  Oftentimes the estimating, IMS, PMB, and qualitative and quantitative risk artifacts are constructed by separate specialists with little or no coordination or fidelity.  These environments are characterized by workarounds, resource-heavy reconciliation teams dedicated to verifying data between systems, the expenditure of resources to fix errors after the fact, and the development of Access- and MS Excel-heavy one-off solutions designed to address deficiencies in the underlying systems.

That the deficiencies found in the solutions described above are mirrored in the findings against contractor business systems suggests that the underlying information systems are largely the culprit.  The solution, I think, is going to come from those portions of the digital community where the barriers to entry are low.  The current crop of software in place from the first and second waves is reaching the end of its productive life.  Hoping to protect market share and stave off the inevitable, entrenched software companies are deploying new delivery and business models, though they have little incentive to drive the industry to the next phase.  Instead, they have been marketing SaaS and cloud computing as the panacea, though the nature of the work tends to militate against acceptance of external hosting.  In the end, I believe the answer is to leverage new technologies that eliminate the specialized and hard-coded nature of the first approach yet achieve integration, while leveraging the historical data that exists in great abundance from the second.

Note: The title and some portions of this post were modified from the original for clarity.


What’s Your Number (More on Metrics)

Comments on my last post mainly centered on one question: am I saying that we shouldn’t do assessments or analysis?  No.  But it is important to define our terms a bit better and to realize that the things we monitor are not equivalent and do not measure the same kinds of things.

As I have written here, our metrics fall into categories, each with a different role or nature, and they are generally rooted in two concepts: quality assurance (QA) and quality control (QC).  These are not one and the same.  As our specialties have fallen away over time, the distinction between QA and QC has been lost.  The evidence of this confusion can be found not only in Wikipedia here and here, but also in project management discussion groups such as here and here.

QA measures the quality of the processes involved in the development and production of an end item.  It tends to be a proactive process and, therefore, looks for early warning indicators.

QC measures the quality of the products.  It is a reactive process and is focused on defect correction.

A large part of the confusion as it relates to project management is that QA and QC have their roots in iterative, production-focused activities.  So knowing which subsystems within the overall project management system we are measuring is important in understanding whether they serve a QA or QC purpose, that is, whether they have a QA or QC effect.

Generally, in management, we categorize our metrics into groupings based on their purpose.  There are Key Performance Indicators (KPIs), which are categorized as diagnostic indicators, lagging indicators, and leading indicators.  There are Key Risk Indicators (KRIs), which measure potential future adverse impacts.  KRIs are qualitative and quantitative measures of risks that must be handled or mitigated in our plans.

KPIs and KRIs can serve both QA and QC purposes, and it is important to know the difference so that we can understand what a metric is telling us.  The dichotomy between these effects is not rigid.  QC is meant to drive improvements in our processes so that we can (ideally) shift to QA measures in ensuring that our processes will produce a high-quality product.

When it comes to the measurement of project management artifacts, our metrics regarding artifact quality, such as those applied to the Integrated Master Schedule (IMS), are actually measures rooted in QC.  The defect has occurred in a product (the IMS) and now we must go back and fix it.  This is not to say that QC is not an essential function.

It just seems to me that we are sophisticated enough now to establish systems in the construction of the IMS and other artifacts, that is, to be proactive (avoiding errors), in lieu of being reactive (fixing errors).  And–yes–at the next meetings and conferences I will present some ideas on how to do that.
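
One way to picture the shift, as a sketch with hypothetical field names: the same predecessor-logic rule acts as a QC defect count when run over a finished IMS, and as a QA control when enforced at the moment an activity is entered:

```python
def missing_predecessors(activities):
    """QC use: count activities lacking predecessor logic in a
    finished schedule (field names are hypothetical)."""
    return [a["id"] for a in activities
            if not a.get("preds") and not a.get("is_start")]

def add_activity(schedule, activity):
    """QA use: the same rule applied as a gate, so the defect
    never enters the schedule in the first place."""
    if not activity.get("preds") and not activity.get("is_start"):
        raise ValueError(f"{activity['id']} has no predecessor logic")
    schedule.append(activity)

ims = [{"id": "START", "is_start": True},
       {"id": "A100", "preds": ["START"]},
       {"id": "A200"}]          # slipped in without logic
print(missing_predecessors(ims))  # ['A200']
```

The rule is identical in both cases; what changes is where in the process it runs, which is precisely the QA/QC distinction at issue.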