More on Excel…the contributing factor of poor Project Management apps

Some early comments via e-mail on my post on why Excel is not a PM tool raised the issue that I was being way too hard on IT shops while letting application providers off the hook.  The asymmetry was certainly not the intention (at least not consciously).

When approaching an organization seeking process and technology improvement, oftentimes the condition of using Excel is what we in the technology/PM industry conveniently call “workarounds.”  Ostensibly these workarounds are temporary measures to address a strategic or intrinsic organizational need that will eventually be addressed by a more cohesive software solution.  In all too many cases, however, the workaround turns out to be semi-permanent.

A case in point in basic project management concerns Work Authorization Documents (WADs) and Baseline Change Requests (BCRs).  Throughout entire industries that use the most advanced scheduling applications, resource management applications, and, where necessary, earned value “engines,” the modus operandi for WADs and BCRs is either to use Excel or to write a custom app in FoxPro or Access.  This is fine as a “workaround” as long as you remember to set up the systems and procedures necessary to keep the logs updated, and then have in place a procedure to update the systems of record appropriately.  Needless to say, errors do creep in, and in very dynamic environments it is difficult to ensure that these systems stay in alignment, so a labor-intensive feedback system must also be introduced.

This is the type of issue that software technology was designed to solve.  Instead, software has fenced off the “hard” operations so that digitized manual solutions, oftentimes hidden from the team’s plain view by the physical constraint of the computer (PC, laptop, etc.), are used.  This is barely a step above what we did before digitization: post the project plan, milestone achievements, and performance on a VIDS/MAF board surrounding the PM control office, which ensured that every member of the team could see the role and progress of the project.  Under that system no one hoarded information; it militated against single points of failure and ensured that disconnects were immediately addressed, since visibility ensured accountability.

In many ways we have lost the ability to recreate the PM control office in digitized form.  Part of the reason resides in the 20th century organization of development and production into divisions of labor.  In project management, the specialization of disciplines organized themselves around particular functions: estimating and planning, schedule management, cost management, risk management, resource management, logistics, systems engineering, operational requirements, and financial management, among others.  Software was developed to address each of these areas with clear lines of demarcation drawn that approximated the points of separation among the disciplines.  What the software manufacturers forgot (or never knew) was that the PMO is the organizing entity and it is an interdisciplinary team.

To return to our example of WADs and BCRs: a survey of the leading planning and scheduling applications shows that while their marketing literature addresses baselines and baseline changes (and not all of them address even this basic function), they still do not understand complex project management.  There is a difference between resources assigned to a time-phased network schedule and the resources planned against technical achievement related to the work breakdown structure (WBS).  Given proper integration they should align; in most cases they do not.  This is why most scheduling application manufacturers who claim to measure earned value do not.  Their models assume that the expended resources align with the plan to date, in lieu of volume-based measurement.  Further, eventually understanding this concept does not by itself produce a digitized solution, since an understanding of the other specific elements of program control is also necessary.
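To make the distinction concrete, here is a minimal sketch (illustrative numbers only, not any vendor’s algorithm) of the difference between assuming performance tracks the plan to date and measuring earned value by the volume of work actually accomplished:

```python
# Planned value assumes performance equals the plan to date;
# earned value credits only the budget for work actually done.

def planned_value(time_phased_budget, periods_elapsed):
    """Budgeted cost of work scheduled through the current period."""
    return sum(time_phased_budget[:periods_elapsed])

def earned_value(work_packages):
    """Volume-based measurement: budget earned per work package."""
    return sum(wp["budget"] * wp["pct_complete"] for wp in work_packages)

budget = [100, 100, 100, 100]               # four periods, $100 each
wps = [
    {"budget": 200, "pct_complete": 1.0},   # first half of scope complete
    {"budget": 200, "pct_complete": 0.25},  # second half barely started
]

pv = planned_value(budget, 3)   # the plan says $300 of work by now
ev = earned_value(wps)          # the volume measured says only $250 earned
print(pv, ev)                   # → 300 250
```

A tool that equates the two hides the $50 variance entirely, which is precisely the objection raised above.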

For example, projects are initiated either through internal work authorizations in response to a market need, or based on the requirements of a contract.  Depending on the mix of competencies required to perform the work, financial elements such as labor rates, overhead, G&A, allowable margin (depending on contract type), etc., will apply–what is euphemistically called “complex rates.”  An organization may need to manage multiple rate sets based on the types of efforts undertaken, with a many-to-many relationship between rate sets and projects/subprojects.
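As a sketch of what “complex rates” implies for a data model, the following is a hypothetical illustration (the rate-set names and burden fractions are invented) of a many-to-many relationship between rate sets and projects:

```python
# Hypothetical "complex rates" sketch: rate sets relate many-to-many to
# projects, so the burdened cost of the same hours differs by rate set.
from dataclasses import dataclass

@dataclass
class RateSet:
    name: str
    labor_rate: float   # $/hour
    overhead: float     # fraction applied to labor
    g_and_a: float      # fraction applied to burdened labor

def burdened_cost(hours, rs):
    labor = hours * rs.labor_rate
    burdened = labor * (1 + rs.overhead)
    return burdened * (1 + rs.g_and_a)

commercial = RateSet("commercial", 85.0, 0.35, 0.10)
government = RateSet("government", 85.0, 0.42, 0.08)

# One project may draw on several rate sets, and one rate set may serve
# many projects -- the many-to-many relationship described above.
project_rates = {"proj-A": [commercial, government], "proj-B": [commercial]}
costs = {rs.name: round(burdened_cost(100, rs), 2)
         for rs in project_rates["proj-A"]}
print(costs)   # → {'commercial': 12622.5, 'government': 13035.6}
```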

Once again, the task of establishing the proper relationships at the appropriate level is necessary.  This will then affect the timing of WAD initiation, and will have a direct bearing on the BCR approval process, given that it is heavily influenced by “what-if?” analysis against resource, labor, and financial availability and accountability (a complicated process in itself).  Thus the schedule network is not the only element affected, nor the overarching one, given the assessed impact on cost, technical achievement, and qualitative external risk.

These are but two examples of sub-optimization due to deficiencies in project management applications.  The response–and in my opinion a lazy one (or one based on the fact that oftentimes software companies know nothing of their customers’ operations)–has been to develop the alternative euphemism for “workaround”: best of breed.  Oftentimes this is simply a means of collecting revenue for a function that is missing from the core application.  It is the software equivalent of the division of labor: each piece of software performs functions relating to specific disciplines, and where there are gaps these are filled by niche solutions or Excel.  What this approach does not do is meet the requirements of the PMO control office, since it perpetuates application “swim lanes,” with the multidisciplinary requirements of project management relegated to manual interfaces and application data reconciliation.  It also pushes–and therefore magnifies–risk up to the senior level of the project management team, effectively defeating organizational fail-safes designed to reduce risk through, among other methods, delegation of responsibility to technical teams, and project planning and execution constructed around short-duration, work-focused activities.  Finally, it reduces productivity and information credibility, and unnecessarily increases cost–the exact opposite of the rationale for investing in software technology.

It is time for this practice to end.  Technologies exist today to remove application “swim lanes” and address the multidisciplinary needs of successful project management.  Excel isn’t the answer; cross-application data access, proper data integration, and data processing into user-directed intelligence, properly aggregated and distributed based on role and optimum need to know, is.

Synchronicity — What is proper schedule and cost integration?

Much has been said about the achievement of schedule and cost integration (or the lack thereof) in the project management community.  Much of it consists of hand waving and magic asterisks that hide the significant reconciliation that goes on behind the scenes.  An intellectually honest approach that does not use the topic as a means of promoting a proprietary solution is the paper authored by Rasdorf and Abudayyeh back in 1991 entitled “Cost and Schedule Control Integration: Issues and Needs.”

It is worthwhile revisiting this paper, I think, because it was authored in a world not yet fully automated, and so is immune to the software tool-specific promotion that oftentimes dominates the discussion.  In their paper they outlined several approaches to breaking down cost and work in project management in order to provide control and track performance.  One of the most promising methods that they identified at the time was the unified approach that had originated in aerospace, in which a work breakdown structure (WBS) is constructed based on discrete work packages in which budget and schedule are unified at a particular level of detail to allow for full control and traceability.
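A minimal sketch of that unified approach might look like the following, with each work package carrying both its budget and its schedule dates so that cost and time roll up from the same structure (the identifiers and values are invented):

```python
# Unified WBS sketch: budget and schedule live on the same work package,
# so both roll up from one structure and cannot drift apart.
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkPackage:
    wbs_id: str
    budget: float
    start: date
    finish: date

wbs = [
    WorkPackage("1.1.1", 50_000, date(2024, 1, 1), date(2024, 3, 31)),
    WorkPackage("1.1.2", 75_000, date(2024, 2, 1), date(2024, 6, 30)),
]

# Roll-up: total budget and overall schedule span derive from one list
# of work packages rather than two separately maintained systems.
total_budget = sum(wp.budget for wp in wbs)
span = (min(wp.start for wp in wbs), max(wp.finish for wp in wbs))
print(total_budget, span)
```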

The concept of the WBS and its interrelationship with the organizational breakdown structure (OBS) has become much more sophisticated over the years, but a barrier has prevented this ideal from being fully achieved.  Ironically, it is the introduction of technology that is the culprit.

During the first phase of digitization that occurred in the project management industry not long after Rasdorf and Abudayyeh published their paper, there was a boom in dot-coms.  For businesses and organizations the practice was to find a specialty or niche and fill it with an automated solution to take over the laborious tasks of calculation previously performed by human intervention.  (I still have both my slide rule and first scientific calculator hidden away somewhere, though I have thankfully wiped square root tables from my memory.)

For those of us who worked in project and acquisition management, our lives were built around the 20th century concept of division of labor.  In PM this meant we had cost analysts, schedule analysts, risk analysts, financial analysts and specialists, systems analysts, engineers broken down by subspecialties (electrical, mechanical, systems, aviation) and sub-subspecialties (Naval engineers, aviation, electronics and avionics, specific airframes, software, etc.).  As a result, the first phase of digitization followed the pathway of the existing specialties, finding niches to inhabit, which provided a good, steady, and secure living to software companies and developers.

For project controls, much of this infrastructure remains in place.  There are entire organizations today that will construct a schedule for a project using one set of specialists and the performance management baseline (PMB) using another, and then reconcile the two–not just in the initial phase of the project, but across its entire life.  Against the standard of an integrated structure that brings together cost and schedule this makes no practical sense.  From a business efficiency perspective it is an unnecessary cost.

As often as it is cited by authors and speakers, the Coopers & Lybrand with TASC, Inc. paper entitled “The DoD Regulatory Cost Premium” is impossible to find on-line.  Despite its widespread citation, the study demonstrated that by the time one got down to the third “cost” driver attributable to regulatory requirements, the projected “savings” was a fraction of 1% of total contract cost.  The interesting issue not faced by the study is: were the tables turned, how much would such contracts be reduced if all management controls in the company were reduced or eliminated, since they contribute as elements to overhead and G&A?  More to the point here, if the processes applied by industry were optimized, what would the cost savings be?

A study conducted by the RAND Corporation in 2006 accurately points out that a number of studies had been conducted since 1986, all of which promised significant cost savings by focusing on what were perceived as drivers of unnecessary cost.  The Department of Defense, and the military services in particular, took the Coopers & Lybrand study very seriously because of its methodology, but achieved minimal savings against those promised.  Of course, the various studies do not clearly articulate the cost risk associated with removing the marginal cost of oversight and regulation.  Given our renewed experience with the lack of regulation in the mortgage and financial sectors of the economy, which brought about the worst economic and financial collapse since 1929, one may look at these various studies in a new light.

The RAND study outlines the difficulties in the methodologies and conclusions of the studies undertaken, especially the acquisition reforms initiated by DoD and the military services as a result of the Coopers & Lybrand study.  But how, you may ask, does this relate to cost and schedule integration?

The approach that industry takes in many places is a sub-optimized one, particularly as it applies to cost and schedule integration, which in practice consists of physical cost and schedule reconciliation.  A system that is clearly one entity is split into two, constructed separately, and then adjusted by manual intervention, which defeats the purpose of automation.  This may be common practice, but it is not best practice.

Government policy, which has pushed compliance to the contractor, oftentimes rewards this sub-optimization and provides little incentive to change the status quo.  Software manufacturers wedded to old technologies are all too willing to promote that status quo–appropriating the term “integration” while, in reality, offering interfaces and workarounds after the fact.  Personnel residing in line and staff positions defined by the mid-20th century division of labor are all too happy to continue operating with outmoded methods and tools.  Paradoxically, these are personnel in industry who would never advocate using outmoded airframes, jet engines, avionics, or ship types.

So it is time to stop rewarding sub-optimization.  The first step in doing this is through the normalization of data from these niche proprietary applications and “rewiring” them at the proper level of integration so that the systemic faults can be viewed by all stakeholders in the oversight and regulatory chain.  Nothing seems to be more effective in correcting a hidden defect than some sunshine and a fresh set of eyes.
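As a hedged illustration of what such normalization involves, suppose two niche tools export the same control account under different field names; mapping both into one common schema makes the disconnect visible to every stakeholder (field names and values here are invented):

```python
# Normalization sketch: two proprietary exports describe the same control
# account differently; a common schema surfaces the disconnect between them.
schedule_export = [{"act_id": "CA-100", "bcws": 300.0}]
cost_export = [{"ControlAcct": "CA-100", "BudgetedCost": 280.0}]

def normalize(rows, id_field, value_field, source):
    """Map a tool-specific export into one common record layout."""
    return [{"control_account": r[id_field], "budget": r[value_field],
             "source": source} for r in rows]

common = (normalize(schedule_export, "act_id", "bcws", "scheduler") +
          normalize(cost_export, "ControlAcct", "BudgetedCost", "cost_tool"))

# With both sources in one schema, the $20 disconnect is in plain view
# rather than hidden in two separate "swim lanes."
by_source = {r["source"]: r["budget"] for r in common
             if r["control_account"] == "CA-100"}
disconnect = by_source["scheduler"] - by_source["cost_tool"]
print(disconnect)   # → 20.0
```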

If industry and government are truly serious about reforming acquisition and project management in order to achieve significant cost savings in the face of tight budgets and increasing commitments due to geopolitical instability, then systemic reforms from the bottom up are the means to achieve the goal; not the elimination of controls.  As John Kennedy once said in paraphrasing Chesterton, “Don’t take down a fence unless you know why it was put up.”  The key is not to undermine the strength and integrity of the WBS-based approach to project control and performance measurement (or to eliminate it), but to streamline it so that it achieves its ideal as closely as our inherently faulty tools and methods will allow.


I’ve Got Your Number — Types of Project Measurement and Services Contracts

Glen Alleman reminds us at his blog that we measure things for a reason and that they include three general types: measures of effectiveness, measures of performance, and key performance parameters.

Understanding the difference between these types of measurement is key, I think, to defining what we mean by such terms as integrated project management and in understanding the significance of differing project and contract management approaches based on industry and contract type.

For example, project management focused on commodities, with their price volatility, emphasizes schedule and resource management. Cost performance (earned value), where it exists, is measured by time in lieu of volume- or value-based performance. I have often been engaged in testy conversations in which those involved in commodity-based PM insist that they have been using Earned Value Management (EVM) for as long as the U.S.-based aerospace and defense industry (though the methodology was born in the latter). But when one scratches the surface, the approaches differ markedly in how value and performance are determined–and so they should, given the different business environments in which enterprises in each of these industries operate.

So what is the difference in these measures? In borrowing from Glen’s categories, I would like to posit a simple definitional model as follows:

Measures of Effectiveness – are qualitative measures of achievement against the goals of the project;

Measures of Performance – are quantitative measures against a plan or baseline in execution of the project plan;

Key Performance Parameters – are the minimally acceptable thresholds of achievement in the project or effort.
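As a notional illustration only (the measures and thresholds below are invented for the sketch), the three categories might be computed side by side for one effort as follows:

```python
# Illustrative sketch of the three measure types evaluated side by side.
def measure_of_effectiveness(survey_scores):
    """Qualitative achievement against goals, summarized numerically."""
    return sum(survey_scores) / len(survey_scores)

def measure_of_performance(earned, planned):
    """Quantitative execution against a plan or baseline."""
    return earned / planned

def meets_kpp(value, minimum):
    """Minimally acceptable threshold of achievement."""
    return value >= minimum

moe = measure_of_effectiveness([4, 5, 4])   # e.g. user satisfaction, 1-5 scale
mop = measure_of_performance(450.0, 500.0)  # e.g. earned vs. planned value
kpp_ok = meets_kpp(0.98, 0.95)              # e.g. an availability floor
print(round(moe, 2), mop, kpp_ok)           # → 4.33 0.9 True
```

The point of the sketch is that the three are measured differently even when, in an integrated system, each informs the others.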

As you may guess, there is sometimes overlap and confusion regarding the category into which a particular measurement falls. This confusion has been exacerbated by efforts to define key performance indicators (KPIs) by industry, giving the impression that measures are exclusive to a particular activity. This is sometimes, but not always, the case.

So when we talk of integrated project management we are not accepting that any particular method of measurement has primacy over the others, nor subsumes them. Earned Value Management (EVM) and schedule performance are clearly performance measures. Qualitative measures oftentimes gauge achievement of technical aspects of the end item being produced. This is not the same as technical performance measurement (TPM), which measures technical achievement against a plan–a performance measure. Technical achievement may inform our performance measurement systems–and it is best if it does. It may also inform our Key Performance Parameters, since exceeding a minimally acceptable threshold obviously helps us to determine success or failure in the end. The difference is the method of measurement. In a truly integrated system the measurement of one element informs the others. For the moment these systems tend to be stove-piped.

It becomes clear, then, that the variation in approaches differs by industry, as in the example on EVM above, and–in an example that I have seen most recently–by contract type. This insight is particularly important because all too often EVM is viewed as being synonymous with performance measurement, which it is not. Services contracts require structure in measurement as much as R&D-focused production contracts, particularly because they increasingly take up a large part of an enterprise’s resources. But EVM may not be appropriate.

So for our notional example, let us say that we are responsible for managing an entity’s IT support organization. There are types of equipment (PCs, tablet computers, smartphones, etc.) that must be kept operational based on the importance of the end user. These items of hardware use firmware and software that must be updated and managed. Our contract establishes minimal operational parameters that allow us to determine if we are at least meeting the basic requirements and will not be terminated for cause. The contract also provides incentives to encourage us to exceed the minimums.

The sites we support are geographically dispersed.  We have to maintain a help desk, but we must also have people who can come onsite and provide direct labor to set up new systems or fix existing ones–and the sites and personnel must be supported within particular time-frames: one hour, two hours, twenty-four hours, etc.

In setting up our measurement systems, the standard practice is to start with the key performance parameters. Typically we will also measure response times by site and personnel level, record our help desk calls, and track qualitative aspects of the work: How helpful is the help desk? Do calls get answered at the first contact? Are our personnel friendly and courteous? What kinds of hardware and software problems do we encounter? We collect our data from a variety of one-off and specialized sources and then generate reports from these systems. Many times we will focus on those that will allow us to determine if the incentive will be paid.

Among all of this data we may be able to discern certain things: if the contract is costing more or less than we anticipated, if we are fulfilling our contractual obligations, if our personnel pools are growing or shrinking, if we are good at what we do on a day-to-day basis, and if it looks as if our margin will be met. But what these systems do not do is allow us to operate the organization as a project, nor do they allow us to make adjustments in a timely manner.

Only through integration and aggregation can we know, for example: how the demand for certain services is affecting our resource demands by geographical location and level of service; where, on a real-time basis, we need to make adjustments in personnel and training; whether we are losing or achieving our margin by location, labor type, equipment type, and hardware vs. software; our balance sheets (by location, equipment type, software type, etc.); whether there is a learning curve; and whether we can make intermediate adjustments to achieve the incentive thresholds before the result is written in stone.  Having this information also allows us to manage expectations, factually inform perceptions, and improve customer relations.
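A sketch of the kind of aggregation described above, using invented service data, might roll margin up by site so that staffing adjustments can be made before the period closes:

```python
# Aggregation sketch: service tickets tagged by site and labor type,
# rolled up to margin by location in time to act on the result.
from collections import defaultdict

tickets = [
    {"site": "east", "labor": "onsite",   "revenue": 500, "cost": 420},
    {"site": "east", "labor": "helpdesk", "revenue": 120, "cost": 60},
    {"site": "west", "labor": "onsite",   "revenue": 500, "cost": 530},
]

margin_by_site = defaultdict(float)
for t in tickets:
    margin_by_site[t["site"]] += t["revenue"] - t["cost"]

# East is holding margin; West is losing it on onsite labor -- a signal
# to adjust personnel or training while there is still time to do so.
print(dict(margin_by_site))   # → {'east': 140.0, 'west': -30.0}
```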

What is clear from this example is that “not doing EVM” does not make measurement easy, nor does it imply simplification or the absence of measurement.  Instead, understanding the nature of the work allows us to identify the measures, within their proper categories, that need to be applied by contract type and/or industry.  So while EVM may not apply to services contracts, certain new aggregations do.

For many years we have intuitively known that construction and maintenance efforts are more schedule-focused, that oil and gas exploration is more resource- and risk-focused, and that aircraft, satellites, and ships are more performance-focused.  I would posit that now is the time to quantify and formalize the commonalities and differences.  This makes an integrated approach not simply a “nice to have” capability, but an essential one in managing our enterprises and the projects within them.

Note: This post was updated to correct grammatical errors.

I Can’t Get No (Satisfaction) — When Software Tools Go Bad

Another article I came across a couple of weeks ago, which my schedule prevented me from highlighting, was by Michelle Symonds at PM Hut, entitled “5 Tell-Tale Signs That You Need a Better Project Management Tool.”  According to Ms. Symonds, among these signs are:

a.  Additional tools are needed to achieve the intended functionality apart from the core application;

b.  Technical support is poor or nonexistent;

c.  Personnel in the organization still rely on spreadsheets to extend the functionality of the application;

d.  Training on the tool takes more time than training for the job;

e.  The software tool adds work instead of augmenting or facilitating the achievement of work.

I have seen situations where all of these conditions are at work but the response, in too many cases, has been “well we put so much money into XYZ tool with workarounds and ‘bolt-ons’ that it will be too expensive/disruptive to change.”  As we have advanced past the first phases of digitization of data, it seems that we are experiencing a period where older systems do not quite match up with current needs, but that software manufacturers are very good at making their products “sticky,” even when their upgrades and enhancements are window dressing at best.

In addition, the project management community, particularly that focused on large projects in excess of $20M, is facing the challenge of an increasingly older workforce.  Larger economic forces at play lately have exacerbated this condition.  Weak aggregate demand and, on the public side, austerity ideology combined with sequestration have created a situation where highly qualified people face a job market characterized by relatively high unemployment, flat wages and salaries, depleted private retirement funds, and constant attacks on social insurance related to retirement.  Thus, people are hanging around longer, which limits opportunities for newer workers to grow into the discipline.  Given these conditions, it is very risky to one’s employment prospects to suddenly forge a new path.  People in the industry that I have known for many years–and who were always the first to engage with new technologies and capabilities–are now very hesitant to do so.  Some of this is well founded through experience and consists of healthy skepticism: we have all come across snake oil salesmen in our dealings at one time or another, and even the best products do not always make it, due to external forces or the fact that brilliant technical people oftentimes are just not very good at business.

But these conditions also tend to hold back the ability of the enterprise to implement efficiencies and optimization measures that otherwise would be augmented and supported by appropriate technology.  Thus, in addition to those listed by Ms. Symonds, I would include the following criteria to use in making the decision to move to a better technology:

a.  Sunk and prospective costs.  Understand and apply the concepts of sunk cost and prospective cost.  The first is the cost that has been expended in the past, while the latter focuses on the investment necessary for future growth, efficiencies, productivity, and optimization.  Having made investments to improve a product in the past is not an argument for continuing to invest in the product in the future that trumps other factors.  Obviously, if the cash flow is not there an organization is going to be limited in the capital and other improvements it can make but, absent those considerations, sunk cost arguments are invalid.  It is important to invest in those future products that will facilitate the organization achieving its goals in the next five or ten years.

b.  Sustainability.  The effective life of the product must be understood, particularly as it applies to an organization’s needs.  Some of this overlaps the points made by Ms. Symonds in her article but is meant to apply in a more strategic way.  Every product, even software, has a limited productive life, but my concept here goes to what Glen Alleman pointed out in his blog as “bounded applicability.”  Will the product require more effort in any form where the additional effort provides a diminishing return?  For example, I have seen cases where software manufacturers, in order to defend market share, make trivial enhancements such as adding a chart or graph to placate customer demands.  The reason for this should be, but is not always, obvious: oftentimes more substantive changes cannot be made because the product was built on an earlier-generation operating environment or structure.  Thus, in order to replicate the functionality found in newer products, the application requires a complete rewrite.  All of us operating in this industry have seen this: a product that has been a mainstay for many years begins to lose market share, and the decision, when it is finally made, is to totally reengineer the solution–not as an upgrade to the original product, the argument being that it is a “new” product.  This is true in terms of the effort necessary to keep the solution viable, but it also completely undermines justifications based on sunk costs.

c.  Flexibility.  As stated previously in this blog, the first generation of digitization mimicked those functions that were previously performed manually.  The applications were also segmented and specialized based on traditional line and staff organizations, and specialties.  Thus, for project management, we have scheduling applications for the scheduling discipline (such as it is), earned value engines for the EV discipline, risk and technical performance applications for risk specialists and systems engineers, analytical software for project and program analysts, and financial management applications that subsumed project and program financial management professionals.  This led to the deployment of so-called best-of-breed configurations, where a smorgasbord of applications or modules were acquired to meet the requirements of the organization.  Most often these applications had, and have, no direct compatibility, requiring entire staffs to reconcile data after the fact, once that data has been imported into a proprietary format in which it can be handled.  Even within so-called ERP environments under one company, direct compatibility at the appropriate level of the data being handled escaped the software manufacturers, requiring “bolt-ons” and other workarounds and third party solutions.  This condition undermines sustainability, adds a level of complexity that is hard to overcome, and adds a layer of cost to the life-cycle of the solutions being deployed.

The second wave of solutions attempted to address some of these limitations with data flexibility using cubes, hard-coded relational data and mapping, and data mining: so-called Project Portfolio Management (PPM) and Business Intelligence (BI).  The problem is that PPM is simply another layer added to address management concerns, while early BI systems froze single points of failure into hard-coded deployed solutions.

A flexible system is one that leverages new advances in software operating environments to solve more than one problem.  This, of course, undermines the established financial returns in software, where the pattern has been to build one solution to address one problem based on a specialty.  Such a system provides internal flexibility–that is, it allows for the application of objects and conditional formatting without hardcoding, pushing what previously had to be accomplished by coders down to the customer’s administrator or user level–and external flexibility, where the same application can address, say, EVM, schedule, risk, financial management, KPIs, technical performance, and stakeholder reporting, all in the same or in multiple deployed environments, without the need for hardcoding.  In this case the operating environment and any augmented code provide a flexible environment that allows one solution to displace multiple “best-of-breed” applications.

This flexibility should apply not only vertically but also horizontally, where data can be hierarchically organized to allow not only for drill-down, but also for roll-up.  Data in this environment is exposed discretely, providing to any particular user that data, aggregated as appropriate, based on their role, responsibility, or need to know.
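One way to picture this, as a sketch with invented WBS identifiers and values, is a single hierarchy serving both drill-down and roll-up, with each role exposed only to its appropriate level of aggregation:

```python
# Hierarchical roll-up/drill-down sketch with role-based exposure:
# one WBS hierarchy serves every user, aggregated by role.
wbs_costs = {
    "1.1.1": 40.0, "1.1.2": 60.0,   # leaves under node 1.1
    "1.2.1": 25.0,                  # leaf under node 1.2
}

def roll_up(prefix):
    """Aggregate every leaf whose WBS id falls under the given node."""
    return sum(v for k, v in wbs_costs.items() if k.startswith(prefix))

def view(role):
    """Expose data discretely, aggregated per role and need to know."""
    if role == "executive":
        return {"1": roll_up("1")}                           # total only
    if role == "cam":
        return {"1.1": roll_up("1.1"), "1.2": roll_up("1.2")}
    return dict(wbs_costs)                                   # full detail

print(view("executive"), view("cam"))
# → {'1': 125.0} {'1.1': 100.0, '1.2': 25.0}
```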

d.  Interoperability and open compatibility.  A condition of the “best-of-breed” deployment environment is that it allows sub-optimization to trump organizational goals.  The most recent example I have seen is one in which the Integrated Master Schedule (IMS) and Performance Management Baseline (PMB) were obviously authored by different teams in different locations who, most likely, were at war with one another when they published these essential, interdependent project management artifacts.

But in terms of sustainability, the absence of interoperability and open compatibility has created untenable situations.  In the example of PMB and IMS information above, in many cases a team of personnel must be engaged every month to reconcile the obvious disconnects between schedule activities and control accounts in order to ensure traceability in project management and performance.  Not only should there be no economic rewards for such behavior; I believe that no business would operate in that manner without them.

Thus, interoperability in this case is to be able to deal with data in its native format without proprietary barriers that prevent its full use and exploitation to the needs and demands of the customer organization.  Software that places its customers in a corner and ties their hands in using their own business information has, indeed, worn out its welcome.

The reaction of customer organizations to the software industry’s attempts to bind them to proprietary solutions has been most marked in the public sector, and most prominently in the U.S. Department of Defense.  In the late 1990s the first wave was to ensure that performance management data centered around earned value was submitted in a non-proprietary format known as the ANSI X12 839 transaction set.  Since that time DoD has specified the use of the UN/CEFACT XML D09B standard for cost and schedule information, and it appears that other, previously stove-piped data will be included in that standard in the future.  This solution requires data transfer, but it is one that ensures that the underlying data can be normalized regardless of the underlying source application.  It is especially useful for stakeholder reporting situations or data sharing in prime and sub-contractor relationships.

It is also useful for pushing for improvement in the disciplines themselves, driving professionalism.  For example, in today’s project management environment, while the underlying architecture of earned value management and risk data is fairly standard, reflecting a cohesiveness of practice among its practitioners, schedule data tends to be disorganized, with much variability in how common elements are kept and reported.  This also reflects much of the state of the scheduling discipline, where an almost “anything goes” mentality seems to be in play–reflecting not so much the realities of scheduling practice, which are pretty well established and uniform, as the lack of knowledge and professionalism on the part of schedulers, who are tied to the limitations and vagaries of their scheduling application of choice.

But, more directly, interoperability also includes the ability to access data (as opposed to application interfacing, data mining, hard-coded cubes, and data transfer) regardless of the underlying database, application, and structured data source.  Early attempts to achieve interoperability and open compatibility utilized ODBC, but newer operating environments now leverage improved OLE DB and other enhanced methods.  This ability, properly designed, also allows for the deployment of transactional environments, in which two-way communication is possible.
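As a rough illustration of this access point, Python’s DB-API 2.0 shows what a uniform calling convention across back-ends looks like.  The sketch below uses the built-in sqlite3 module as a stand-in; ODBC drivers (for example, via the pyodbc package) expose the same connect/cursor/execute shape, so only the connection call is vendor-specific.  The table and column names are invented for the example.

```python
import sqlite3

# DB-API 2.0 gives one calling convention across database drivers, so the
# query logic below is the same whether the back-end is reached through
# sqlite3, an ODBC driver, or another compliant driver; only connect()
# changes.  Table/column names here are hypothetical.
def fetch_control_accounts(conn):
    cur = conn.cursor()
    cur.execute("SELECT id, bcws, bcwp FROM control_accounts ORDER BY id")
    return cur.fetchall()

# Stand-in back-end: an in-memory SQLite database with sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE control_accounts (id TEXT, bcws REAL, bcwp REAL)")
conn.executemany("INSERT INTO control_accounts VALUES (?, ?, ?)",
                 [("CA-001", 1000.0, 900.0), ("CA-002", 500.0, 500.0)])
rows = fetch_control_accounts(conn)
```

The design choice being illustrated: when the access layer is standard, the customer organization–not the tool vendor–decides how its business information is queried and combined.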

A new reality.  Given these new capabilities, I think that we are entering a new phase in software design and deployment, in which the role of the coder in controlling the UI is reduced.  In addition, given that the large software companies have continued to support a system that ties customers to proprietary solutions, I do not believe that the future of software lies in open source, as so many prognosticators stated just a few short years ago.  Instead, I propose that applications that behave like open source–those that provide maximum value, sustainability, flexibility, and interoperability to the customer–are the ones whose makers will be rewarded for their innovation.

Note:  This post was edited for clarity and grammatical errors from the original.


The Times They Are A-Changin’–Should PMI Be a Project Management Authority?

Back from a pretty intense three weeks taking care of customers (yes–I have those) and attending professional meetings and conferences.  There were some interesting developments regarding the latter that I will be writing about here, but while I was in transit I did have the opportunity to keep up with some notable discussions within the project management community.

Central among those was an article by Anonymous on PM Hut that appeared a few weeks ago, positing the opinion that PMI Should No Longer Be an Authority on Project Management.  I don’t know why the author of the post decided that they had to remain anonymous.  I learned some time ago that one should not only state one’s opinion in as forceful terms as possible (backed up with facts), but also own that opinion and be open to the possibility that it could be wrong or require modification.  As stated previously in my posts, project management in any form is not received wisdom.

The author of the post makes several assertions summarized below:

a. That PMI, though ostensibly a not-for-profit organization, behaves as a for-profit organization, and aggressively so.

b.  The Project Management Body of Knowledge (PMBOK®) fails in its goal of being the definitive source for project management because it lacks continuity between versions, its prescriptions lack realism, and, particularly in regard to software project management, it has morphed into a hybrid of Waterfall and Agile methodology.

c.  The PMI certifications lack credibility and seem to be geared to what will sell, as opposed to what can be established as a bona fide discipline.

I would have preferred that the author had provided more concrete examples of these assertions, given their severity.  For example, the organization’s on-line financial statements show that PMI has a significant staff of paid personnel and directors, with total assets as of 2012 of over $300M.  Of this, about $267M is in investments.  Its total revenue that year was $173M.  It spent only $115M from its cashflow on its programs and another $4M on governance and executive management compensation.  Thus, it would appear that the non-profit basis of the organization has significantly deviated from its origins at the Georgia Institute of Technology.  Project management is indeed big business, with vesting and compensation of over $1M going to the President & CEO of the organization in 2012 alone.  There does seem, then, to be more than a little justification for the first of the author’s criticisms.

I also share the author’s other concerns, but a complete analysis is not available regarding either the true value of the PMBOK® or the value of a PMP certification.  I have met many colleagues who felt the need to obtain the latter, despite their significant practical achievements and academic credentials.  I have also met quite a few people with “PMP” after their names whose expertise is questionable, at best.  The certifications given by PMI and other PM organizations today remind me of a very similar condition several years ago, when the gold standard of credentials in certain parts of the IT profession was the Certified Novell Engineer (CNE) and Microsoft Certified Solutions Expert (MCSE) certifications.  They still exist in some form.  What was apparent as I took the courses and the examinations was that the majority of my fellow students had never set up a network.  They were, to use the pejorative among the more experienced of us, “Paper CNEs and MCSEs.”  In interviewing personnel with “PMP” after their names, I find a wide variation in expertise; thus the quality of experience, with supporting education, tends to have more influence with me than some credential from one of the PM organizations.

Related to this larger issue of what constitutes a proper credential in our discipline, I came across an announcement by Dave Gordon at his The Practicing IT Project Manager blog of a Project Management Job Requirements study.  Dave references this study by Noel Radley of SoftwareAdvice.com, which states that the PMP is preferred or specified in 79% of the 300 jobs used as the representative baseline for the industries studied.  Interestingly, the study showed that advanced education is rarely required or preferred.

I suspect that this correlates negatively with many of the results that we have seen in the project management community.  Basic economics dictates that people with advanced degrees (M.A. and M.B.A. grads) come at a higher price than those with only baccalaureate degrees, their incomes rising much faster than those of four-year college grads.  It seems that businesses do not value that additional investment except by exception.

Additionally, I have seen the results of two studies presented in government forums over the past six months (but alas no links yet) in which the biggest risk to the project was identified to be the project manager.  Combined with consistent reports from widely disparate sources that the overwhelming majority of projects fail to perform within budget or deliver on time, this raises the natural question of whether those we choose to be project managers have the essential background to perform the job.

There seems to be a widely held myth that formal education is somehow unnecessary to develop a project manager–relegating what at least masquerades as a “profession” to the level of a technician or mechanic.  It is not that we do not need technicians or mechanics; it is that higher-level skills are needed to be a successful project manager.

This myth seems to be spreading, and to have originated from society as a whole, where the emphasis is on basic skills, constant testing, the elimination of higher-level thinking, and a narrowing of the curriculum.  Furthermore, college education, which was widely available to post-World War II generations well into the 1980s, is quickly becoming unaffordable for a larger segment of the population.  Thus, what we are seeing is a significant skills gap in the project management discipline, added to one that has already had an adverse impact on the ability of both government and industry to succeed.  For example, Calleam Consulting Ltd, in a paper entitled “The Story Behind the High Failure Rates in the IT Sector,” found that “17 percent of large IT projects go so badly that they can threaten the very existence of the company.”

From my experiences over the last 30+ years: when looking for a good CTO or CIO, I look to practical and technical experience and expertise, with the ability to work with a team.  For an outstanding coder, I look for a commitment to achieving results and elegance in the final product.  But for a good PM, give me someone with a good liberal arts education and some graduate-level business or systems work, combined with leadership.  Leadership includes all of the positive traits one demands of it: honesty, integrity, ethical behavior, effective personnel management, commitment, and vision.

The wave of the future in developing our expertise in project management will be the ability to look at all of the performance characteristics of the project and its place in the organization.  This is what I see as the real meaning of “Integrated Project Management.”  I have attended several events since the beginning of the year focused on the project management discipline in which assertions were made that “EVM is the basis for integrated project management” or “risk is the basis for integrated project management” or “schedule is the basis for integrated project management.”  The speakers did not seem to acknowledge that the specialty that they were addressing is but one aspect of measuring project performance, and even less of a factor in measuring program performance.

I believe that this is a symptom of excess specialization and the lack of a truly professional standard in project management.  If we continue to hire technicians with expertise in one area, possessing a general certification that simply requires one to attend conferences, sit in courses that lack educational accreditation, and claim credit for “working within” a project, we will find that making the transition to the next evolutionary step at the PM level will be increasingly difficult.  Finally, for the anonymous author critical of PMI: project management seems to be a good business for those who make up credentials, but not such a good deal for those with a financial stake in the projects themselves.

Note:  This post has been modified to correct minor grammatical and spelling errors.

Full disclosure:  The author has been a member of PMI for almost 20 years, and is a current member and former board member of the College of Performance Management (CPM).

I Can See Clearly Now (The Risk Is Gone) — Managing and Denying Risk in PM

Just returned from attending the National Defense Industrial Association’s Integrated Program Management Division (NDIA IPMD) quarterly meeting.  This is an opportunity for both industry and government to share common concerns and issues regarding program management, as well as share expertise and lessons learned.

This is one among a number of such forums that distinguishes the culture in aerospace and defense from other industry verticals.  For example, in the oil and gas industry the rule of thumb is not to share such expertise across the industry, except in very general terms through venues such as the Project Management Institute, since the information is considered proprietary and competition sensitive.  I think that the PM discipline suffers for this lack of cross-pollination of ideas.  The result, in IT infrastructure, is an approach built on customization and stovepipes, in which solutions tend to be expensive and marked by technological dead ends, single-point failures, and a high rate of IT project failure.

Among a very distinguished group of project management specialists, one of the presentations that really impressed me with its refreshingly candid approach was given by Dave Burgess of the U.S. Navy Naval Air Systems Command (NAVAIR), entitled “Integrated Project Management: ‘A View from the Front Line’.”  The charts from his presentation will be posted on the site (link in the first line of this post).  Among the main points that I took from his presentation are:

a.  The time from development to production of an aircraft has increased significantly since the 1990s.  The reason for this condition is implicit in the way that PM is executed.  More on this below in items d and e.

b.  FY 2015 promises an extremely tight budget outlook for DoD.  From my view of his chart, it is almost as if FY 2015 is the budgetary year that Congress forgot.  Supplemental budgets somewhat make up for the shortfalls before and after FY 2015, but the next FY is the year that the austerity deficit-hawk pigeons come home to roost.  From a PM perspective this represents a challenge to program continuity and sustainability.  It forces choices within programs that may leave the program manager with a choice of the lesser of two evils.

c.  Beyond the standard metrics provided by earned value management, it is necessary for program and project managers to identify risks, which requires leading indicators to inform future progress.

This is especially important given the external factors of items a and b above.  Among his specific examples, Mr. Burgess demonstrated the need for integration of schedule and cost in the development of leading indicators.  Note that I put schedule ahead of cost in interpreting his data; in looking at his specific examples there was an undeniable emphasis on the way in which schedule drives performance, given that it is a measure of the work that needs to be accomplished with (hopefully) an assessment of the resources necessary to accomplish the tasks in that work.  For example, Mr. Burgess used bow waves to illustrate that the cumulative scope of the effort, as the program ramps up over time, will overcome the plan if execution is poor.  This is as much a law of physics as any mathematical proof.  No sky-hooks exist in real life.
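A toy calculation, with invented numbers, illustrates the bow wave dynamic: when accomplishment lags the plan in each period, the unaccomplished scope rolls forward and compounds, leaving later periods to absorb an ever-larger backlog.

```python
# Toy illustration (numbers are invented, not from Mr. Burgess's charts):
# work planned per period vs. work actually accomplished.
planned = [10, 20, 30, 40]
performed = [8, 15, 22, 30]

backlog = []
carry = 0.0
for p, a in zip(planned, performed):
    carry += p - a        # unaccomplished work rolls forward to later periods
    backlog.append(carry)

# The backlog grows every period -- the "bow wave" that execution
# must eventually absorb, or the plan fails.
```

Running this, the carried-forward work grows each period even though the per-period shortfall looks modest, which is exactly why bow waves are a useful leading indicator.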

From my perspective in PM, cost is a function of schedule.  All too often I have seen cases where the performance measurement baseline (PMB) is developed apart from, and is poorly informed by, the integrated master schedule (IMS).  This is not only foolhardy, it is wrong.  The illogic of doing so should be self-evident, but the practice persists.  It exists mostly because of the technological constraints imposed by stovepiped PM IT systems, which then drive both practice and the perception of what industry views as possible.

Thus, this is not an argument in favor of the status quo.  It is, instead, an argument to dump the IT tool vendors who refuse to update their products and whose only interest is to protect market share and keep their proprietary solutions sticky.  The concepts of sunk costs vs. prospective costs are useful in this discussion.  Given the reality of the tight fiscal environment in place, and the greater constraints to come, the program or project manager faces a choice: keep paying recurring expenses for outdated technologies to support their management systems, or select and deploy new ones that will reduce overhead and provide better and quicker information.  The latter course allows them to keep people who, despite the economic legend that robots are taking our jobs, still need to make decisions in effectively managing the program and project.  It takes a long time and a lot of money to develop an individual with the skills necessary to manage a complex project of the size discussed by Mr. Burgess, while software technology generations average two years.  I’d go with keeping the people and seeking new, innovative technologies on a regular basis, since the former will always be hard and expensive (if done right) while the latter, for the foreseeable future, will continue on a downward cost slope.  I’ll expand on this in a later post.

d.  There is a self-reinforcing, dysfunctional systemic problem that contributes to the condition described in item “a”: the disconnect between most likely estimates of the cost of a system and the penchant of the acquisition system to award based on the lowest technically acceptable bid.  This encourages unrealistic expectations in forming the plan once the contract is awarded; the plan is then eventually modified, through various change rationales, in ways that tend to bring the total scope back to the original internal most likely estimate.  Thus, Procuring Contracting Officers (PCOs) are allowing contractors to buy in–a condition contrary to contracting guidance–and it is adversely affecting both budget and program planning.

e.  All too often, program managers spend time denying risk in lieu of managing it.  By denying risk, program and project managers focus on a few elements of performance that they believe indicate how their efforts are performing.  This perception is reinforced by the limited scope of the information that senior personnel in the organization look at in their reports.  It is then no surprise that there are “surprises” when reality catches up with the manager.

It is useful to note the difference between program and project management in the context of the A&D vertical.  Quite simply, in this context, a program manager is responsible for all of the elements of the system being deployed.  For the U.S. Navy this includes the entire life-cycle of the system, including logistics and sustainment after deployment.  Project management in this case covers one element of the system–for example, development and production of a radar–though there are other elements of the program in addition to the radar.  My earlier posts on the ACA program–as opposed to the healthcare.gov site–are another apt example of these concepts in practice.

Thus, program managers, in particular, need information on all of the risks before them.  This would include not only cost and schedule risk, which I would view as project management level indicators, but also financial and technical risk at the program level.  Given the discussions this past week, it is apparent that our more familiar indicators, while useful, require a more holistic set of views that both expand and extend our horizon, while keeping that information “actionable.”  This means that our IT systems used to manage our business systems require more flexibility and interoperability in supporting the needs of the community.


You Can’t Always Get What You Want — Requirements and Rubber Baselines

I’ve received some nice feedback on my post “Guvmint Stuff.”  One of the e-mails came from my colleague, Dan Zosh, with whom I previously worked at a very successful software company that was absorbed by a much larger one.  Dan makes a good point regarding the usefulness of EVM and why so much of the GovCon project management community views it as a reporting tool rather than a project management tool: the instability of program requirements and the PMB.

Years ago when I worked on Navy staff in D.C. my immediate military boss would lecture the requirements types on the fact that we live in a world of finite resources and that they couldn’t always get what they wanted–and certainly that the wrong time to start rethinking things was when the program was nearing the end of technical development and had entered its final Preliminary Design Review (PDR).

I applied this same principle as a PM of software projects on two separate occasions.  Had I allowed each review by the internal target customer base to adjust what “done” meant, we would never have produced a solution to address a particular set of needs.  “We are pushing for the baseline functionality in version 1.0, and revisions will be made in 1.1,” was my response.  That was because I was aware that a very high percentage of software development projects in both private industry and government fail.  In fact, one group looking at the history of 50,000 commercial and government software projects found that of 3,555 projects between 2003 and 2012, only 6.4% were successful, 52% were challenged–meaning that they either didn’t meet expectations or ran over budget and/or schedule–and 41.4% completely failed.  This is in line with what we knew to be true 20 years ago.

The reason for this is that we must know where we are going and what “done” means.  My colleague Glen Alleman has been waging his own dialogue with the #noestimates crowd and the cultists from the far edges of the Agile side, who say that you do not have to assess progress because you don’t have an estimate–what with two-week sprints and all that.  But my own experience with Agile proves that there is an overall plan and that, yes, even if you don’t like it, your customer doesn’t have unlimited resources and really expects you to come in within budget and the allotted time.

There is much more in common between IT and A&D than either discipline would like to admit.  After all, A&D is quite heavy in software development and deployment, which is essential to most modern systems.  No doubt there are good reasons why requirements change, but the recent trend has been the shifting baseline, in which variances are lost as the project is reset to some other definition of “done,” each revision contributing to cost growth.

In that environment, EVM really does become only a reporting tool, viewed as an inconvenient historical document that records management by contingency.  Is this, then, a project management problem or a contract management problem?

I would posit that this is a failure of control mostly on the contract management side of the ledger.  Mr. Alleman’s discipline can be applied perfectly, but as long as the underlying definition of the project space is under constant revision, his measurements will continue to shift.  Some on the Agile side seem to have that very agenda in mind: move the goalposts, and one never has to be accountable.

While it makes eminent sense to trade off new technologies for old when development proves that there are more economical pathways to achieving the technical goals of the project, this is rarely the case.  Contracting officers routinely allow buy-ins to contracts knowing that the performance specification may be weak, so that down the line the contract will be modified significantly after award.  Under R&D cost-plus contracts this is somewhat baked into the nature of the contract type and the efforts being undertaken–which is the reason for the periodic reviews–but it also leads to abuses such as zeroing variances through tradeoffs between control accounts, rubber baselines, and harvesting contracts for funding.  These practices would not occur without the approval of the contracting officer, but it is a fine line between allowing internal adjustments within scope and changing the definition of the scope.

The issue, then, is a systemic one, and the response should be a multilateral, multidisciplinary approach to contract and project management.  This includes stronger discipline in financial management to ensure that designated funds are used for the scope intended; realistic estimates; well-written performance specifications; and a contracting corps that understands that variances are never zeroed and contracts never harvested, since the funds belong to the budget holders.  More importantly, it requires understanding that a less expensive alternative proposal, in which the technical approach to the performance specification is not fully fleshed out and is significantly below the most realistic, risk-adjusted estimate, is outside the competitive range.