The Revolution Will Not Be Televised — The Sustainability Manifesto for Projects

While I was doing stuff and living life (which seems to take me away from writing), a good many interesting things were written on project management.  The very insightful Dave Gordon at his blog, The Practicing IT Project Manager, provides a useful weekly list of the latest contributions to the literature that are of note.  If you haven’t checked it out please do so–I recommend it highly.

While I was away Dave posted a link to an interesting piece on the concept of sustainability in project management.  Along those lines three PM professionals have proposed a Sustainability Manifesto for Projects.  As Dave points out in his own post on the topic, it rests on three basic principles:

  • Benefits realization over metrics limited to time, scope, and cost
  • Value for many over value of money
  • The long-term impact of our projects over their immediate results

These are worthy goals and no one needs me to rain on their parade.  I would like to see these ethical principles, which is what they really are, incorporated into how we all conduct ourselves in business.  But then there is reality–the “is” over the “ought.”

For example, Dave and I have had some correspondence regarding the nature of the marketplace in which we operate through this blog.  Some time ago I wrote a series of posts here, here, and here providing an analysis of the markets in which we operate both in macroeconomic and microeconomic terms.

This came in response to one of my colleagues making the counterfactual assertion that we operate in a “free market” based on the concept of “private enterprise.”  Apparently, such just-so stories are lies we have to tell ourselves to make the hypocrisy of daily life bearable.  But, to bring the point home, in talking about the concept of sustainability, what concrete measures will the authors of the manifesto bring to the table to counter the financialization of American business that has occurred over the past 35 years?

For example, the news lately has been replete with stories of companies moving plants from the United States to Mexico.  This despite rising and record corporate profits during a period of stagnating median working class incomes.  Free trade and globalization have been cited as the cause, but this involves more hand waving and the invocation of mantras, rather than analysis.  There have also been the predictable invocations of the Ayn Randian cult and the pseudoscience* of Social Darwinism.  Those on the opposite side of the debate characterize things as a morality play, with the public good versus greed being the main issue.  All of these explanations miss their mark, some more than others.

An article setting aside a few myths was recently published by Jonathan Rothwell at Brookings, and came to me via Mark Thoma’s blog: “Make elites compete: Why the 1% earn so much and what to do about it”.  Rothwell looks at the relative gains of the market over the last 40 years and finds that corporate profits, while doing well, have not been the driver of inequality that Robert Reich and other economists would have them be.  In looking at another myth that has been promulgated by Greg Mankiw, he finds that the rewards of one’s labors are not related to any special intelligence or skill.  On the contrary, one’s entry into the 1% is actually related to what industry one chooses to enter, regardless of all other factors.  This disparity is known as a “pay premium”.  As expected, petroleum and coal products, financial instruments, financial institutions, and lawyers are at the top of the pay premium.  What is not at the top, against all expectations of popular culture and popular economic writing, is the IT industry–hardware, software, etc.  Though they are the poster children of new technology, Bill Gates, Mark Zuckerberg, and others are the exception to the rule in an industry that is marked by a 90% failure rate.  Our most educated and talented people–those in science, engineering, the arts, and academia–are poorly paid, with negative pay premiums associated with their vocations.

The financialization of the economy is not a new or unnoticed phenomenon.  Kevin Phillips, in Wealth and Democracy, which was written in 2003, noted this trend.  There have been others.  What has not happened as a result is a national discussion on what to do about it, particularly in defining the term “sustainability”.

For those of us who have worked in the acquisition community, the practical impact of financialization and de-industrialization has made logistics challenging, to say the least.  As a young contract negotiator and Navy Contracting Officer, I was challenged to support the fleet whenever any kind of fabrication or production was involved, especially for non-stocked machined spares of any significant complexity or size.  Oftentimes my search would find that the company that manufactured the items was out of business, its pieces sold off during Chapter 11, with most of the remaining production work for those items done seasonally out of the country.  My “out” at the time–during the height of the Cold War–was to take the technical specs, which were paid for and therefore owned by the government, to one of the Navy industrial activities for fabrication and production.  The skillset for such work was still fairly widespread, supported by the quality control provided by a fairly well-unionized and trade-based workforce–especially among machinists and other skilled workers.

Given the new and unique ways judges and lawyers have applied privatized IP law to items financed by the public, such opportunities to support our public institutions and infrastructure, as I was able to do, have been largely closed off.  Furthermore, the places to send such work, where it is still possible, have become vanishingly few.  Perhaps digital printing will be the savior for manufacturing that it is touted to be.  What it will not do is stitch back the social fabric that has been ripped apart in communities hollowed out by the loss of their economic base, which, when replaced, comes with lowered expectations and quality of life–and often shortened lives.

In the end, though, such “fixes” benefit a shrinking few at the expense of the democratic enterprise.  Capitalism did not exist when the country was formed, despite the assertions of polemicists who link the economic system to our democratic government.  Smith did not write his pre-modern scientific tract until 1776, much of what it meant lay years off into the future, and its relevance, given what we’ve learned over the last 240 years about human nature and our world, is up for debate.  What was not part of such a discussion back then–and would not have been understood–was the concept of sustainability.  Sustainability in the study of healthy ecosystems usually involves the maintenance of great diversity and the flourishing of life that denotes health.  This is science.  Economics, despite Keynes and others, is still largely rooted in 18th and 19th century pseudoscience.

I know of no fix or commitment to a sustainability manifesto that includes global, environmental, and social sustainability that makes this possible short of a major intellectual, social, or political movement willing to make a long-term commitment to incremental, achievable goals toward that ultimate end.  Otherwise it’s just the mental equivalent of camping out in Zuccotti Park.  The anger we note around us during this election year of 2016 (our year of discontent) is a natural human reaction to the end of an idea that has outlived its explanatory power and, therefore, its usefulness.  Which way shall we lurch?

The Sustainability Manifesto for Projects, then, is a modest proposal.  It may also simply be a sign of the times, albeit a rational one.  As such, it leaves open a lot of questions, and most of these questions cannot be addressed or determined by the people to whom it is targeted: project managers, who are usually simply employees of a larger enterprise.  People behave as they are treated–responding to the incentives and disincentives presented to them, which oftentimes are not completely apparent at the conscious level.  Thus, I’m not sure if this manifesto hits its mark, or even the right one.

*This term is often misunderstood by non-scientists.  Pseudoscience means non-science, just as alternative medicine means non-medicine.  If any of the various hypotheses of pseudoscience are found true, given proper vetting and methodology, that proposition would simply be called science.  Just as alternative methods of treatment, if found effective and consistent, given proper controls, would simply be called medicine.

Measure for Measure — Must Read: Dave Gordon Is Looking for Utilitarian Metrics at AITS.org

Dave Gordon at his AITS.org blog deals with the issue of metrics and what makes them utilitarian, that is, “actionable.”  Furthermore, at his Practicing IT Project Management blog, he challenges those in the IT program management community to share real-life examples.  The issue of measures, and whether they pass the “so-what?” test, is an important one, since chasing, and drawing improper conclusions from, the wrong ones is a waste of money and effort at best, and can lead one to make very bad business decisions at worst.

In line with Dave’s challenge, listed below are the types of metrics (or measures) that I often come across.

1.  Measures of performance.  This type of metric is characterized by actual performance against a goal for a physical or functional attribute of the system being developed.  It can be measured across time as one of the axes, but the ultimate benchmark for what is being measured is the requirement or goal.  Technical performance measurements often fall into this category, though I have seen instances where TPMs are listed in their own category.  I would argue that such separation is artificial.

2.  Measures of progress.  This type of metric is often time-based, measured against a schedule or plan.  Measurements of schedule variance, in terms of time or of expenditure rates against a budget, often fall into this category (a sketch illustrating several of these measures follows this list).

3.  Measures of compliance.  This type of metric measures systemic conditions that must be met; if they are not, that indicates a fatal error in the integrity of the system.

4.  Measures of effectiveness.  This type of metric tracks against those measures related to the operational objectives of the project, usually specified under particular conditions.

5.  Measures of risk.  This type of metric measures quantitatively the effects of qualitative, systemic, and inherent risk.  Oftentimes qualitative and quantitative risk are separated, the distinction resting on the means of identification and whether that means is recorded indirectly or directly.  But, in reality, they are measuring different aspects and causes of the same phenomenon.

6.  Measures of health.  This type of metric measures the relative health of a system against a set of criteria.  In medicine there is a set of routine measures for biological subjects.  Measures of health distinguish themselves from measures of compliance in that any variation, while indicative of a possible problem, is not necessarily fatal.  Thus, a range of acceptable indicators, or even some variation within the indicators, can be acceptable.  So while these measures may point to a system issue, borderline areas may warrant additional investigation.
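
To make a few of these distinctions concrete, here is a minimal sketch, using purely hypothetical numbers of my own invention, that computes one example each from three of the categories above: a measure of performance (a technical attribute against its requirement), a measure of progress (schedule variance and the schedule performance index from standard earned value quantities), and a measure of health (an indicator judged against an acceptable band rather than a hard pass/fail).

    # Hypothetical illustration of three of the metric categories above.
    # All numbers are invented for the example.

    # 1. Measure of performance: actual value of a technical attribute vs. its requirement.
    required_range_nm = 1200      # requirement (goal)
    demonstrated_range_nm = 1150  # measured performance to date
    performance_variance = demonstrated_range_nm - required_range_nm  # negative = shortfall

    # 2. Measure of progress: schedule variance from earned value quantities.
    planned_value = 500_000.0   # budgeted cost of work scheduled to date
    earned_value = 450_000.0    # budgeted cost of work actually performed
    schedule_variance = earned_value - planned_value           # SV = EV - PV
    schedule_performance_index = earned_value / planned_value  # SPI = EV / PV

    # 3. Measure of health: an indicator checked against an acceptable band, where
    # falling outside the band warrants investigation rather than automatic failure.
    staffing_ratio = 0.87                  # actual staff on board / staff planned
    acceptable_band = (0.90, 1.10)

    def health_status(value, band):
        low, high = band
        return "within band" if low <= value <= high else "investigate"

    print(f"TPM variance: {performance_variance} nm")
    print(f"SV: {schedule_variance:,.0f}  SPI: {schedule_performance_index:.2f}")
    print(f"Staffing health: {health_status(staffing_ratio, acceptable_band)}")

The point of the sketch is the taxonomy, not the arithmetic: the same project data supports several distinct types of measures, each benchmarked against a different kind of criterion.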

In any project management system, there are often correct and incorrect ways of constructing these measures.  The basis for determining whether they are correct, I think, is whether the end-result metric possesses materiality and traceability to a particular tangible state or criterion.  According to Dave and others, a test of a good metric is whether it is “actionable”.  This is certainly a desirable characteristic, but I would suggest it is not a necessary one, and that it is contained within materiality and traceability.

For example, some metrics are simply indicators, which suggest further investigation; others suggest an action when viewed in combination with others.  There is no doubt that the universe of “qualitative” measures is shrinking as we have access to bigger and better data that provide us with quantification.  Furthermore, as stochastic and other mathematical tools develop, we will have access to more sophisticated means of measurement.  But for the present there will continue to be some of these non-quantifiable measures, if only because, with experience, we learn that there are dimensions of the behavior of complex adaptive systems over time that are yet to be fully understood, much less measured.

I also do not mean for this to be an exhaustive list.  Others that have some overlap with what I’ve listed come to mind, such as measures of efficiency (different from effectiveness and performance in some subtle ways), measures of credibility or fidelity (which have some overlap with measures of compliance and health, but really point to a measurement of measures), and measures of learning or adaptation, among others.

When You’re a Jet You’re a Jet all the Way — Software as a Change Agent for Professional Development

Earlier in the week Dave Gordon at his blog responded to my post on data normalization and rightly introduced the need for data rationalization.  I had omitted the latter concept in my own post, but strongly implied that the two were closely aligned within my broad definition of normalization, which went beyond the boundaries of eliminating redundancies.  In the end, thinking about this, I prefer Dave’s dichotomy because it more clearly defines what we are doing.

Later in the week I found myself elaborating on these issues in discussions with customers and other professionals in the project management discipline.  In the projects in which I am involved, what I have found is that the process of normalizing and rationalizing data, even historical data which, contrary to Dave’s assertion, can be maintained–at least in my business–acts as a change agent in defining the agnostic characteristics of the type of data being normalized and rationalized.

What I mean here is that, for instance, a time-phased CPM schedule that eventually becomes an integrated master schedule has an analogue.  For years we have been told, mostly by marketing types working for software manufacturers, that there is a secret sauce that they provide that cannot be reconciled against their competitors’.  As a result, entire professional organizations, conferences, white papers, and presentations have been devoted to proving this assertion.  When looking at the data, however, the assertion is invalid.

The key differentiator between CPM scheduling applications is the optimization engine.  That is the secret sauce and the black box where the valuable IP lies.  It is the algorithms in the optimization engine that identify for us those schedule activities that are on the critical and near-critical paths.  But when you run these engines side by side on the same schedule, their results are well within one standard deviation of one another.
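
The reason the results converge is that the underlying arithmetic of the critical path method is common to all of these engines.  Below is a bare-bones sketch of the forward and backward pass on a toy network of my own invention; the vendors’ real optimization work sits on top of this core in the form of calendars, constraints, lag and lead types, and resource leveling, none of which is shown here.

    # Minimal CPM forward/backward pass on a toy network.
    # Activities and durations are hypothetical.

    durations = {"A": 3, "B": 5, "C": 2, "D": 4}
    predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    # Forward pass: earliest start/finish.
    early_start, early_finish = {}, {}
    for act in ["A", "B", "C", "D"]:  # already in dependency order for this toy case
        early_start[act] = max((early_finish[p] for p in predecessors[act]), default=0)
        early_finish[act] = early_start[act] + durations[act]

    project_finish = max(early_finish.values())

    # Backward pass: latest finish/start.
    successors = {a: [s for s, preds in predecessors.items() if a in preds] for a in durations}
    late_finish, late_start = {}, {}
    for act in ["D", "C", "B", "A"]:  # reverse dependency order
        late_finish[act] = min((late_start[s] for s in successors[act]), default=project_finish)
        late_start[act] = late_finish[act] - durations[act]

    # Total float; activities with zero float are on the critical path.
    for act in durations:
        total_float = late_start[act] - early_start[act]
        print(act, "float =", total_float, "(critical)" if total_float == 0 else "")

Run two conforming engines over the same network and they will disagree, if at all, only at the margins where their proprietary refinements differ.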

Keep in mind that I’m talking about differences in data related to normalization and rationalization and whether these differences can be reconciled.  There are other differences in features between the applications that do make a difference in their use and functionality: whether they can lock down a baseline, manage multiple baselines, prevent future work from being planned and executed in the past (yes, this happens), handle hammocks, scale properly, etc.  Because of these functional differences the same data may have been given a different value in the table or the file.  As Dave Gordon rightly points out, reconciling what on the surface are irreconcilable values requires specialized knowledge.  Well, if you have that specialized knowledge then you can achieve what otherwise seems impossible.  Once you achieve this “impossible” feat, it quickly becomes apparent that the features and functions involved are based on a very limited number of values that are common across CPM scheduling applications.
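As a sketch of what that reconciliation looks like in practice, consider two applications that store the same schedule fact under different field names and coded values.  The field names and status codes below are invented for illustration, not taken from any particular vendor; rationalization reduces both records to one canonical representation so the data can be compared side by side.

    # Hypothetical sketch of rationalizing the same schedule fact exported by two tools.
    # Field names and status codes are invented; real exports differ by vendor.

    record_from_tool_a = {"task_id": "1010", "status_cd": "IP", "pct_cmp": 45}
    record_from_tool_b = {"activity": "1010", "act_status": "In Progress", "physical_pct": 0.45}

    # Canonical form: activity_id, status in {NOT_STARTED, IN_PROGRESS, COMPLETE}, percent 0-100.
    STATUS_MAP_A = {"NS": "NOT_STARTED", "IP": "IN_PROGRESS", "CP": "COMPLETE"}
    STATUS_MAP_B = {"Not Started": "NOT_STARTED", "In Progress": "IN_PROGRESS", "Completed": "COMPLETE"}

    def rationalize_a(rec):
        return {"activity_id": rec["task_id"],
                "status": STATUS_MAP_A[rec["status_cd"]],
                "percent_complete": float(rec["pct_cmp"])}

    def rationalize_b(rec):
        return {"activity_id": rec["activity"],
                "status": STATUS_MAP_B[rec["act_status"]],
                "percent_complete": rec["physical_pct"] * 100.0}

    print(rationalize_a(record_from_tool_a))
    print(rationalize_b(record_from_tool_b))
    # Once both records share the canonical form, differences between the tools
    # become differences in data, not differences in representation.

The specialized knowledge is in building the mappings; once they exist, the “irreconcilable” values collapse into a small common vocabulary.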

This should not be surprising for those of you out there who have been doing this a long time.  Back in the 1980s we would use visual display boards to map out short segments of the schedule.  We would manually construct schedules in very rudimentary (by today’s standards) mainframe computers and get very long dot-matrix representations of the schedule to tape to the “War Room” walls.  The resources, risks, etc. had to be drawn on the schedule.  This manual process required an understanding of CPM schedule construction similar to that of someone still using long division today.  There was actually a time when people had to memorize their log and square root tables.  It was not very efficient, but the deep understanding of the analogue schedule has since been lost with the introduction of new technology.  This came to mind when I saw on LinkedIn a question about the types of questions that should be asked of a master scheduler in an interview.

As a result of new technology, schedulers aligned themselves into camps based on the application they selected, or that was selected for them, in their job.  Over time I have seen brand loyalty turn into partisanship.  Once again, this should not be surprising.  If you have spent ten years of your career on a very popular scheduling application, anything that may undermine your investment in that choice–and which makes employment possible–will be deemed a threat.

I first came upon this behavior years ago when I was serving as CIO for a project management organization.  Some PMOs could not share information–not even e-mail and documents–because most were using PCs and some were using Macs.  The problem was that the key PMO was using Macs.  This was before Microsoft and Apple got together and solved this for us.  Needless to say, this undermined organizational effectiveness.  My attempt to get everyone on the same page in terms of operating system compatibility sparked a significant backlash.  Luckily for me, Microsoft soon introduced its first solution to address this issue.  So, in the end, the “Macintites”, as we good-naturedly called them, could use their Macs for business common to other parts of the organization.

This almost cultish behavior finds itself in new places today: in the iPhone and Droid wars, in the use of Agile, and among CPM scheduling application partisans.  It is true that those of us in the software industry certainly want to see brand loyalty.  It is one of the key measures of success in proving the product’s value and effectiveness.  But it need not undermine the fact that a scheduler is a key specialist in the project management discipline.  If you are a Jet, you don’t need to be a Jet all the way.

Since creating generic analogues of schedules from submitted third-party data, I have found that insights into project performance can be noted that previously were not available.  The power of digitization, along with normalization and rationalization, allows the data to be effectively integrated at the proper point of intersection with other dimensions of project performance, such as cost performance and risk.  Freed from the shackles of having to learn the specific idiosyncrasies of particular applications, the deep understanding of scheduling is being reintroduced.  This has been a long time in coming.
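
That point of intersection is usually nothing more exotic than a shared key, such as a WBS element or control account, at which normalized schedule and cost records can be joined.  The sketch below uses hypothetical records and field names to show the idea.

    # Hypothetical sketch: joining normalized schedule and cost data at a common WBS key.

    schedule_records = [
        {"wbs": "1.2.1", "activities_total": 40, "activities_late": 6},
        {"wbs": "1.2.2", "activities_total": 25, "activities_late": 0},
    ]
    cost_records = [
        {"wbs": "1.2.1", "bcwp": 300_000.0, "acwp": 340_000.0},
        {"wbs": "1.2.2", "bcwp": 150_000.0, "acwp": 145_000.0},
    ]

    cost_by_wbs = {rec["wbs"]: rec for rec in cost_records}

    for sched in schedule_records:
        cost = cost_by_wbs.get(sched["wbs"])
        if cost is None:
            continue  # no cost record at this intersection point
        cpi = cost["bcwp"] / cost["acwp"]               # cost performance index = EV / AC
        late_share = sched["activities_late"] / sched["activities_total"]
        print(f"{sched['wbs']}: CPI={cpi:.2f}, late activities={late_share:.0%}")

Nothing in the join depends on which applications produced the schedule or the cost data; it depends only on the data having been normalized and rationalized to the same keys first.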

One-Trick Pony — Software apps and the new Project Management paradigm

Recently I have been engaged in an exploration and discussion regarding the utilization of large amounts of data and how applications derive importance from that data.  In an on-line discussion with the ever insightful Dave Gordon, I first postulated that we need to transition into a world where certain classes of data are open so that the qualitative content can be normalized.  This is what for many years was called the Integrated Digital Environment (IDE for short).  Dave responded with his own post at the AITS.org blogging alliance, countering that while such standards are necessary in very specific and limited applications, modern APIs provide most of the solution.  I then responded directly to Dave here, countering that IDE is nothing more than data neutrality.  Then, also at AITS.org, I expanded on what I proposed to be a general approach to understanding big data, noting the dichotomy between software approaches that organize the external characteristics of the data to generalize systems and note trends, and those that are focused on the qualitative content within the data.

It should come as no surprise then, given these differences in approaching data, that we also find similar differences in the nature of applications that are found on the market.  With the recent advent of on-line and hosted solutions, there are literally thousands of applications in some categories of software that propose to do one thing with data, or that are focused, one-trick-pony applications that can be mixed and matched to somehow provide an integrated solution.

There are several problems with this sudden explosion of applications of this nature.

The first is in the very nature of the explosion.  This is a classic tech bubble, albeit limited to a particular segment of the software market, and it will soon burst.  As soon as consumers find that all of that information traveling over the web with the most minimal of protections has been compromised by the next trophy hack, or that too many software providers have entered the market prematurely–not understanding the full needs of their targeted verticals–the bubble will burst just as the last one did in 2000.  It only requires a precipitating event that triggers a tipping point.

You don’t have to take my word for it.  Just type a favorite keyword into your browser now (and I hope you’re using a VPN while doing it) for a type of application for which you have a need–let’s say “knowledge base” or “software ticket systems.”  What you will find is that there are literally hundreds if not thousands of apps built for this function.  You cannot test them all.  Basic information economics, however, dictates that you must invest some effort in understanding the capabilities and limitations of the systems on the market.  Surely there are a couple of winners out there.  But basic economics also dictates that 95% of those presently in the market will be gone in short order.  Being the “best” or the “best value” does not always win in this winnowing out.  Certainly chance, the vagaries of your standing in the search engine results, industry contacts–virtually any number of factors–will determine who is still standing and who is gone a year from now.

Aside from this obvious problem with the bubble itself, the approach of the application makers harkens back to an earlier generation of one-off applications that attempt to achieve integration through marketing while actually achieving, at best, only old-fashioned interfacing.  In the world of project management, for example, organizations can little afford to revert to the division of labor, which is what would be required to align with these approaches in software design.  It’s almost as if, having made their money in an earlier time, software entrepreneurs cannot extend themselves beyond their comfort zones in taking advantage of the last TEN software generations that provide new, more flexible approaches to data optimization.  All they can think to do is party like it’s 1995.

The new paradigm in project management is to get beyond the traditional division of labor.  For example, is scheduling such a highly specialized discipline, rising to the level of a profession, that it stands separate from all of the other aspects of project management?  Of course not.  Scheduling is a discipline–a sub-specialty, actually–that is inextricably linked to all other aspects of project management in a continuum.  The artifacts of the process of establishing project systems and controls constitute the project itself.

No doubt there are entities and companies that still ostensibly organize themselves into specialties as they did twenty years ago: cost analysts, schedule analysts, risk management specialists, among others.  But given that the information from these systems–schedule, cost management, project financial management, risk management, technical performance, and all the rest–can be integrated at the appropriate level of their interrelationships to provide us a cohesive, holistic view of the complex system that we call a project, is such division still necessary?  In practice the industry has already moved to position itself for integration, realizing the urgency of making the shift.

For example, to utilize an application to query cost management information in 1995 was a significant achievement during the first wave of software deployment that mimicked the division of labor.  In 2015, not so much.  Introducing a one-trick pony EVM “tool” in 2015 is laziness–hoping to turn back the clock in ignoring the obsolescence of such an approach–regardless of which slick new user interface is selected.

I recently attended a project management meeting of senior government and industry representatives.  During one of my side sessions I heard a colleague propose the discipline of Project Management Analyst in lieu of previously stove-piped specialties.  His proposal is a breath of fresh air in an industry that develops and manufactures the latest aircraft and space technology, but has hobbled itself with systems and procedures designed for an earlier era that no longer align with the needs of doing business.  I believe the timely deployment of systems has suffered as a result during this period of transition.

Software must lead, and accelerate the transition to the new integration paradigm.

Thus, in 2015 the choice is not between data that adheres to conventions of data neutrality and data that is accessed via APIs; it is in favor of applications that do both.

It is not between different hard-coded applications that provide the old “what-you-see-is-what-you-get” approach.  It is instead between such limited hard-coded applications and those that provide flexibility, so that business managers can choose from a nearly unlimited palette of choices of how and which data, converted into information, is made available to the user or classes of users based on their role and need to know, aggregated at the appropriate level of detail for the consumer to derive significance from the information being presented.

It is not between “best-of-breed” and “mix-and-match” solutions that leverage interfaces to achieve integration.  It is instead between such solution “consortiums”, which drive up implementation and sustainment costs and bring with them high overhead, and those that achieve integration by leveraging the source of the data itself, reducing the number of applications that need to be managed and allowing data to be enriched in an open and flexible environment, achieving its transformation into useful information.

Finally, the choice isn’t among applications that save their attributes in a proprietary format so that the customer must commit themselves to a proprietary solution.  Instead, it is between such restrictive applications and those that open up data access, clearly establishing that it is the consumer that owns the data.

Note: I have made minor changes from the original version of this post for purposes of clarification.

Over at AITS.org Dave Gordon takes me to task on data normalization — and I respond with Data Neutrality

Dave Gordon at AITS.org takes me to task on my post regarding recommending using common schemas for certain project management data.  Dave’s alternative is to specify common APIs instead.   I am not one to dismiss alternative methods of reconciling disparate and, in their natural state, non-normalized data to find the most elegant solution.  My initial impression, though, is: been there, done that.

Regardless of the method used to derive significance from disparate sources of data of a common type, one still must obtain the cooperation of the players involved.  The ANSI X12 standard has been in use in the transportation industry for quite some time and has worked quite well, leaving the preference for a proprietary solution up to the individual shippers.  The rule has been, however, that if you are going to write solutions for that industry, you need to allow the shipping info needed by any receiver to conform to a particular format, so that it can be read regardless of the software involved.

Recently the U.S. Department of Defense, which had used certain ANSI X12 formats for particular data for quite some time, has published and required a new set of schemas for a broader set of data under the rubric of UN/CEFACT XML.  Thus, it has established the same approach as the transportation industry: taking an agnostic stand regarding software preferences while specifying that submitted data must conform to a common schema, so that one proprietary file type is not given preference over another.
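
The practical effect of a published schema is that any consumer can read the submission without the producing tool.  As a sketch, and using invented element names rather than the actual UN/CEFACT or X12 structures, reading such an application-agnostic file requires nothing more than a standard XML parser:

    # Hypothetical sketch of reading an application-agnostic submission file.
    # The element names below are invented; the actual UN/CEFACT XML schema
    # defines its own element names and structure.
    import xml.etree.ElementTree as ET

    submission = """
    <PerformanceReport>
      <ReportingPeriod end="2015-09-30"/>
      <ControlAccount id="1.2.1">
        <BCWS>320000</BCWS>
        <BCWP>300000</BCWP>
        <ACWP>340000</ACWP>
      </ControlAccount>
    </PerformanceReport>
    """

    root = ET.fromstring(submission)
    for account in root.findall("ControlAccount"):
        bcwp = float(account.findtext("BCWP"))
        acwp = float(account.findtext("ACWP"))
        print(account.get("id"), "CPI =", round(bcwp / acwp, 2))
    # Any file conforming to the published schema can be read the same way,
    # regardless of which application produced it.

That is the whole argument for the common schema in miniature: the consumer needs the schema, not the vendor’s application, to make use of the data.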

A little background is useful.  In developing major systems, contractors are required to provide project performance data in order to ensure that public funds are being expended properly for the contracted effort.  This is the oversight responsibility portion of the equation.  The other side concerns project and program management.  Given the cost-plus contract type most often used, the government program management office, in cooperation with its commercial counterpart, looks to identify the manifestation of cost, schedule, and/or technical risk early enough to allow that risk to be handled as necessary.  Also at the end of this process, which is only now being explored, is the usefulness of years of historical data across contract types, technologies, and suppliers that can be used to benefit the public interest: demonstrating which contractors perform better, showing the inherent risk associated with particular technologies through parametric methods, and a host of other insights that can be derived through econometric project management trending and modeling.

So let’s assume that we can specify APIs for requesting the data in lieu of specifying that the customer receive an application-agnostic file that can be read by any application conforming to the data standard.  What is the difference?  My immediate observation is that it reverses the relationship of who owns the data.  In the case of the API, the proprietary application becomes the gatekeeper.  In the case of an agnostic file structure, it is open to everyone and the consumer owns the data.

In the API scenario, large players can do what they want to limit competition and extensions to their functionality.  Since they can black-box the manner in which data is structured, it also becomes increasingly difficult to make qualitative selections from the data.  The very example that Dave uses–the plethora of one-off mobile apps–consists of applications that usually must exist only within their own ecosystems.

So it seems to me that the real issue isn’t that Big Brother wants to control data structure.  What it comes down to is that specifying an open data structure prevents one solution provider, or a group of them, from controlling the market through restrictions on accessing data.  This encourages maximum competition and innovation in the marketplace–Data Neutrality.

I look forward to additional information from Dave on this issue.  None of the methods for achieving the end of Data Neutrality is an end in itself.  Any method that is less structured and provides more flexibility is welcome.  I’m just not sure that we’re there yet with APIs.

The Times They Are A-Changin’–Should PMI Be a Project Management Authority?

Back from a pretty intense three weeks taking care of customers (yes–I have those) and attending professional meetings and conferences.  There were some interesting developments regarding the latter that I will be writing about here, but while I was in transit I did have the opportunity to keep up with some interesting discussions within the project management community.

Central among those was an article by Anonymous that appeared on PM Hut a few weeks ago, positing the opinion that PMI Should No Longer Be an Authority on Project Management.  I don’t know why the author of the post decided that they had to remain anonymous.  I learned some time ago that one should not only state one’s opinion in as forceful terms as possible (backed up with facts), but also own that opinion and be open to the possibility that it could be wrong or require modification.  As stated previously in my posts, project management in any form is not received wisdom.

The author of the post makes several assertions summarized below:

a. That PMI, though ostensibly a not-for-profit organization, behaves as a for-profit organization, and aggressively so.

b.  The Project Management Body of Knowledge (PMBOK®) fails in its goal of being the definitive source for project management because it lacks continuity between versions, its prescriptions lack realism, and, particularly in regard to software project management, the relevant section has morphed into a hybrid of Waterfall and Agile methodology.

c.  The PMI certifications lack credibility and seem to be geared to what will sell, as opposed to what can be established as a bona fide discipline.

I would have preferred that the author had provided more concrete examples of these assertions, given their severity.  For example, going to the on-line financial statements of the organization, PMI does have a significant staff of paid personnel and directors, with total assets as of 2012 of over $300M.  Of this, about $267M is in investments.  Its total revenue that year was $173M.  It spent only $115M from its cashflow on its programs and another $4M on governance and executive management compensation.  Thus, it would appear that the non-profit basis of the organization has significantly deviated from its origins at the Georgia Institute of Technology.  Project management is indeed big business, with vesting and compensation of over $1M going to the President & CEO of the organization in 2012 alone.  Thus there does seem to be more than a little justification for the first of the author’s criticisms.

I also share the author’s other concerns, but a complete analysis is not available regarding either the true value of the PMBOK® or the value of a PMP certification.  I have met many colleagues who felt the need to obtain the latter, despite their significant practical achievements and academic credentials.  I have also met quite a few people with “PMP” after their names whose expertise is questionable, at best.  The certifications given by PMI and other PM organizations today remind me of a very similar condition several years ago, when the gold standard of credentials in certain parts of the IT profession were the Certified Novell Engineer (CNE) and Microsoft Certified Solutions Expert (MCSE) certifications.  They still exist in some form.  What was apparent as I took the courses and the examinations was that the majority of my fellow students had never set up a network.  They were, to use the pejorative among the more experienced members among us, “Paper CNEs and MCSEs.”  In interviewing personnel with “PMP” after their name I find a wide variation in expertise, so the quality of experience, with supporting education, tends to have more influence with me than some credential from one of the PM organizations.

Related to this larger issue of what constitutes a proper credential in our discipline, I came across an announcement by Dave Gordon at his The Practicing IT Project Manager blog of a Project Management Job Requirements study.  Dave references this study, by Noel Radley of SoftwareAdvice.com, which states that the PMP is preferred or specified by 79% of the 300 jobs used as the representative baseline for the industries studied.  Interestingly, the study showed that advanced education is rarely required or preferred.

I suspect that this correlates in a negative way with many of the results that we have seen in the project management community.  Basic economics dictates that people with advanced degrees (M.A. and M.B.A. grads) come with a higher price than those who only have baccalaureate degrees, their incomes having risen much more than those of four-year college grads.  It seems that businesses do not value that additional investment except by exception.

Additionally, I have seen the results of two studies presented in government forums over the past six months (but alas no links yet) in which the biggest risk to the project was identified to be the project manager.  Combined with the consistent failure, reported by widely disparate sources, of the overwhelming majority of projects to perform within budget and be delivered on time, this raises the natural question of whether those we choose to be project managers have the essential background to perform the job.

There seems to be a widely held myth that formal education is somehow unnecessary to develop a project manager–relegating what at least masquerades as a “profession” to the level of a technician or mechanic.  It is not that we do not need technicians or mechanics; it is that higher-level skills are needed to be a successful project manager.

This myth seems to be spreading, and to have originated from society as a whole, where the emphasis is on basic skills, constant testing, the elimination of higher-level thinking, and a narrowing of the curriculum.  Furthermore, college education, which was widely available to post-World War II generations well into the 1980s, is quickly becoming unaffordable for a larger segment of the population.  Thus, what we are seeing is a significant skills gap in the project management discipline, added to one that has already had an adverse impact on the ability of both government and industry to succeed.  For example, a paper from Calleam Consulting Ltd entitled “The Story Behind the High Failure Rates in the IT Sector” found that “17 percent of large IT projects go so badly that they can threaten the very existence of the company.”

From my experiences over the last 30+ years, when looking for a good CTO or CIO I will look to practical and technical experience and expertise with the ability to work with a team.  For an outstanding coder I look for a commitment to achieve results and elegance in the final product.  But for a good PM give me someone with a good liberal arts education with some graduate level business or systems work combined with leadership.  Leadership includes all of the positive traits one demands of this ability: honesty, integrity, ethical behavior, effective personnel management, commitment, and vision.

The wave of the future in developing our expertise in project management will be the ability to look at all of the performance characteristics of the project and its place in the organization.  This is what I see as the real meaning of “Integrated Project Management.”  I have attended several events since the beginning of the year focused on the project management discipline in which assertions were made that “EVM is the basis for integrated project management” or “risk is the basis for integrated project management” or “schedule is the basis for integrated project management.”  The speakers did not seem to acknowledge that the specialty that they were addressing is but one aspect of measuring project performance, and even less of a factor in measuring program performance.

I believe that this is a symptom of excess specialization and lack of a truly professional standard in project management.  I believe that if we continue to hire technicians with expertise in one area, possessing a general certification that simply requires one to attend conferences and sit in courses that lack educational accreditation and claim credit for “working within” a project, we will find that making the transition to the next evolutionary step at the PM level will be increasingly difficult.  Finally, for the anonymous author critical of PMI it seems that project management is a good business for those who make up credentials but not such a good deal for those with a financial stake in project management.

Note:  This post has been modified to correct minor grammatical and spelling errors.

Full disclosure:  The author has been a member of PMI for almost 20 years, and is a current member and former board member of the College of Performance Management (CPM).