New Directions — Fourth Generation apps, Agile, and the New Paradigm

The world is moving forward and Moore’s Law is accelerating in interesting ways on the technology side, which opens new opportunities, especially in software.  In the past I have spoken of the flexibility of Fourth Generation software, that is, software that doesn’t rely on structured hardcoding, but instead, is focused on the data to deliver information to the user in more interesting and essential ways.  I work in this area for my day job, and so using such technology has tipped over more than a few rice bowls.

The response from entrenched incumbents and those using similar technological approaches in the industry focused on “tools” capabilities has been to declare vices as virtues.  Hard-coded applications that require long-term development and structures, built on proprietary file and data structures, are, they declare, the right way to do things.  “We provide value by independently developing IP based on customer requirements,” they say.  It sounds very reasonable, doesn’t it?  Only one problem: you have to wait–oh–a year or two to get that chart or graph you need, to refresh that user interface, to expand functionality, and you will almost never be able to leverage the latest capabilities afforded by the doubling of computing capability every 12 to 24 months.  The industry is filled with outmoded, poorly supported, and obsolete “tools” already.  Guess it’s time for a new one.

The motivation behind such assertions, of course, is to slow things down.  Not possessing the underlying technology to provide more, better, and more powerful functionality to the customer more quickly and flexibly based on open systems principles–that is, dealing with data in an agnostic manner–they use their position to try to keep disruptive entrants from leaving them far behind.  This is done, especially in the bureaucratic complexities of A&D and DoD project management, through professional organizations such as the NDIA, which software suppliers use as thinly disguised lobbying opportunities, or through appeals to contracting rules that they hope will undermine the introduction of new technologies.

All of these efforts, of course, are blowing into the wind.  The economics of the new technologies is too compelling for anyone to last long in their job by partying like it’s still 1997 under the first wave of software solutions targeted at data silos and stove-piped specialization.

The new paradigm is built on Agile and those technologies that facilitate that approach.  In case my regular readers think that I have become one of the Cultists, bowing before the Manifesto That May Not Be Named, let me assure you that is not the case.  The best articulation of Agile that I have read recently comes from Neil Killick, with whom I have expressed some disagreement over the #NoEstimates debate and the more cultish aspects of Agile in past posts, but who published an excellent post back in July entitled “12 questions to find out: Are you doing Agile Software Development?”

Here are Neil’s questions:

  1. Do you want to do Agile Software Development? Yes – go to 2. No – GOODBYE.
  2. Is your team regularly reflecting on how to improve? Yes – go to 3. No – regularly meet with your team to reflect on how to improve, go to 2.
  3. Can you deliver shippable software frequently, at least every 2 weeks? Yes – go to 4. No – remove impediments to delivering a shippable increment every 2 weeks, go to 3.
  4. Do you work daily with your customer? Yes – go to 5. No – start working daily with your customer, go to 4.
  5. Do you consistently satisfy your customer? Yes – go to 6. No – find out why your customer isn’t happy, fix it, go to 5.
  6. Do you feel motivated? Yes – go to 7. No – work for someone who trusts and supports you, go to 2.
  7. Do you talk with your team and stakeholders every day? Yes – go to 8. No – start talking with your team and stakeholders every day, go to 7.
  8. Do you primarily measure progress with working software? Yes – go to 9. No – start measuring progress with working software, go to 8.
  9. Can you maintain pace of development indefinitely? Yes – go to 10. No – take on fewer things in next iteration, go to 9.
  10. Are you paying continuous attention to technical excellence and good design? Yes – go to 11. No – start paying continuous attention to technical excellence and good design, go to 10.
  11. Are you keeping things simple and maximising the amount of work not done? Yes – go to 12. No – start keeping things simple and writing as little code as possible to satisfy the customer, go to 11.
  12. Is your team self-organising? Yes – YOU’RE DOING AGILE SOFTWARE DEVELOPMENT!! No – don’t assign tasks to people and let the team figure out together how best to satisfy the customer, go to 12.
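Taken together, Neil's questions read like a flowchart.  As a rough sketch only (the question wording is condensed from the list above, and the `answers` dict is a hypothetical input that in practice comes from honest team reflection, not a script), the flow might look like this:

```python
# A sketch of Neil Killick's 12-question flow as a checklist.
# Question 1 (wanting to do Agile) is the entry gate; the remaining
# eleven questions become checks, each paired with its remedy.

CHECKS = [
    ("reflecting regularly",            "meet regularly with your team to reflect on how to improve"),
    ("shipping every two weeks",        "remove impediments to delivering a shippable increment"),
    ("working daily with customer",     "start working daily with your customer"),
    ("customer satisfied",              "find out why your customer isn't happy and fix it"),
    ("team motivated",                  "work for someone who trusts and supports you"),
    ("talking daily with stakeholders", "start talking with your team and stakeholders every day"),
    ("measuring with working software", "start measuring progress with working software"),
    ("sustainable pace",                "take on fewer things in the next iteration"),
    ("technical excellence",            "pay continuous attention to excellence and good design"),
    ("keeping things simple",           "write as little code as possible to satisfy the customer"),
    ("self-organising team",            "let the team figure out together how to satisfy the customer"),
]

def agile_assessment(answers):
    """Return the remedies still outstanding; an empty list means
    you're doing Agile software development by Neil's definition."""
    return [remedy for check, remedy in CHECKS if not answers.get(check, False)]
```

Unlike the real flowchart, which loops on each question until it passes, this sketch simply reports every outstanding remedy at once.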

Note that even in software development based on Agile you are still “provid(ing) value by independently developing IP based on customer requirements.”  Only you are doing it faster and more effectively.

Now imagine a software technology that is agnostic to the source of data; that does not require a staff of data scientists, development personnel, and SMEs for its care and feeding; that allows multiple solutions to be released from the same technology; that allows for integration and cross-data convergence to gain new insights based on Knowledge Discovery in Databases (KDD) principles; and that provides shippable, incremental solutions every two weeks, or as often as can be absorbed by the organization, but responsively enough to meet multiple needs of the organization at any one time.

This is what is known as disruptive value.  There is no stopping this train.  It is the new paradigm and it’s time to take advantage of the powerful improvements in productivity, organizational effectiveness, and predictive capabilities that it provides.  This is the power of technology combined with a new approach to “small” big data, or structured data, that is effectively normalized and rationalized to the point of breaking down proprietary barriers, hewing to the true meaning of making data–and therefore information–both open and accessible.

Furthermore, such solutions using the same data streams produced by the measurement of work can also be used to evaluate organizational and systems compliance (where necessary), and effectiveness.  Combined with an effective feedback mechanism, data and technology drive organizational improvement and change.  There is no need for another tool to layer with the multiplicity of others, with its attendant specialized training, maintenance, and dead-end proprietary idiosyncrasies.  On the contrary, such an approach is an impediment to data maximization and value.

Vices are still vices even in new clothing.  Time to come to the side of the virtues.

Over at AITS.org — Black Swans: Conquering IT Project Failure & Acquisition Management

It’s been out for a few days but I failed to mention the latest article at AITS.org.

In my last post on the Blogging Alliance I discussed information theory, the physics behind software development, the economics of new technology, and the intrinsic obsolescence that exists as a result. Dave Gordon in his regular blog described this work as laying “the groundwork for a generalized theory of managing software development and acquisition.” Dave has a habit of inspiring further thought, and his observation has helped me focus on where my inquiries are headed…

To read more please click here.

Super Doodle Dandy (Software) — Decorator Crabs and Wirth’s Law

[Photo: a decorator crab, courtesy of the Monterey Bay Aquarium]

The song (absent the “software” part) in the title is borrowed from the soundtrack of the movie The Incredible Mr. Limpet.  Made in the day before Pixar and other recent animation technologies, it remains a largely unappreciated classic, combining photography and animation in a time of more limited tools, with Don Knotts creating another unforgettable character beyond Barney Fife.  Somewhat related to what I am about to write, Mr. Limpet taught the creatures of the sea new ways of doing things, helping them overcome their mistaken assumptions about the world.

The photo that opens this post is courtesy of the Monterey Bay Aquarium and looks to be the crab Oregonia gracilis, commonly referred to as the Graceful Decorator Crab.  There are all kinds of Decorator Crabs, most of which belong to the superfamily Majoidea.  The one I most often came across and raised in aquaria was Libinia dubia, an east coast cousin.  You see, back in a previous lifetime I had aspirations to be a marine biologist.  My early schooling was based in the sciences and mathematics.  Only later did I gradually gravitate to history, political science, and the liberal arts–finally landing in acquisition and high tech project management, which tends to borrow something from all of these disciplines.  I believe that my former concentration of studies has kept me grounded in reality–in viewing life the way it is and the mysteries that are yet to be solved in the universe absent resort to metaphysics or irrationality–while the latter concentrations have connected me to the human perspective in experiencing and recording existence.

But there is more to my analogy than self-explanation.  You see, software development exhibits much of the same behavior of Decorator Crabs.

In my previous post I talk about Moore’s Law and the compounding (doubling) of greater processor power in computing every 12 to 24 months.  (It does not seem to be as much a physical law as an observation, and we can only guess how long this trend will continue).  We also see a corresponding reduction in cost vis-à-vis this greater capability.  Yet, despite these improvements, we find that software often lags behind and fails to leverage this capability.

The observation that has recorded this phenomenon is found in Wirth’s Law, which posits that software is getting slower at a faster rate than computer hardware is getting faster.  There are two variants of this law, one ironic and the other only slightly less so: May’s and Gates’ variants.  Basically these posit that software speed halves every 18 months, thereby negating Moore’s Law.  But why is this?
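The arithmetic behind that claim is worth a quick back-of-envelope check.  Assuming, purely for illustration, that hardware throughput doubles every 18 months while software efficiency halves over the same period, the net user-visible speedup stays flat:

```python
# Wirth's Law, back of the envelope: hardware gains (Moore's Law)
# cancelled out by software slowdown (the May/Gates variants).
# The 18-month period is illustrative, not a physical constant.

def effective_speed(months, doubling_period=18):
    periods = months / doubling_period
    hardware_gain = 2.0 ** periods   # Moore's Law: capability doubles
    software_loss = 0.5 ** periods   # May/Gates: software speed halves
    return hardware_gain * software_loss

# Six years on: 16x the hardware, 1/16th the software efficiency,
# and the user sees no net speedup at all.
print(effective_speed(72))  # -> 1.0
```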

For first causes one need only look to the Decorator Crab.  You see, the crab, all by itself, is a typical crab: an arthropod invertebrate with a hard carapace, spikes on its exoskeleton, a segmented body with jointed limbs, and five pairs of legs, the first pair usually bearing chelae (the familiar pincers and claws).  There are all kinds of crabs in salt, fresh, and brackish water.  They tend to be well adapted to their environment.  But they are also tasty and high in protein value, and thus have a number of predators.  So the Decorator Crab has determined that what evolution has provided is not enough–it borrows features and items from its environment to enhance its capabilities as a defense mechanism.  There is a price to being a Decorator Crab.  Encrustations also become encumbrances.  While these crabs have learned to enhance their protections, for example by attaching toxic sponges and anemones, these enhancements may also have made them complacent: unlike most crabs, Decorator Crabs don’t tend to scurry from crevice to crevice, but walk awkwardly and more slowly than many of their cousins in the typical sideways crab gait.  This behavior makes them interesting, popular, and comical subjects in both public and private aquaria.

In a way, we see an analogy in the case of software.  In earlier generations of software design, applications were generally built to solve a particular challenge that mimicked the line and staff structure of the organizations involved–designed to fit its environmental niche.  But over time, of course, people decide that they want enhancements and additional features.  The user interface, when hardcoded, must be adjusted every time a new function or feature is added.

Rather than rewriting the core code from scratch–which will take time and resource-consuming reengineering and redesign of the overall application–modules, subroutines, scripts, etc. are added to software to adapt to the new environment.  Over time, software takes on the characteristics of the Decorator Crab.  The new functions are not organic to the core structure of the software, just as the attached anemone, sponges, and algae are not organic features of the crab.  While they may provide the features desired, they are not optimized, tending to use brute force computing power as the means of accounting for lack of elegance.  Thus, the more powerful each generation of hardware computing power tends to provide, the less effective each enhancement release of software tends to be.
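The accretion pattern described above can be shown in miniature.  In this sketch (all names and features are invented for illustration), each bolted-on feature wraps the core report in another full pass over the data rather than being integrated into it:

```python
# A toy "decorator crab" application: the core routine stays untouched
# while each new feature is bolted on as a wrapper, adding a full extra
# pass over the data per feature instead of an integrated redesign.

def core_report(records):
    # The original, purpose-built routine: fits its niche exactly.
    return [r["value"] for r in records]

def bolt_on(report_fn, transform):
    # Attach a feature without touching the core, anemone-style.
    def wrapped(records):
        return [transform(v) for v in report_fn(records)]
    return wrapped

report = core_report
report = bolt_on(report, lambda v: v * 1.1)  # hypothetical "uplift" feature
report = bolt_on(report, round)              # hypothetical display rounding
# Each release adds another layer; none of it is organic to the core,
# and the cost is paid in redundant passes, i.e. brute-force compute.
```

Each wrapper works, but the application now does three traversals where a redesigned core would do one, which is exactly how brute-force computing power substitutes for elegance.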

Furthermore, just as when a crab tends to look less like a crab, it requires more effort and intelligence to identify the crab, so too with software.  The greater the encrustation of features that tend to attach themselves to an application, the greater the effort that is required to use those new features.  Learning the idiosyncrasies of the software is an unnecessary barrier to the core purposes of software–to increase efficiency, improve productivity, and improve speed.  It serves only one purpose: to increase the “stickiness” of the application within the organization so that it is harder to displace by competitors.

It is apparent that this condition is not sustainable–or acceptable–especially where the business environment is changing.  New software generations, especially Fourth Generation software, provide opportunities to overcome this condition.

Thus, as project management and acquisition professionals, the primary considerations that must be taken into account are optimization of computing power and the related consideration of sustainability.  This approach militates against complacency because it influences the environment of software toward optimization.  Such an approach will also allow organizations to more fully realize the benefits of Moore’s Law.

Over at AITS.org — Maxwell’s Demon: Planning for Obsolescence in Acquisitions

I’ve posted another article at AITS.org’s Blogging Alliance, this one dealing with the issue of software obsolescence and the acquisition strategy that applies given what we know about the nature of software.  I also throw in a little background on information theory and the physical limitations of software as we now know it (virtually none).  As a result, we require a great deal of agility inserted into our acquisition systems for new technologies.  I’ll have a follow up article over there that provides specifics on acquisition planning and strategies.  Random thoughts on various related topics will also appear here.  Blogging has been sporadic of late due to op-tempo but I’ll try to keep things interesting and more frequent.

Ch-ch Changes — Software Implementations and Organizational Process Improvement

Dave Gordon at The Practicing IT Project Manager lists a number of factors that define IT project success.  Among these is “Organizational change management efforts were sufficient to meet adoption goals.”  This is an issue that I am grappling with now on many fronts.

The initial question that comes to mind is which comes first–the need for organizational improvement or the transformation that results from the introduction of new technology?  “Why does this matter?” one may ask.  The answer is that it defines how things are perceived by those being affected (or victimized) by the new technology.  This will then translate into various behaviors.  (Note that I did not say that “Perception is reality.”  For the reason why, please consult the Devil’s Phraseology.)

This is important because the groundwork laid (or not laid) for the change that is to come will then translate into sub-factors (accepting Dave’s taxonomy of factors for success) that will have a large impact on the project, and whether it is defined as a success.  In getting something done the most overriding priority is not just “Gettin’ ‘Er Done.”  The manner in which our projects, particularly in IT, are executed and the technology introduced and implemented will determine the success of a number of major factors that contribute to overall project success.

Much has been written lately about “disruptive” change, and that can be a useful analogy when applied to new technologies that transform a market by providing something that is cheaper, better, and faster (with more functionality) than the market norm.  I am driving that type of change in my own target markets.  But that is in a competitive environment.  Judgement–and good judgement–requires that we not inflict this cultural approach on the customer.

The key, I think, is bringing back a concept and approach that seems to have been lost in the shuffle: systems analysis and engineering that works hand-in-hand with the deployment of the technological improvement.  There was a reason for asking for the technology in the first place, whether it be improved communications, improved productivity, or qualitative factors.  Going in willy-nilly with a new technology that provides unexpected benefits–even if those benefits are both useful and will improve the work process–can often be greeted with fear, sabotage, and obstruction.

When those of us who work with digital systems encounter someone challenged by the introduction of new technology, or fearful that “robots are taking our jobs,” our reaction is often an eye-roll, treating these individuals as modern Luddites.  But that is a dangerous stereotype.  Our industry is rife with stories of individuals who fall into this category.  Many of them are our most experienced middle managers and specialists who predate the technology being introduced.  How long does it take to develop the expertise to fill these positions?  What is the cost to the organization if their corporate knowledge and expertise is lost?  Given that they have probably experienced multiple reorganizations and technology improvements, their skepticism is probably warranted.

I am not speaking of the exception–the individual who would be opposed to any change.  Dave gives a head nod to the CHAOS report, but we also know that we come upon these reactions often enough for them to be documented from a variety of sources.  So how do we handle them?

There are two approaches.  One is to rely upon the resources and management of the acquiring organization to properly prepare the organization for the change to come, and to handle the job of determining the expected end state of the processes, and the personnel implications that are anticipated.  Another is for the technology provider to offer this service.

From my own direct experience, what I see is a lack of systems analysis expertise designed to work hand-in-hand with the technology being introduced.  For example, systems analysis is a skill that is all but gone in government agencies and large companies, which rely more and more on outsourcing for IT support.  Oftentimes the IT services consultant has its own agenda, which conflicts with the goals of both the manager acquiring the technology and the technology provider.  Few outsourced IT services contracts anticipate that the consultant must act as an enthusiastic partner in these efforts–as opposed to a tepid (at best) or merely willing one.  Some agencies lately have tasked the outsourced IT consultant to act as an honest broker in choosing the technology, heedless of the strategic partnering and informal relationships that will result in a conflict of interest.

Thus, technology providers must be mindful of their target markets and design solutions to meet the typical process improvement requirements of the industry.  In order to do this the individuals involved must have a unique set of skills that combines a knowledge of the goals of the market actors, their processes, and how the technology will improve those processes.  Given this expertise, technology providers must then prepare the organizational environment to set expectations and to advance the vision of the end state–and to ensure that the customer accepts that end state.  It is then up to the customer’s management, once the terms of expectations and end-state have been agreed, to effectively communicate them to those personnel affected, and to do so in a way to eliminate fear and to generate enthusiasm that will ensure that the change is embraced and not resisted.

Mo’Better Risk — Tournaments and Games of Failure Part II

My last post discussed economic tournaments and games of failure in how they describe the success and failure of companies, with a comic example for IT start-up companies.  Glen Alleman at his Herding Cats blog has a more serious response in handily rebutting those who believe that #NoEstimates, Lean, Agile, and other cult-like fads can overcome the bottom line, that is, apply a method to reduce inherent risk and drive success.  As Glen writes:

“It’s about the money. It’s always about the money. Many want it to be about them or their colleagues, or the work environment, or the learning opportunities, or the self actualization.” — Glen Alleman, Herding Cats

Perfectly good products and companies fail all the time.  Oftentimes the best products fail to win the market, or do so only fleetingly.  Just think of the rolls of the dead (or walking dead) over the years: Novell, WordPerfect, VisiCalc, Harvard Graphics; the list can go on and on.  Thus, one point on which I would deviate from Glen is that it is not always EBITDA.  If that were true then neither Facebook nor Amazon would be around today.  We see tremendous payouts to companies with promising technologies acquired for outrageous sums of money, though they have yet to make a profit.  But for every one of these there are many others that see the light of day for a moment and then flicker out of existence.

So what is going on, and how does this inform our knowledge of project management?  The measure of our success is time and money in most cases–though obviously not in all.  I’ve given two cases of success that appeared to be failure in previous posts to this blog: the M1A1 Tank and the ACA.  The reason why these “failures” were misdiagnosed was that the agreed measure(s) of success were incorrect.  Knowing this difference–where and how it applies–is important.

So how do tournaments and games of failure play a role in project management?  I submit that the lesson learned from these observations is that we see certain types of behaviors that are encouraged that tend to “bake” certain risks into our projects.  In high tech we know that there will be a thousand failures for every success, but it is important to keep the players playing–at least it is in the interest of the acquiring organization to do so, and is in the public interest in many cases as well.  We also know that most IT projects by most measures–both contracted out and organic–tend to realize a high rate of failure.  But if you win an important contract or secure an important project, the rewards can be significant.

The behavior that is reinforced in this scenario on the part of the competing organization is to underestimate the cost and time involved in the effort: the so-called “bid to win.”  On the acquiring organization’s part, contracting officers lately have been all too happy to award contracts they know to be significantly below the independent estimate–and normally outside the competitive range.  Thus “buying in” introduces a significant risk that is hard to overcome.

Other behaviors that we see given the project ecosystem are the bias toward optimism and requirements instability.

In the first case, bias toward optimism, we often hear project and program managers dismiss bad news because it is “looking in the rear view mirror.”  We are “exploring,” we are told, and so the end state will not be dictated by history.  We often hear a version of this meme in cases where those in power wish to avoid accountability.  “Mistakes were made” and “we are focused on the future” are attempts to change the subject and avoid the reckoning that will come.  In most cases, however, particularly in project management, the motivations are not dishonest but, instead, sociological and psychological.  People who tend to build things–engineers in general, software coders, designers, etc.–tend to be an optimistic lot.  In very few cases will you find one of them who will refuse to take on a challenge.  How many cases have we presented a challenge to someone with these traits and heard the refrain:  “I can do that.”?  This form of self-delusion can be both an asset and a risk.  Who but an optimist would take on any technically challenging project?  But this is also the trait that will keep people working to the bitter end in a failure that places the entire enterprise at risk.

I have already spent some bits in previous posts regarding the instability of requirements, but this is part and parcel of the traits that we see within this framework.  Our end users determine that given how things are going we really need additional functionality, features, or improvements prior to the product roll out.  Our technical personnel will determine that for “just a bit more effort” they can achieve a higher level of performance or add capabilities at marginal or tradeoff cost.  In many cases, given the realization that the acquisition was a buy-in, project and program managers allow great latitude in accepting as a change an item that was assumed to be in the original scope.

There is a point where one or more of these factors is “baked in” to the course that the project will take.  We can delude ourselves into believing that we can change the trajectory of the system through the application of methods–Agile, Lean, Six Sigma, PMBOK, etc.–but, in the end, if we exhaust our resources without a road map on how to do this we will fail.  Our systems must be powerful and discrete enough to note the trend that is “baked in” due to factors in the structure and architecture of the effort being undertaken.  This is the core risk that must be managed in any undertaking.  A good example that applies to a complex topic like Global Warming was recently illustrated by Neil deGrasse Tyson in the series Cosmos, in a segment in which he walks a dog along a beach.

In this example Dr. Tyson is climate and the dog is the weather.  But in our own analogy Dr. Tyson can be the trajectory of the system with the dog representing the “noise” of periodic indicators and activity around the effort.  We often spend a lot of time and effort (which I would argue is largely unproductive) on influencing these transient conditions in simpler systems rather than on the core inertia of the system itself.  That is where the risk lies. Thus, not all indicators are the same.  Some are measuring transient anomalies that have nothing to do with changing the core direction of the system, others are more valuable.  These latter indicators are the ones that we need to cultivate and develop, and they reside in an initial measurement of the inherent risk of the system largely based on its architecture that is antecedent to the start of the work.

This is not to say that we can do nothing about the trajectory.  A simpler system can be influenced more easily.  We cannot recover the effort already expended–which is why even historical indicators are important: they inform our future expectations and, if we pay attention to them, keep us grounded in reality.  Even in the case of Global Warming we can change, though gradually, what will be a disastrous result if we allow things to continue on their present course.  In a deterministic universe we can influence the outcomes based on the contingent probabilities presented to us over time.  Thus, we will know whether we have handled the core risk of the system by focusing on these better indicators as the effort progresses.  This will affect its trajectory.
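The trajectory-versus-noise distinction can be simulated in a few lines.  A toy sketch, with purely invented numbers: the system's course is a steady drift, while each indicator we read is that drift plus transient noise.

```python
import random

# A toy version of the climate-and-dog analogy: the system's trajectory
# is a steady drift (the walker), while each indicator we read is that
# drift plus transient noise (the dog).  All numbers are invented.

random.seed(42)
drift_per_step = 0.5    # the "baked in" trajectory of the system
steps = 200

position = 0.0
readings = []
for _ in range(steps):
    position += drift_per_step                      # core inertia
    readings.append(position + random.gauss(0, 5))  # noisy indicator

# Any single reading is dominated by noise, but the long-run drift,
# the indicator worth cultivating, is recoverable from the record.
estimated_drift = readings[-1] / steps
```

Chasing individual readings is working on the dog; estimating the drift is working on the climate, and only the latter tells us whether the core risk is being managed.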

Of course, a more direct way of modifying these risks is to make systemic adjustments.  Do we really need a tournament-based system as it exists and is the waste inherent in accepting so much failure really necessary?  What would that alternative look like?

Take Me Out to the Ballgame — Tournaments and Games of Failure

“Baseball teaches us, or has taught most of us, how to deal with failure. We learn at a very young age that failure is the norm in baseball and, precisely because we have failed, we hold in high regard those who fail less often – those who hit safely in one out of three chances and become star players. I also find it fascinating that baseball, alone in sport, considers errors to be part of the game, part of its rigorous truth.” — Fay Vincent, former Commissioner of Baseball (1989-1992)

“Baseball is a game of inches.”  — Branch Rickey, Quote Magazine, July 31, 1966

I have been a baseball fan just about as long as I have been able to talk.  My father played the game and tried out for both the then-New York Giants and the Yankees–and was a pretty well known local hero in Weehawken back in the 1930s and 1940s.  I did not have my father’s athletic talents–he was a four-letter man in high school–but I was good at hitting a baseball from the time he put a bat in my hands, and so I played–and was sought after–into my college years.  Still, like many Americans who for one reason or another could not or did not pursue the game, I live vicariously through the players on the field.  We hold those who fail less in the game in high regard.  Some of them succeed for many years and are ensconced in the Hall of Fame.

Others experienced fleeting success.  Anyone who watches ESPN’s or the Yes Channel’s classic games, particularly those from the various World Series, can see this reality in play.  What if Bill Buckner in 1986 hadn’t missed that ball?  What if Bobby Richardson had not been in perfect position to catch what would have been a game- and series-winning liner by Willie McCovey in 1962?  Would Brooklyn have ever won a series if Amoros hadn’t caught Berra’s drive down the left field line in 1955?  The Texas Rangers might have their first World Series ring if not for a plethora of errors, both mental and physical, in the sixth game of the 2011 Series.  The list can go on, and it takes watching just a few of these games to realize that luck plays a big part in who is the victor.

There are other games of failure that we deal with in life, though oftentimes we don’t recognize them as such.  In economics these are called “tournaments,” and much like their early Medieval predecessors (as opposed to the stylized late Medieval and Renaissance games), the stakes are high.  In pondering the sorry state of my favorite team–the New York Yankees–as I watched seemingly minor errors and failures cascade into a humiliating loss, I came across a blog post by Brad DeLong, distinguished professor of economics at U.C. Berkeley, entitled “Over at Project Syndicate/Equitable Growth: What Do We Deserve Anyway?”  Dr. DeLong makes the very valid point, verified not only by anecdotal experience but by years of economic research, that most human efforts, particularly economic ones, fail, and that the key determinants do not seem, in most cases, to be lack of talent, hard work, dedication, or any of the other attributes that successful people like to credit for their success.

Instead, much of the economy, which in its present form is largely based on a tournament-like structure, allows only a small percentage of entrants to extract their marginal product from society in the form of extremely high levels of compensation.  The fact that these examples exist is much like a lottery, as the following quote from Dr. DeLong illustrates.

“If you win the lottery–and if the big prize in the lottery that is given to you is there in order to induce others to overestimate their chances and purchase lottery tickets and so enrich the lottery runner–do you “deserve” your winnings? It is not a win-win-win transaction: you are happy being paid, the lottery promoter is happy paying you, but the others who purchase lottery tickets are not happy–or, perhaps, would not be happy in their best selves if they understood what their chances really were and how your winning is finely-tuned to mislead them, for they do voluntarily buy the lottery tickets and you do have a choice.”  — Brad DeLong, Professor of Economics, U.C. Berkeley

So even though participants have a “choice,” it is one based on an intricately established system of self-delusion.  It was about this time that I came across the excellent HBO series “Silicon Valley.”  The tournament aspect of the software industry is apparent in the conferences and competitions for both customers and investors in which I have participated over the years.  In the end, luck and timing seem to play the biggest role in success (apart from having sufficient capital and reliable business partners).

I hope this parody ends my colleagues’ (and future techies’) claims to “revolutionize” and “make the world a better place” through software.

I Can’t Get No (Satisfaction) — When Software Tools Go Bad

Another article I came across a couple of weeks ago that my schedule prevented me from highlighting was by Michelle Symonds at PM Hut entitled “5 Tell-Tale Signs That You Need a Better Project Management Tool.”  According to Ms. Symonds, among these signs are:

a.  Additional tools are needed to achieve the intended functionality apart from the core application;

b.  Technical support is poor or nonexistent;

c.  Personnel in the organization still rely on spreadsheets to extend the functionality of the application;

d.  Training on the tool takes more time than training for the job itself;

e.  The software tool adds work instead of augmenting or facilitating the achievement of work.

I have seen situations where all of these conditions are at work but the response, in too many cases, has been “well we put so much money into XYZ tool with workarounds and ‘bolt-ons’ that it will be too expensive/disruptive to change.”  As we have advanced past the first phases of digitization of data, it seems that we are experiencing a period where older systems do not quite match up with current needs, but that software manufacturers are very good at making their products “sticky,” even when their upgrades and enhancements are window dressing at best.

In addition, the project management community, particularly the part focused on large projects in excess of $20M, is facing the challenge of an increasingly older workforce.  Larger economic forces at play lately have exacerbated this condition.  Weak aggregate demand and, on the public side, austerity ideology combined with sequestration have created a situation where highly qualified people face a job market characterized by relatively high unemployment, flat wages and salaries, depleted private retirement funds, and constant attacks on social insurance related to retirement.  Thus, people are hanging around longer, which limits opportunities for newer workers to grow into the discipline.  Given these conditions, we find that it is very risky to one’s employment prospects to suddenly forge a new path.  People in the industry that I have known for many years–and who were always the first to engage with new technologies and capabilities–are now very hesitant to do so.  Some of this is well founded through experience and consists of healthy skepticism: we all have come across snake oil salesmen in our dealings at one time or another, and even the best products do not always make it, due to external forces or the fact that brilliant technical people oftentimes are just not very good at business.

But these conditions also tend to hold back the ability of the enterprise to implement efficiencies and optimization measures that otherwise would be augmented and supported by appropriate technology.  Thus, in addition to those listed by Ms. Symonds, I would include the following criteria to use in making the decision to move to a better technology:

a.  Sunk and prospective costs.  Understand and apply the concepts of sunk cost and prospective cost.  The first is the cost that has been expended in the past, while the latter focuses on the investment necessary for future growth, efficiencies, productivity, and optimization.  Having made investments to improve a product in the past is not an argument, trumping all other factors, for continuing to invest in that product in the future.  Obviously, if the cash flow is not there, an organization is going to be limited in the capital and other improvements it can make but, absent those considerations, sunk cost arguments are invalid.  It is important to invest in those future products that will facilitate the organization achieving its goals in the next five or ten years.
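The reasoning above can be reduced to a short sketch.  The dollar figures and the simple net-value rule here are hypothetical illustrations, not a prescription; the point is only that money already spent never enters the comparison:

```python
# Illustrative sketch (hypothetical numbers): a rational tool-replacement
# decision compares only prospective costs and benefits; money already
# spent on the incumbent tool is sunk and plays no role.

def net_prospective_value(future_benefit, future_cost):
    """Value of an option based solely on what lies ahead."""
    return future_benefit - future_cost

# Incumbent tool: $2M already spent (sunk), modest future benefit, high upkeep.
incumbent = net_prospective_value(future_benefit=1_000_000, future_cost=600_000)
# Replacement tool: higher future benefit; one-time migration cost included.
replacement = net_prospective_value(future_benefit=2_500_000, future_cost=1_200_000)

best = "replacement" if replacement > incumbent else "incumbent"
print(best)  # the $2M sunk cost never appears in the calculation
```

Note that the incumbent’s sunk $2M appears nowhere in the arithmetic; only the forward-looking terms decide the outcome.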

b.  Sustainability.  The effective life of the product must be understood, particularly as it applies to an organization’s needs.  Some of this overlaps the points made by Ms. Symonds in her article but is meant to apply in a more strategic way.  Every product, even software, has a limited productive life, but my concept here goes to what Glen Alleman pointed out in his blog as “bounded applicability.”  Will the product require more effort in any form where the additional effort provides a diminishing return?  For example, I have seen cases where software manufacturers, in order to defend market share, make trivial enhancements, such as adding a chart or graph, in order to placate customer demands.  The reason for this should be, but is not always, obvious: oftentimes more substantive changes cannot be made because the product was built on an earlier generation operating environment or structure.  Thus, in order to replicate the additional functionality found in newer products, the application requires a complete rewrite.  All of us operating in this industry have seen this: a product that has been a mainstay for many years begins to lose market share, and the decision, when it is finally made, is to totally reengineer the solution, though not as an upgrade to the original product, arguing instead that it is a “new” product.  This is true in terms of the effort necessary to keep the solution viable, but it also completely undermines justifications based on sunk costs.

c.  Flexibility.  As stated previously in this blog, the first generation of digitization mimicked those functions that were previously performed manually.  The applications were also segmented and specialized based on traditional line and staff organizations and specialties.  Thus, for project management, we have scheduling applications for the scheduling discipline (such as it is), earned value engines for the EV discipline, risk and technical performance applications for risk specialists and systems engineers, analytical software for project and program analysts, and financial management applications that subsumed project and program financial management professionals.  This led to the deployment of so-called best-of-breed configurations, where a smorgasbord of applications or modules was acquired to meet the requirements of the organization.  Most often these applications had, and have, no direct compatibility, requiring entire staffs to reconcile data after the fact, once that data was imported into a proprietary format in which it could be handled.  Even within so-called ERP environments under one company, direct compatibility at the appropriate level of the data being handled escaped the software manufacturers, requiring “bolt-ons” and other workarounds and third-party solutions.  This condition undermines sustainability, adds a level of complexity that is hard to overcome, and adds a layer of cost to the life-cycle of the solutions being deployed.

The second wave to address some of these limitations focused on data flexibility using cubes, hard-coding of relational data and mapping, and data mining solutions: so-called Project Portfolio Management (PPM) and Business Intelligence (BI).  The problem is that, in the first instance, PPM simply added another layer to address management concerns, while early BI systems froze single points of failure into hard-coded deployed solutions.

A flexible system is one that leverages the new advances in software operating environments to solve more than one problem.  This, of course, undermines the financial returns in software, where the pattern has been to build one solution to address one problem based on a specialty.  Such a system provides internal flexibility, that is, allows for the application of objects and conditional formatting without hardcoding, pushing what previously had to be accomplished by coders to the customer’s administrator or user level; and external flexibility, where the same application can address, say, EVM, schedule, risk, financial management, KPIs, technical performance, stakeholder reporting, all in the same or in multiple deployed environments without the need for hardcoding.  In this case the operating environment and any augmented code provides a flexible environment to the customer that allows one solution to displace multiple “best-of-breed” applications.

This flexibility should apply not only vertically but also horizontally, where data can be hierarchically organized to allow not only for drill-down, but also for roll-up.  Data in this environment is exposed discretely, providing to any particular user that data, aggregated as appropriate, based on their role, responsibility, or need to know.
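The drill-down and roll-up behavior described above can be illustrated with a minimal sketch.  The WBS-style codes and the values below are hypothetical; the point is that hierarchically coded data can be aggregated to any level without hardcoding a report for each one:

```python
# A minimal sketch of roll-up over hierarchically coded data. The
# WBS-style codes ("1.1.2") and values are hypothetical illustrations.
from collections import defaultdict

records = [
    ("1.1.1", 120.0), ("1.1.2", 80.0),
    ("1.2.1", 200.0), ("1.2.2", 50.0),
]

def roll_up(records, level):
    """Aggregate leaf values up to the given depth of the hierarchy."""
    totals = defaultdict(float)
    for code, value in records:
        parent = ".".join(code.split(".")[:level])
        totals[parent] += value
    return dict(totals)

print(roll_up(records, 2))  # {'1.1': 200.0, '1.2': 250.0}
print(roll_up(records, 1))  # {'1': 450.0}
```

Drill-down is simply the inverse: filtering the leaf records whose codes begin with the selected parent, with role-based exposure controlling which codes a given user may see.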

d.  Interoperability and open compatibility.  A condition of the “best-of-breed” deployment environment is that it allows sub-optimization to trump organizational goals.  The most recent example that I have seen of this is one where the Integrated Master Schedule (IMS) and Performance Management Baseline (PMB) were obviously authored by different teams in different locations which, most likely, were at war with one another when they published these essential, interdependent project management artifacts.

But in terms of sustainability, the absence of interoperability and open compatibility has created untenable situations.  In the example of PMB and IMS information above, in many cases a team of personnel must be engaged every month to reconcile the obvious disconnectedness of schedule activities from control accounts in order to ensure traceability in project management and performance.  Surely there should be no economic rewards for such behavior; I believe no business would continue to operate in that manner absent those rewards.

Thus, interoperability in this case is to be able to deal with data in its native format without proprietary barriers that prevent its full use and exploitation to the needs and demands of the customer organization.  Software that places its customers in a corner and ties their hands in using their own business information has, indeed, worn out its welcome.

The reaction of customer organizations to the software industry’s attempts to bind them to proprietary solutions has been most marked in the public sector, and most prominently in the U.S. Department of Defense.  In the late 1990s the first wave was to ensure that performance management data centered around earned value was submitted in a non-proprietary format known as the ANSI X12 839 transaction set.  Since that time DoD has specified the use of the UN/CEFACT XML D09B standard for cost and schedule information, and it appears that other, previously stove-piped data will be included in that standard in the future.  This solution requires data transfer, but it is one that ensures that the underlying data can be normalized regardless of the underlying source application.  It is especially useful for stakeholder reporting situations or data sharing in prime and sub-contractor relationships.
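The normalization step such a neutral submission format enables can be shown schematically.  The element names below are illustrative only, not the actual ANSI X12 839 or UN/CEFACT schema, and the figures are hypothetical:

```python
# A sketch of normalizing data submitted in a neutral XML format into a
# common internal structure, regardless of the source application. The
# element names are illustrative, not the actual UN/CEFACT schema.
import xml.etree.ElementTree as ET

doc = """<report>
  <period end="2014-06-30">
    <account id="CA-01"><bcws>100</bcws><bcwp>90</bcwp></account>
    <account id="CA-02"><bcws>250</bcws><bcwp>260</bcwp></account>
  </period>
</report>"""

root = ET.fromstring(doc)
normalized = [
    {
        "account": acct.get("id"),
        "bcws": float(acct.findtext("bcws")),
        "bcwp": float(acct.findtext("bcwp")),
    }
    for acct in root.iter("account")
]
print(normalized[0])
```

Because every submitter writes to the same schema, the receiving organization parses once and compares across contractors, rather than maintaining one proprietary importer per source tool.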

It is also useful for pushing for improvement in the disciplines themselves, driving professionalism.  For example, in today’s project management environment, while the underlying architecture of earned value management and risk data is fairly standard, reflecting a cohesiveness of practice among its practitioners, schedule data tends to be disorganized, with much variability in how common elements are kept and reported.  This mirrors the state of the scheduling discipline, where an almost “anything goes” mentality seems to be in play, reflecting not so much the realities of scheduling practice–which are pretty well established and uniform–as the lack of knowledge and professionalism on the part of schedulers, who are tied to the limitations and vagaries of their scheduling application of choice.

But, more directly, interoperability also includes the ability to access data (as opposed to application interfacing, data mining, hard-coded Cubes, and data transfer) regardless of the underlying database, application, and structured data source.  Early attempts to achieve interoperability and open compatibility utilized ODBC but newer operating environments now leverage improved OLE DB and other enhanced methods.  This ability, properly designed, also allows for the deployment of transactional environments, in which two-way communication is possible.
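The idea behind such call-level interfaces (one API over many back ends) can be sketched with Python’s DB-API, which plays an analogous role to ODBC and OLE DB.  Here the standard library’s sqlite3 stands in for any compliant driver, and the table and figures are hypothetical:

```python
# A sketch of database-agnostic, two-way (transactional) access: the same
# code can run against a different back end by swapping the driver. The
# control-account table and its numbers are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE control_account (id TEXT, bcws REAL, bcwp REAL)")
# Write path: insert records through the common interface.
conn.executemany(
    "INSERT INTO control_account VALUES (?, ?, ?)",
    [("CA-01", 100.0, 90.0), ("CA-02", 250.0, 260.0)],
)
# Read path: query the same data back, computing a schedule variance.
rows = conn.execute(
    "SELECT id, bcwp - bcws FROM control_account ORDER BY id"
).fetchall()
print(rows)  # [('CA-01', -10.0), ('CA-02', 10.0)]
```

The design point is that the consuming code depends on the interface, not the vendor’s storage format, which is precisely what proprietary file structures prevent.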

A new reality.  Thus, given these new capabilities, I think that we are entering a new phase in software design and deployment, in which the role of the coder in controlling the UI is reduced.  In addition, given that the large software companies have continued to support a system that ties customers to proprietary solutions, I do not believe that the future of software is in open source, as so many prognosticators stated just a few short years ago.  Instead, I propose that the applications that will be rewarded are those that behave like open source: those that innovate and provide maximum value, sustainability, flexibility, and interoperability to the customer.

Note:  This post was edited for clarity and grammatical errors from the original.


The Times They Are A-Changin’–Should PMI Be a Project Management Authority?

Back from a pretty intense three weeks taking care of customers (yes–I have those) and attending professional meetings and conferences.  Some interesting developments regarding the latter that I will be writing about here, but while I was in transit I did have the opportunity to keep up with some interesting discussions within the project management community.

Central among those was an article by Anonymous on PM Hut that appeared a few weeks ago arguing that PMI Should No Longer Be an Authority on Project Management.  I don’t know why the author of the post decided that they had to remain anonymous.  I learned some time ago that one should not only state one’s opinion in as forceful terms as possible (backed up with facts), but also own that opinion and be open to the possibility that it could be wrong or require modification.  As stated previously in my posts, project management in any form is not received wisdom.

The author of the post makes several assertions summarized below:

a. That PMI, though ostensibly a not-for-profit organization, behaves as a for-profit organization, and aggressively so.

b.  The Project Management Body of Knowledge (PMBOK®) fails in its goal of being the definitive source for project management because it lacks continuity between versions, its prescriptions lack realism, and, particularly in regard to software project management, that this section has morphed into a hybrid of Waterfall and Agile methodology.

c.  The PMI certifications lack credibility and seem to be geared to what will sell, as opposed to what can be established as a bonafide discipline.

I would have preferred that the author had provided more concrete examples of these assertions, given their severity.  For example, going to the on-line financial statements of the organization, PMI does have a significant staff of paid personnel and directors, with total assets as of 2012 of over $300M.  Of this, about $267M is in investments.  Its total revenue that year was $173M.  It spent only $115M from its cash flow on its programs and another $4M on governance and executive management compensation.  Thus, it would appear that the organization has significantly deviated from its non-profit origins at the Georgia Institute of Technology.  Project management is indeed big business, with vesting and compensation of over $1M going to the President & CEO of the organization in 2012 alone.  Thus there does seem to be more than a little justification for the first of the author’s criticisms.

I also share the author’s other concerns, but a complete analysis is not available regarding either the true value of the PMBOK® or the value of a PMP certification.  I have met many colleagues who felt the need to obtain the latter, despite their significant practical achievements and academic credentials.  I have also met quite a few people with “PMP” after their names whose expertise is questionable, at best.  The certifications given by PMI and other PM organizations today remind me of a very similar condition several years ago, when the gold standard of credentials in certain parts of the IT profession were the Certified Novell Engineer (CNE) and Microsoft Certified Solutions Expert (MCSE) certifications.  They still exist in some form.  What was apparent as I took the courses and the examinations was that the majority of my fellow students had never set up a network.  They were, to use the pejorative among the more experienced members among us, “paper CNEs and MCSEs.”  In interviewing personnel with “PMP” after their name I find a wide variation in expertise; thus the quality of experience, with supporting education, tends to have more influence with me than some credential from one of the PM organizations.

Related to this larger issue of what constitutes a proper credential in our discipline, I came across an announcement by Dave Gordon at his The Practicing IT Project Manager blog of a Project Management Job Requirements study.  Dave references this study by Noel Radley of SoftwareAdvise.com that states that the PMP is preferred or specified by 79% of the 300 jobs used as the representative baseline for the industries studied.  Interestingly, the study showed that advanced education is rarely required or preferred.

I suspect that this correlates in a negative way with many of the results that we have seen in the project management community.  Basic economics dictates that people with advanced degrees (M.A. and M.B.A. grads) come at a higher price than those who hold only baccalaureate degrees, since their incomes rise much faster than those of four-year college grads.  It seems that businesses do not value that additional investment except by exception.

Additionally, I have seen the results of two studies presented in government forums over the past six months (but alas, no links yet) in which the biggest risk to the project was identified to be the project manager.  Combined with the consistent failure, reported by widely disparate sources, of the overwhelming majority of projects to perform within budget and be delivered on time, this raises the natural question of whether those we choose to be project managers have the essential background to perform the job.

There seems to be a widely held myth that formal education is somehow unnecessary to develop a project manager–relegating what at least masquerades as a “profession” to the level of a technician or mechanic.  It is not that we do not need technicians or mechanics; it is that higher level skills are needed to be a successful project manager.

This myth seems to be spreading, and to have originated from the society as a whole, where the emphasis is on basic skills, constant testing, the elimination of higher level thinking, and a narrowing of the curriculum.  Furthermore, college education, which was widely available to post-World War II generations well into the 1980s, is quickly becoming unaffordable for a larger segment of the population.  Thus, what we are seeing is a significant skills gap in the project management discipline, added to one that has already had an adverse impact on the ability of both government and industry to succeed.  For example, Calleam Consulting Ltd, in a paper entitled “The Story Behind the High Failure Rates in the IT Sector,” found that “17 percent of large IT projects go so badly that they can threaten the very existence of the company.”

From my experiences over the last 30+ years, when looking for a good CTO or CIO I will look to practical and technical experience and expertise with the ability to work with a team.  For an outstanding coder I look for a commitment to achieve results and elegance in the final product.  But for a good PM give me someone with a good liberal arts education with some graduate level business or systems work combined with leadership.  Leadership includes all of the positive traits one demands of this ability: honesty, integrity, ethical behavior, effective personnel management, commitment, and vision.

The wave of the future in developing our expertise in project management will be the ability to look at all of the performance characteristics of the project and its place in the organization.  This is what I see as the real meaning of “Integrated Project Management.”  I have attended several events since the beginning of the year focused on the project management discipline in which assertions were made that “EVM is the basis for integrated project management” or “risk is the basis for integrated project management” or “schedule is the basis for integrated project management.”  The speakers did not seem to acknowledge that the specialty that they were addressing is but one aspect of measuring project performance, and even less of a factor in measuring program performance.

I believe that this is a symptom of excess specialization and the lack of a truly professional standard in project management.  I believe that if we continue to hire technicians with expertise in one area, possessing a general certification that simply requires one to attend conferences, sit in courses that lack educational accreditation, and claim credit for “working within” a project, we will find that making the transition to the next evolutionary step at the PM level will be increasingly difficult.  Finally, as for the anonymous author critical of PMI: it seems that project management is a good business for those who make up credentials, but not such a good deal for those with a financial stake in project management.

Note:  This post has been modified to correct minor grammatical and spelling errors.

Full disclosure:  The author has been a member of PMI for almost 20 years, and is a current member and former board member of the College of Performance Management (CPM).

Standing in the Shadow of (Deming) — How does Agile Stack Up? — Part One

I’ve read a few posts across the web over time in which Agile Cult proponents have tried to place the Agile Manifesto on a continuum from Deming.  Given the #NoEstimates drive, you would expect someone to cherry-pick a portion of item 3 of his Fourteen Points of Management, that is, “Cease dependence on inspection to achieve quality,” omitting the remainder of the point: “Eliminate the need for inspection on a mass basis by building quality into the product in the first place.”  There are even a few who have attempted to appropriate Deming’s work by redefining the meaning of his systems approach and philosophy.  (A classic symptom of ideologues and cults, but more on that in a later post.)

W. Edwards Deming was by all accounts a brilliant statistician.  In 1927 he met the physicist and statistician Walter A. Shewhart of Bell Telephone Laboratories, who is generally accepted as the originator of statistical quality control.  Shewhart’s work focused on processes and the related technical tool of the control chart.  Among his most important observations were his rules for the presentation of data, which are:

1.  Data have no meaning apart from their context and,

2.  Data contain both signal and noise.  To be able to extract information, one must separate the signal from the noise within the data.
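Shewhart operationalized this separation of signal from noise in the control chart.  The following is a minimal sketch of an individuals (XmR) chart on hypothetical data; points outside the computed limits are treated as signal, everything within them as routine noise:

```python
# A minimal sketch of an individuals (XmR) control chart: points outside
# mean +/- 3 sigma are flagged as signal. The process data are hypothetical.
import statistics

data = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 10.1, 9.7, 10.0, 16.0, 10.1, 9.9]

mean = statistics.mean(data)
# Estimate sigma from the average moving range (d2 = 1.128 for n = 2),
# the standard approach for an individuals chart.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_hat = statistics.mean(moving_ranges) / 1.128
ucl, lcl = mean + 3 * sigma_hat, mean - 3 * sigma_hat

signals = [x for x in data if x > ucl or x < lcl]
print(signals)  # [16.0]
```

Only the one excursion is flagged; reacting to the remaining routine variation as though it were signal is exactly the reification error discussed below.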

These concepts are extremely important in avoiding the fallacy of reification, in which something abstract is regarded as a material or concrete thing.  This is a concept that I have come back to at various times in this blog, particularly as it relates to statistical performance measures such as EVM and risk measurement.  Shewhart’s work no doubt was influenced in this regard by the work of the astronomer, statistician, mathematician, and philosopher Charles Sanders Peirce.

Another important feature of Shewhart’s approach was the development of the Shewhart or Deming (Plan-Do-Study-Act) cycle.  This is:

  • Plan: identify what can be improved and what change is needed
  • Do: implement the design change
  • Study: measure and analyze the process or outcome
  • Act: standardize the change if the results are as hoped for; otherwise, adjust or abandon it and run through the cycle again
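The cycle above can be rendered schematically as an iterative loop.  The process being improved here (a defect-rate reduction) and all of its numbers are hypothetical:

```python
# A schematic rendering of the Plan-Do-Study-Act cycle as an iterative
# improvement loop. The improved process and its numbers are hypothetical.

def pdsa(plan, do, study, act, baseline, cycles=3):
    state = baseline
    for _ in range(cycles):
        change = plan(state)                  # Plan: identify a change to test
        result = do(state, change)            # Do: implement it on a small scale
        improved = study(result, state)       # Study: measure against baseline
        state = act(state, result, improved)  # Act: adopt, adapt, or abandon
    return state

# Hypothetical process: each adopted change halves the defect rate.
final = pdsa(
    plan=lambda s: s / 2,
    do=lambda s, change: change,
    study=lambda result, s: result < s,
    act=lambda s, result, improved: result if improved else s,
    baseline=8.0,
)
print(final)  # 1.0
```

The essential feature, which distinguishes this from ad hoc tinkering, is that Study closes a measured feedback loop before Act commits to the change.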

Deming’s insight and contribution came when he realized that Shewhart’s methods not only applied to equipment performance but also to both manufacturing and management practices.  He further refined his methods by applying and training U.S. war industry personnel in Statistical Process Control (SPC) during the Second World War.  After the war he served under General MacArthur during the U.S. occupation and rebuilding of Japan as a census consultant to the Japanese government.  He then made his mark in assisting Japanese industry in applying his statistical and management methods to their rebuilding efforts.

To illustrate how different Deming was from Agile and #NoEstimates, it is useful to understand that the purpose of his methods, rooted in empirical methods (the accepted definition and not the appropriated ideological Agile definition), were focused on improving quality and reducing costs.  The formula for this approach is summed up as follows:

Quality = Results of Work Efforts/Total Costs
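A toy computation shows how this ratio behaves; the units and figures are hypothetical:

```python
# A toy illustration of Deming's quality ratio (hypothetical units): the
# same work output at lower total cost yields higher quality by this measure.

def quality(results, total_costs):
    return results / total_costs

before = quality(results=100.0, total_costs=50.0)
after = quality(results=100.0, total_costs=40.0)
print(after > before)  # True
```

Holding results constant while driving out the costs of defects and rework is exactly the mechanism by which the ratio improves.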

We have seen the validation of this formula with each generation of technological development, particularly in software and hardware development.  But in order to gauge success and to build quality into the process, we must estimate, measure, inspect, and validate.  These are elements of the feedback loop that are essential in establishing a quality improvement process.

As Dave Gordon at the Practicing IT Project Manager blog stated in his post entitled Received Knowledge, Fanaticism, and Software Consultants, in the end, for all of the attempts at special pleading and at misdirecting the assessment of risk in the system through Agile methods, it still all comes down to coding–and coding quality can be measured.  “But this doesn’t drive out fear!” can be heard in reply from mediocre Agile coders, cherry-picking Deming.  No, not if you’re not creating quality code, in this particular example.  Human Resources Management (HRM), for good and bad, is a leadership and management responsibility, and nowhere in a full reading of Deming does he eliminate these decisions, given the denominator of cost in the equation of quality.

This is one aspect of IT coding that is slightly different from our other systems: software design and coding is as much an art as a skill.  This is what we mean by elegance when we see well-written code: it coheres as a simple, systemic solution to the problem at hand that maximizes performance by leveraging the internal logic of the language being used.  As such, it not only avoids defects in its current version, but is also written in such a way that its internal logic, simplicity, and cohesion allow for incremental improvement, and for the avoidance and elimination of defects in future builds.

My first real-life experience with this concept came during my first assignment as a project manager, when I was a young Navy Lieutenant in San Diego.  I had just learned how to code software and had performed at the top of my class in pursuing a degree in software engineering.  This apparently qualified me in the eyes of my superiors to take over a failing program (read: above cost and behind schedule) tasked with building a standard procurement system for the Navy (later the joint procurement system).  In reviewing the reasons for failure, it became apparent that the army of systems analysts, technical consultants, and software developers was pulling the effort in conflicting directions.  My first act was to narrow the field to one lead developer and one very good coder, letting the rest find other employment.  I then built from there a cohesive team (within a realistic span of control) focused on quality and project success, after we had defined what success would look like.  And, yes, we used estimates.  We recovered the program within six months, after which it was transferred to D.C. and placed under a more senior officer–the occasional price of success.

Neither Deming’s propositions nor Agile software development methodology are received knowledge or wisdom (see my last post on Carl Sagan), though the latter makes special pleading along those lines.  Both are open to questioning, validation, and testing.  This principle is implicit in Deming’s own System of Profound Knowledge regarding what he described as the Theory of Knowledge–an approach resting firmly within the principles of the scientific method.  Manifestos are beliefs, opinions, and assertions absent proof and, as such, many of the propositions of Agile are worlds away from Deming.

Note:  Minor edits made to correct grammatical errors in the original.