Another article I came across a couple of weeks ago, which my schedule prevented me from highlighting at the time, was by Michelle Symonds at PM Hut, entitled “5 Tell-Tale Signs That You Need a Better Project Management Tool.” According to Ms. Symonds, these signs include:
a. Additional tools are needed to achieve the intended functionality apart from the core application;
b. Technical support is poor or nonexistent;
c. Personnel in the organization still rely on spreadsheets to extend the functionality of the application;
d. Training on the tool takes more time than training for the job itself;
e. The software tool adds work instead of augmenting or facilitating the achievement of work.
I have seen situations where all of these conditions are at work, but the response, in too many cases, has been “well, we put so much money into XYZ tool with workarounds and ‘bolt-ons’ that it will be too expensive/disruptive to change.” As we have advanced past the first phases of the digitization of data, it seems that we are experiencing a period in which older systems no longer quite match up with current needs, but in which software manufacturers are very good at making their products “sticky,” even when their upgrades and enhancements are window dressing at best.
In addition, the project management community, particularly the part focused on large projects in excess of $20M, is facing the challenge of an increasingly older workforce. Larger economic forces at play lately have exacerbated this condition. Weak aggregate demand and, on the public side, austerity ideology combined with sequestration have created a situation where highly qualified people face a job market characterized by relatively high unemployment, flat wages and salaries, depleted private retirement funds, and constant attacks on social insurance related to retirement. Thus, people are hanging around longer, which limits opportunities for newer workers to grow into the discipline. Given these conditions, it is very risky to one’s employment prospects to suddenly forge a new path. People in the industry whom I have known for many years, and who were always the first to engage with new technologies and capabilities, are now very hesitant to do so. Some of this reluctance is well founded through experience and consists of healthy skepticism: we have all come across snake oil salesmen in our dealings at one time or another, and even the best products do not always make it, whether because of external forces or because brilliant technical people oftentimes are just not very good at business.
But these conditions also tend to hold back the ability of the enterprise to implement efficiencies and optimization measures that otherwise would be augmented and supported by appropriate technology. Thus, in addition to those listed by Ms. Symonds, I would include the following criteria to use in making the decision to move to a better technology:
a. Sunk and prospective costs. Understand and apply the concepts of sunk cost and prospective cost. The former is the cost that has already been expended, while the latter is the investment necessary for future growth, efficiencies, productivity, and optimization. Having made investments to improve a product in the past is not, by itself, an argument for continuing to invest in that product in the future, and it does not trump other factors. Obviously, if the cash flow is not there, an organization will be limited in the capital and other improvements it can make but, absent those considerations, sunk cost arguments are invalid. It is important to invest in those future products that will facilitate the organization achieving its goals over the next five or ten years.
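To make the distinction concrete, here is a minimal sketch in Python; the figures and option names are purely illustrative, not drawn from any real procurement. Only prospective costs and expected future benefits enter the comparison; what has already been spent is deliberately left out.

```python
# Minimal sketch: comparing tool options on prospective cost and expected
# benefit only. All figures are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    sunk_cost: float          # already spent; irrelevant to the decision
    prospective_cost: float   # future licenses, workarounds, reconciliation labor
    expected_benefit: float   # future efficiencies, productivity, optimization

def net_future_value(option: Option) -> float:
    """Evaluate only what lies ahead; sunk cost is deliberately excluded."""
    return option.expected_benefit - option.prospective_cost

options = [
    Option("Keep legacy tool plus bolt-ons", 2_000_000, 1_500_000, 1_200_000),
    Option("Move to a newer integrated platform", 0, 900_000, 1_600_000),
]

for o in options:
    print(f"{o.name}: net future value = {net_future_value(o):,.0f}")

print("Preferred on prospective grounds:", max(options, key=net_future_value).name)
```

The point of the sketch is simply that the sunk_cost field never appears in the calculation; it is recorded only to show that it plays no part in the forward-looking decision.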
b. Sustainability. The effective life of the product must be understood, particularly as it applies to an organization’s needs. Some of this overlaps the points made by Ms. Symonds in her article but is meant to apply in a more strategic way. Every product, even software, has a limited productive life, but my concept here goes to what Glen Alleman pointed out in his blog as “bounded applicability.” Will the product require additional effort, in any form, where that effort provides a diminishing return? For example, I have seen cases where software manufacturers, in order to defend market share, make trivial enhancements, such as adding a chart or graph, in order to placate customer demands. The reason for this should be, but is not always, obvious: oftentimes more substantive changes cannot be made because the product was built on an earlier-generation operating environment or structure. Thus, in order to replicate the additional functionality found in newer products, the application requires a complete rewrite. All of us operating in this industry have seen this: a product that has been a mainstay for many years begins to lose market share, and the decision, when it is finally made, is to totally reengineer the solution, positioning it not as an upgrade to the original product but as a “new” product. This is true in terms of the effort necessary to keep the solution viable, but it also completely undermines justifications based on sunk cost.
c. Flexibility. As stated previously in this blog, the first generation of digitization mimicked those functions that were previously performed manually. The applications were also segmented and specialized based on traditional line and staff organizations and specialties. Thus, for project management, we have scheduling applications for the scheduling discipline (such as it is), earned value engines for the EV discipline, risk and technical performance applications for risk specialists and systems engineers, analytical software for project and program analysts, and financial management applications that subsumed the work of project and program financial management professionals. This led to the deployment of so-called best-of-breed configurations, where a smorgasbord of applications or modules was acquired to meet the requirements of the organization. Most often these applications had, and have, no direct compatibility, requiring entire staffs to reconcile data after the fact, once that data had been imported into a proprietary format in which it could be handled. Even within so-called ERP environments under one company, direct compatibility at the appropriate level of the data being handled has escaped the software manufacturers, requiring “bolt-ons” and other workarounds and third-party solutions. This condition undermines sustainability, adds a level of complexity that is hard to overcome, and adds a layer of cost to the life-cycle of the solutions being deployed.
The second wave to address some of these limitations focused on data flexibility using cubes, hard-coding of relational data and mapping, and data mining solutions: so-called Project Portfolio Management (PPM) and Business Intelligence (BI). The problem is that, in the first instance, PPM was simply another layer added to address management concerns, while early BI systems froze single points of failure in time within hard-coded, deployed solutions.
A flexible system is one that leverages the new advances in software operating environments to solve more than one problem. This, of course, undermines the model of financial returns in software, where the pattern has been to build one solution to address one problem based on a specialty. Such a system provides internal flexibility, that is, it allows for the application of objects and conditional formatting without hardcoding, pushing what previously had to be accomplished by coders down to the customer’s administrator or user level; and external flexibility, where the same application can address, say, EVM, schedule, risk, financial management, KPIs, technical performance, and stakeholder reporting, all in the same or in multiple deployed environments without the need for hardcoding. In this case the operating environment and any augmented code provide a flexible environment to the customer that allows one solution to displace multiple “best-of-breed” applications.
This flexibility should apply not only vertically but also horizontally, where data can be hierarchically organized to allow not only for drill-down but also for roll-up. Data in this environment is exposed discretely, providing to any particular user the data, aggregated as appropriate, that their role, responsibility, or need to know warrants.
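A minimal sketch of what this drill-down and roll-up, combined with role-based exposure, can look like in practice follows; the WBS-style hierarchy, the roles, and the role-to-level mapping are all hypothetical assumptions, not any particular product’s design.

```python
# Sketch: hierarchical data that supports both drill-down and roll-up, exposed
# by role. The hierarchy levels, roles, and records below are hypothetical.

from collections import defaultdict

records = [
    {"program": "Program A", "control_account": "CA-01", "work_package": "WP-01", "cost": 120.0},
    {"program": "Program A", "control_account": "CA-01", "work_package": "WP-02", "cost": 80.0},
    {"program": "Program A", "control_account": "CA-02", "work_package": "WP-03", "cost": 200.0},
]

def roll_up(rows, level):
    """Aggregate cost to the requested level of the hierarchy."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[level]] += row["cost"]
    return dict(totals)

# Role-based exposure: analysts see work packages, control account managers see
# control accounts, executives see program-level roll-ups only.
ROLE_LEVEL = {"analyst": "work_package", "cam": "control_account", "executive": "program"}

def view_for(role, rows):
    return roll_up(rows, ROLE_LEVEL[role])

print(view_for("analyst", records))    # drill-down detail
print(view_for("executive", records))  # rolled-up summary
```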
d. Interoperability and open compatibility. A condition of the “best-of-breed” deployment environment is that it allows sub-optimization to trump organizational goals. The most recent example I have seen of this is one in which the Integrated Master Schedule (IMS) and Performance Measurement Baseline (PMB) were obviously authored by different teams in different locations that, most likely, were at war with one another when they published these essential, interdependent project management artifacts.
But in terms of sustainability, the absence of interoperability and open compatibility has created untenable situations. In the example of PMB and IMS information above, in many cases a team of personnel must be engaged every month to reconcile the obvious disconnects between schedule activities and control accounts in order to ensure traceability in project management and performance. Surely there should be no economic rewards for such behavior; indeed, I believe no business would operate in that manner were those rewards not there.
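As a concrete illustration, here is a minimal sketch of the traceability check that such a reconciliation team performs by hand each month; the identifiers and record layouts are hypothetical, and in a real “best-of-breed” environment the hard part is getting both datasets into a comparable form at all.

```python
# Sketch: flagging schedule activities that cannot be traced to a PMB control
# account. The identifiers and record layouts are hypothetical.

ims_activities = [
    {"activity_id": "A100", "control_account": "CA-01"},
    {"activity_id": "A200", "control_account": "CA-02"},
    {"activity_id": "A300", "control_account": None},   # unmapped activity
]

pmb_control_accounts = {"CA-01", "CA-03"}                # from the cost tool

def untraceable(activities, control_accounts):
    """Return the schedule activities with no matching control account."""
    return [a["activity_id"] for a in activities
            if a["control_account"] not in control_accounts]

print(untraceable(ims_activities, pmb_control_accounts))  # ['A200', 'A300']
```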
Thus, interoperability in this case is to be able to deal with data in its native format without proprietary barriers that prevent its full use and exploitation to the needs and demands of the customer organization. Software that places its customers in a corner and ties their hands in using their own business information has, indeed, worn out its welcome.
The reaction of customer organizations to the software industry’s attempts to bind them to proprietary solutions has been most marked in the public sector, and most prominently in the U.S. Department of Defense. In the late 1990s the first wave was to ensure that performance management data centered around earned value was submitted in a non-proprietary format known as the ANSI X12 839 transaction set. Since that time DoD has specified the use of the UN/CEFACT XML D09B standard for cost and schedule information, and it appears that other, previously stove-piped data will be included in that standard in the future. This solution requires data transfer, but it is one that ensures that the underlying data can be normalized regardless of the underlying source application. It is especially useful for stakeholder reporting situations or data sharing in prime and sub-contractor relationships.
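The underlying idea, normalization into a neutral schema regardless of the source application, can be sketched as follows. The toy XML and field names here are hypothetical; they are not the actual element definitions of the ANSI X12 839 transaction set or the UN/CEFACT XML standard.

```python
# Sketch: mapping a source-specific export into a neutral, tool-agnostic record
# structure. The XML layout and field names are hypothetical, not the actual
# ANSI X12 839 or UN/CEFACT XML definitions.

import xml.etree.ElementTree as ET

toy_export = """
<costData>
  <item wbs="1.1" period="2014-06" actual="1500.0" budget="1400.0"/>
  <item wbs="1.2" period="2014-06" actual="900.0" budget="1000.0"/>
</costData>
"""

def normalize(xml_text):
    """Convert one tool's export into a common record format."""
    root = ET.fromstring(xml_text)
    return [
        {
            "wbs_element": item.get("wbs"),
            "reporting_period": item.get("period"),
            "actual_cost": float(item.get("actual")),
            "budgeted_cost": float(item.get("budget")),
        }
        for item in root.findall("item")
    ]

for record in normalize(toy_export):
    print(record)
```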
It is also useful for pushing for improvement in the disciplines themselves, driving professionalism. For example, in today’s project management environment, while the underlying architecture of earned value management and risk data is fairly standard, reflecting a cohesiveness of practice among its practitioners, schedule data tends to be disorganized, with much variability in how common elements are kept and reported. This also reflects much of the state of the scheduling discipline, where an almost “anything goes” mentality seems to be in play, reflecting not so much the realities of scheduling practice (which are pretty well established and uniform) as the lack of knowledge and professionalism on the part of schedulers, who are tied to the limitations and vagaries of their scheduling application of choice.
But, more directly, interoperability also includes the ability to access data (as opposed to application interfacing, data mining, hard-coded cubes, and data transfer) regardless of the underlying database, application, or structured data source. Early attempts to achieve interoperability and open compatibility utilized ODBC, but newer operating environments now leverage improved OLE DB and other enhanced methods. This ability, properly designed, also allows for the deployment of transactional environments, in which two-way communication is possible.
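For the data-access side, a minimal sketch using the pyodbc library shows the general pattern; the DSN, table, and column names are placeholders rather than any vendor’s actual schema, and OLE DB or another provider would follow the same idea through a different driver layer.

```python
# Sketch: reading activity data directly from an underlying source via ODBC
# rather than through a proprietary export. Requires the pyodbc package; the
# DSN, table, and column names are placeholders, not any vendor's schema.

import pyodbc

def read_schedule_activities(dsn: str = "ScheduleSource"):
    """Pull activity records straight from the source database."""
    conn = pyodbc.connect(f"DSN={dsn}")
    try:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT activity_id, description, start_date, finish_date "
            "FROM activities"
        )
        return [tuple(row) for row in cursor.fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    for activity in read_schedule_activities():
        print(activity)
```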
A new reality. Given these new capabilities, I think that we are entering a new phase in software design and deployment, in which the role of the coder in controlling the UI is reduced. In addition, given that the large software companies have continued to support a system that ties customers to proprietary solutions, I do not believe that the future of software lies in open source, as so many prognosticators claimed just a few short years ago. Instead, I propose that the applications that will be rewarded for their efforts are those that behave like open source but that innovate and provide maximum value, sustainability, flexibility, and interoperability to the customer.
Note: This post has been edited from the original for clarity and grammar.