(Data) Transformation–Fear and Loathing over ETL in Project Management

ETL stands for extract, transform, and load. This essential step is the basis for all of the new capabilities that we wish to acquire during the next wave of information technology: business analytics, big(ger) data, and interdisciplinary insight into the processes that drive productivity and efficiency.

I’ve been dealing with a good deal of fear and loathing regarding the introduction of this concept, even though in my day job my organization is a leading practitioner in the field in its vertical. Some of this is due to disinformation by competitors playing upon the fears of the non-technically minded–the expected reaction of those who can’t do, in the last throes of avoiding irrelevance. Better to baffle them with bullshit than with brilliance, I guess.

But, more importantly, part of this is due to the state of ETL and how it is communicated to the project management and business community at large. There is a great deal to be gained here by muddying the waters even by those who know better and have the technology. So let’s begin by clearing things up and making this entire field a bit more coherent.

Let’s start with the basics. Any organization that contains the interaction of people is a system. For purposes of a project management team, a business enterprise, or a governmental body we deal with a special class of systems known as Complex Adaptive Systems: CAS for short. A CAS is a non-linear learning system that reacts and evolves to its environment. It is complex because of the inter-relationships and interactions of more than two agents in any particular portion of the system.

I was first introduced to the concept of CAS through readings published out of the Santa Fe Institute in New Mexico. Most noteworthy is the work The Quark and the Jaguar by the physicist Murray Gell-Mann. Gell-Mann received the Nobel Prize in Physics in 1969 for his work on elementary particles, such as the quark, and is co-founder of the Institute. He also was part of the team that first developed simulated Monte Carlo analysis during a period he spent at RAND Corporation. Anyone interested in the basic science of quanta and how the universe works, and how that leads to insights into subjects such as day-to-day probability and risk, should read this book. It is a good popular scientific publication written by a brilliant mind, and very relevant to the subjects we deal with in project management and information science.

Understanding that our organizations are CAS allows us to apply all sorts of tools to better understand them and their relationship to the world at large. From a more practical perspective: what are the risks involved in the enterprise in which we are engaged, and what are the probabilities associated with any of the range of outcomes that we can label as success? For my purposes, the science of information theory is at the forefront of these tools. In this field an engineer by the name of Claude Shannon, working at Bell Labs, essentially invented the mathematical basis for everything that followed in the world of telecommunications: generating, interpreting, receiving, and understanding intelligence in communication, and the methods of processing information. Needless to say, computing is the main recipient of this theory.

Thus, all CAS process and react to information. The challenge for any entity that needs to survive and adapt in a continually changing universe is to ensure that the information being received is of high and relevant quality, so that the appropriate adaptation can occur. There will be noise in the signals that we receive. What we are looking for, from a practical perspective in information science, are the regularities in the data, so that we can bridge the mathematical treatment of receiving a message (where the message transmitted is the message received) and the definition of information quality that we find in the humanities. I believe that we will find that mathematical link eventually, but there is still a void there. A good discussion of this difference can be found in the on-line publication Double Dialogues.

Regardless of this gap, those of us who engage in the business of ETL must bring to the table the ability not only to ensure that the regularities in the information are identified and transmitted to the intended (or necessary) users, but also to distinguish the quality of the message in terms of the purpose of the organization. Shannon’s equation is where we start, not where we end. Given this background, there are really two basic types of data that we encounter when we look at a dataset: structured and unstructured data.

Structured data is data whose qualitative information content is either predefined by its nature or by a tag of some sort. For example, schedule planning and performance data, regardless of the idiosyncratic/proprietary syntax used by a software publisher, describe the same phenomena regardless of the software application. There are only so many ways to identify snow–and, no, the Inuit people do not have 100 words to describe it. Qualifiers apply in the humanities, but usually our business processes more closely align with statistical and arithmetic measures. As a result, structured data is oftentimes defined by its position in a hierarchical, time-phased, or interrelated system that contains a series of markers, indexes, and tables that allow it to be interpreted easily through the identification of a Rosetta stone, even when the system, at first blush, appears to be opaque. When you go to a book, its title describes what it is. If its content has a table of contents and/or an index, it is easy to find the information needed to perform the task at hand.

Unstructured data consists of the content of things like letters, e-mails, presentations, and other forms of data disconnected from its source systems and collected together in a flat repository. In this case the data must be mined to recreate what is not there: the title that describes the type of data, a table of contents, and an index.

All data requires initial scrubbing and pre-processing. The difference here is the means used to perform this operation. Let’s take the easy path first.

For project management–and most business systems–we most often encounter structured data. What this means is that, by understanding and interpreting standard industry terminology, schemas, and APIs, the process of aligning data to be transformed and stored in a database for consumption can be reduced to a systemic and repeatable process, without the redundancy of rediscovery applied in every instance. Our business intelligence and business analytics systems can be further developed to anticipate a probable question from a user so that the query is pre-structured to allow for near-immediate response. Further, structuring the user interface in such a way as to make the response to the query meaningful, especially when integrated with and juxtaposed against other types of data, requires subject matter expertise to be incorporated into the solution.
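
To make this concrete, here is a minimal sketch in Python of such a repeatable step. The vendor field names, the common schema, and the database layout are hypothetical stand-ins, not any particular publisher’s format; the point is that the mapping is declared once and reused, rather than rediscovered with every submission.

# Minimal sketch of a repeatable structured-ETL step: map vendor-specific
# schedule fields to a common schema and load them into a query-ready table.
# The field names and mapping below are hypothetical, for illustration only.
import sqlite3

FIELD_MAP = {  # vendor term -> common schema term
    "ACT_ID": "task_id",
    "ACT_NAME": "task_name",
    "EARLY_START": "start_date",
    "EARLY_FINISH": "finish_date",
    "PCT_CMP": "percent_complete",
}

def transform(record):
    """Rename vendor fields to the common schema and coerce simple types."""
    row = {common: record.get(vendor) for vendor, common in FIELD_MAP.items()}
    row["percent_complete"] = float(row["percent_complete"] or 0.0)
    return row

def load(rows, db_path="project.db"):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS tasks (
        task_id TEXT PRIMARY KEY, task_name TEXT,
        start_date TEXT, finish_date TEXT, percent_complete REAL)""")
    con.executemany(
        "INSERT OR REPLACE INTO tasks VALUES "
        "(:task_id, :task_name, :start_date, :finish_date, :percent_complete)",
        rows)
    con.commit()
    con.close()

source = [{"ACT_ID": "1010", "ACT_NAME": "Design review",
           "EARLY_START": "2017-07-03", "EARLY_FINISH": "2017-07-14",
           "PCT_CMP": "45"}]
load([transform(r) for r in source])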

Structured ETL is the place that I most often inhabit as a provider of software solutions. These processes are both economical and relatively fast, particularly in those cases where they are applied to an otherwise inefficient system of best-of-breed applications that require data transfers and cross-validation prior to official reporting. Time, money, and effort are all saved by automating this process, improving not only processing time but also data accuracy and transparency.

In the case of unstructured data, however, the process can be a bit more complicated and there are many ways to skin this cat. The key here is that oftentimes what seems to be unstructured data is only so because of the lack of domain knowledge by the software publisher in its target vertical.

For example, I recently read a white paper published by a large BI/BA publisher regarding their approach to financial and accounting systems. My own experience as a business manager and Navy Supply Corps Officer provides me with the understanding that these systems are highly structured and regulated. Yet this business intelligence publisher treated the data with an unstructured approach to mining it–and blatantly advertised, and apparently sold, that approach as state of the art.

This approach, which was first developed back in the 1980s when we first encountered the challenge of data that exceeded our expertise at the time, requires a team of data scientists and coders to go through the labor- and time-consuming process of pre-processing and building specialized processes. The most basic form of this approach involves techniques such as frequency analysis, summarization, correlation, and data scrubbing. This last portion also involves labor-intensive techniques at the microeconomic level such as binning and other forms of manipulation.
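
For those who have not seen these techniques up close, here is a minimal sketch in Python of two of them–frequency analysis and binning–applied to made-up text and figures. It is only an illustration of the kind of pre-processing involved, not anyone’s production pipeline.

# Minimal sketch of two of the basic techniques named above, applied to
# unstructured text: word-frequency analysis and binning of a numeric field.
# The sample documents and bin edges are invented for illustration.
from collections import Counter
import re

documents = [
    "Schedule slip on the integration task; risk of cost overrun.",
    "Cost overrun traced to late material delivery on the integration task.",
]

# Frequency analysis: which terms recur across the repository?
tokens = re.findall(r"[a-z]+", " ".join(documents).lower())
print(Counter(tokens).most_common(5))

# Binning: collapse a continuous measure (here, hypothetical overrun
# percentages extracted by some upstream step) into categorical buckets.
overruns = [2.5, 7.0, 12.4, 31.0]
bins = [(0, 5, "minor"), (5, 15, "moderate"), (15, float("inf"), "severe")]

def bin_value(x):
    return next(label for lo, hi, label in bins if lo <= x < hi)

print([bin_value(x) for x in overruns])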

This is where the fear and loathing comes into play. It is not as if all information systems do not perform these functions in some manner, it is that in structured data all of this work has been done and, oftentimes, is handled by the database system. But even here there is a better way.

My colleague, Dave Gordon, who has his own blog, will emphasize that the identification of probable questions and configuration of queries in advance combined with the application of standard APIs will garner good results in most cases. Yet, one must be prepared to receive a certain amount of irrelevant information. For example, the query on Google of “Fun Things To Do” that you may use if you are planning for a weekend will yield all sorts of results, such as “50 Fun Things to Do in an Elevator.”  This result includes making farting sounds. The link provides some others, some of which are pretty funny. In writing this blog post, a simple search on Google for “Google query fails” yields what can only be described as a large number of query fails. Furthermore, this approach relies on the data originator to have marked the data with pointers and tags.

Given these different approaches to unstructured data and the complexity involved, there is a decision process to apply (a minimal sketch in code follows the list):

1. Determine if the data is truly unstructured. If the data is derived from a structured database from an existing application or set of applications, then it is structured and will require domain expertise to inherit the values and information content without expending unnecessary resources and time. A structured, systemic, and repeatable process can then be applied. Oftentimes an industry schema or standard can be leveraged to ensure consistency and fidelity.

2. Determine whether only a portion of the unstructured data is relevant to your business processes and use it to append and enrich the existing structured data that has been used to integrate and expand your capabilities. In most cases the identification of a Rosetta Stone and standard APIs can be used to achieve this result.

3. For the remainder, determine the value of mining the targeted category of unstructured data and perform a business case analysis.
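
The triage above can be sketched in a few lines of Python. The predicates are hypothetical placeholders; in practice each one represents real analysis of the data source, its schema, and the business case.

# Minimal sketch of the three-step triage above. The dictionary keys are
# hypothetical placeholders for the results of real assessments.
def triage(dataset):
    if dataset.get("derived_from_structured_source"):
        return "Treat as structured: apply domain schemas/APIs in a repeatable process"
    if dataset.get("portion_relevant_to_business"):
        return "Extract the relevant portion and use it to enrich the structured data"
    if dataset.get("mining_value_exceeds_cost"):
        return "Mine it: the business case supports the investment"
    return "Defer: no business case for mining at this time"

print(triage({"derived_from_structured_source": False,
              "portion_relevant_to_business": True}))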

Given the rapidly expanding size of data that we can access using the advancing power of new technology, we must be able to distinguish between doing what is necessary and doing what is impressive. The definition of Big Data has evolved over time because our hardware, storage, and database systems allow us to access increasingly larger datasets that ten years ago would have been unimaginable. What this means is that–initially–as we work through this process of discovery, we will be bombarded with a plethora of irrelevant statistical measures and so-called predictive analytics that will eventually fail the “so-what” test. This process places users in a state of information overload, and we often see this condition today. It also means that what took an army of data scientists and developers to do ten years ago takes a technologist with a laptop and some domain knowledge to perform today. This last can be taught.

The next necessary step, aside from applying the decision process above, is to force our information systems to advance their processing to provide more relevant intelligence that is visualized and configured to the domain expertise required. In this way we will eventually discover the paradox that effectively accessing larger sets of data will yield less, but more relevant, intelligence that can be translated into action.

At the end of the day the manager and user must understand the data. There is no magic in data transformation or data processing. Even with AI and machine learning it is still incumbent upon the people within the organization to be able to apply expertise, perspective, knowledge, and wisdom in the use of information and intelligence.

Money for Nothing — Project Performance Data and Efficiencies in Timeliness

I operate in a well regulated industry focused on project management. What this means practically is that there are data streams that flow from the R&D activities, recording planning and progress, via control and analytical systems to both management and customer. The contract type in most cases is Cost Plus, with cost and schedule risk often flowing to the customer in the form of cost overruns and schedule slippages.

Among the methodologies used to determine progress and project eventual outcomes is earned value management (EVM). Of course, this is not the only type of data that flows in performance management streams, but oftentimes EVM is used as shorthand to describe all of the data captured and submitted to customers in performance management. Other planning and performance management data includes time-phased scheduling of tasks and activities, cost and schedule risk assessments, and technical performance.

Previously in my critique regarding the differences between project monitoring and project management (before Hurricane Irma created some minor rearranging of my priorities), I pointed out that “looking in the rear view mirror” was often used as an excuse for by-passing unwelcome business intelligence. I followed this up with an intro to the synergistic economics of properly integrated data. In the first case I answered the critique demonstrating that it is based on an old concept that no longer applies. In the second case I surveyed the economics of data that drives efficiencies. In both cases, new technology is key to understanding the art of the possible.

As I have visited sites in both government and private industry, I find that old ways of doing things still persist. The reasons for this are several. First, technology is developing so quickly that there is fear that one’s job will be eliminated with the introduction of technology. Second, the methodology of change agents in introducing new technology often lacks proper socialization across the various centers of power that inevitably exist in any organization. Third, the proper foundation to clearly articulate the need for change is not laid. This last is particularly important when stakeholders perform a non-rational assessment in their minds of cost-benefit. They see many downsides and cannot accept the benefits, even when they are obvious. For more on this and insight into other socioeconomic phenomena I strongly recommend Daniel Kahneman’s Thinking, Fast and Slow. There are other reasons as well, but these are the ones that are most obvious when I speak with individuals in the field.

The Past is Prologue

For now I will restrict myself to the one benefit of new technology that addresses the “looking in the rear window” critique. It is important to do so because the critique is correct in application (for purposes that I will outline) if incorrect in its cause-and-effect. It is also important to focus on it because the critique is so ubiquitous.

As I indicated above, there are many sources of data in project management. They derive from the following systems (in brief):

a. The planning and scheduling applications, which measure performance through time in the form of discrete activities and events. In the most sophisticated implementations, these applications will include the assignment of resources, which requires the integration of these systems with resource management. Sometimes simple costs are also assigned and tracked through time as well.

b. The cost performance (earned value) applications, which ideally are aligned with the planning and scheduling applications, providing cross-integration with WBS and OBS structures, but focused on work accomplishment defined by the value of work completed against a baseline plan. These performance figures are tied to work accomplishment through expended effort collected by and, ideally, integrated with the financial management system. This involves the proper application of labor rates and resource expenditures in the accomplishment of the work to provide not only a statistical assessment of performance to date, but a projection of likely cost performance outcomes at completion of the effort.

c. Risk assessment applications which, depending on their sophistication and ease of use, provide analysis of possible cost and schedule outcomes, identify the sensitivity of particular activities and tasks, provide an assessment of alternative driving and critical paths, and apply different models of baseline performance to predict future outcomes.

d. Systems engineering applications that provide an assessment of technical performance to date and the likely achievement of technical parameters within the scope of the effort.

e. The financial management applications that provide an accounting of funds allocation, cash-flow, and expenditure, including planning information regarding expenditures under contract and planned expenditures in the future.

These are the core systems of record from which performance information is derived. There are others as well, depending on the maturity of the project, such as ERP and MRP systems. But for purposes of this post, we will bound the discussion to these standard sources of data.

In the near past, our ability to understand the significance of the data derived from these systems required manual processing. I am not referring to the sophistication of the human computers of the 1960s and before, dramatized to great effect in the uplifting movie Hidden Figures. Since we are dealing with business systems, these methodologies were based on simple business metrics and other statistical methods, including those that extended the concept of earned value management.

With the introduction of PCs in the workplace in the 1980s, desktop spreadsheet applications allowed this data to be entered, usually from printed reports. Each analyst not only used standard methods common in the discipline, but also developed their own methods to process and derive importance from the data, transforming it into information and useful intelligence.

Shortly after this development, simple analytical applications were introduced to the market that allowed for paring back the amount of data deriving from some of these systems and performing basic standard calculations, rendering redundant manual calculations unnecessary. Thus, for example, instead of a person having to calculate multiple estimates to complete, the application could perform those calculations as part of its functionality and deliver them to the analyst for use in, hopefully, their own more extensive assessments.
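
As a reminder of the kind of arithmetic those early applications absorbed, here is a minimal sketch in Python of two standard independent estimates at completion. The input figures are invented for illustration; the formulas are the familiar EVM ones.

# Minimal sketch of standard EVM arithmetic: performance indices and two
# common estimate-at-completion (EAC) formulas. Input values are made up.
bac, bcwp, bcws, acwp = 1000.0, 400.0, 500.0, 450.0  # budget and cumulative values

cpi = bcwp / acwp                 # cost performance index
spi = bcwp / bcws                 # schedule performance index

eac_cpi = bac / cpi               # assumes current cost efficiency continues
eac_composite = acwp + (bac - bcwp) / (cpi * spi)  # weights in schedule pressure

print(f"CPI={cpi:.2f} SPI={spi:.2f} "
      f"EAC(CPI)={eac_cpi:.0f} EAC(CPI*SPI)={eac_composite:.0f}")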

But even in this case, the data flow was limited to the EVM silo. The data streams relating to schedule, risk, SE, and FM were left to their own devices, oftentimes requiring manual methods or, in the best of cases, cut-and-paste, to incorporate data from reports derived from these systems. In the most extreme cases, for project oversight organizations, this caused analysts to acquire a multiplicity of individual applications (with the concomitant overhead and complexity of understanding differing lexicons and software application idiosyncrasies) in order to read proprietary data types from the various sources just to perform simple assessments of the data before even considering integrating it properly into the context of all of the other project performance data that was being collected.

The bottom line of outlining these processes is to note that, given a combination of manual and basic automated tools, putting together and reporting on this data takes time, and time, as Mr. Benjamin Franklin noted, is money.

By itself, the critique that “looking in the rear view mirror” has no value, and the attribution of that flaw to one particular type of information (EVM), is specious. After all, one must know where one has been and presently is before one can figure out where one needs to go and how to get there, and EVM is just one dimension of a multidimensional space.

But there is a utility value associated with the timing and locality of intelligence and that is the issue.

Contributors to time

Time, when expended to produce something, is a form of entropy. For purposes of this discussion, at this level of existence, I am defining entropy as the measure of the energy in a system that is no longer available to do work. The work in this case is the processing and transformation of data into information, and the further transformation of information into usable intelligence.

There are different levels and sub-levels when evaluating the data stream related to project management. These are:

a. Within the supplier/developer/manufacturer

(1) First-tier personnel such as Control Account Managers, Schedulers (if separate), Systems Engineers, Financial Managers, and Procurement personnel, among others, actually recording and verifying the work accomplishment;

(2) Second-tier personnel, including various levels of management, either across teams or in typical line-and-staff organizations.

b. Within customer and oversight organizations

(1) Reporting and oversight personnel tasked with evaluating the fidelity of specific business systems;

(2) Counterpart project or program officer personnel tasked with evaluating progress, risk, and any factors related to scope execution;

(3) Staff organizations designed to supplement and organize the individual project teams, providing a portfolio perspective to project management issues that may be affected by other factors outside of the individual project ecosystem;

(4) Senior management at various levels of the organization.

Given the multiplicity of data streams, it appears that the issue of economies is vast, until it is understood that the data underlying the information these consumers use is highly structured and specific to each of the domains and sub-domains. Thus there are several opportunities for economies.

For example, cost performance and scheduling data have a direct correlation and are closely tied. Thus, these separate streams in the A&D industry were combined under a common schema, first using the UN/CEFACT XML, and now transitioning to a more streamlined JSON schema. Financial management has gone through a similar transition. Risk and SE data are partially incorporated into project performance schemas, but the data is also highly structured and possesses commonalities to be directly accessed using technologies that effectively leverage APIs.
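
To give a feel for what a combined record might look like, here is a hypothetical sketch in Python of a single cost/schedule element under a common JSON-style schema. The field names are invented for illustration and are not a reproduction of the UN/CEFACT XML or its JSON successor.

# Hypothetical sketch of a combined cost/schedule record under a common
# JSON-style schema. Field names are invented and do not reproduce any
# actual industry schema.
import json

record = {
    "contract_id": "HYPOTHETICAL-0001",
    "period_end": "2017-07-31",
    "wbs_element": "1.2.3",
    "schedule": {"task_id": "1010", "baseline_finish": "2017-09-15",
                 "forecast_finish": "2017-09-29"},
    "cost": {"bcws": 500.0, "bcwp": 400.0, "acwp": 450.0, "bac": 1000.0},
}

# A trivial completeness check of the kind a common schema makes repeatable.
required = {"contract_id", "period_end", "wbs_element", "schedule", "cost"}
assert required <= record.keys(), "record missing required top-level fields"
print(json.dumps(record, indent=2))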

Back to the Future

The current state, despite advances in data formats that allow for easy rationalization and normalization of data that breaks through proprietary barriers, is still largely based on a slightly modified model: a combination of manual processing augmented by domain-specific analytical tools. (Actually, sub-domain analytical tools that support sub-optimization of data and that are a barrier to the cross-domain integration necessary to create credible project intelligence.)

Thus, it is not unusual at the customer level to see project teams still accepting a combination of proprietary files, hard copy reports, and standard schema reports. Usually the data in these sources is manually entered into Excel spreadsheets or a combination of Excel and some domain-specific analytical tool (and oftentimes several sub-specialty analytical tools). After processing, the data is oftentimes exported or built in PowerPoint in the form of graphs or standard reporting formats. This is information management by Excel and PowerPoint.

In sum, in all too many cases the project management domain, in terms of data and business intelligence, continues to party like it is 1995. This condition also fosters and reinforces insular organizational domains, as if the project team were disconnected from the larger organization and could possess goals antithetical to, or in opposition to, its efficient operation.

A typical timeline goes like this:

a. Supplier provides project performance data 15-30 days after the close of a period. (Some contract clauses give more time). Let’s say the period closed at the end of July. We are now effectively in late August or early September.

b. Analysts incorporate stove-piped domain data into their Excel spreadsheets and other systems another week or so after submittal.

c. Analysts complete processing and analyzing data and submit in standard reporting formats (Excel and PowerPoint) for program review four to six weeks after incorporation of the data.

Items a through c now put a typical project office at project review for July information at the end of September or beginning of October. Furthermore, this information is focused on individual domains, and given the lack of cross-domain knowledge, can be contradictory.

This system is broken.
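
To make the cumulative lag concrete, here is a small sketch in Python using the rough durations cited above; the specific dates are illustrative only.

# Small sketch of the cumulative reporting lag described above, using the
# rough durations from the timeline (30-day submission, ~1 week incorporation,
# 4 weeks of analysis and reporting). Dates are illustrative.
from datetime import date, timedelta

period_close = date(2017, 7, 31)
submission = period_close + timedelta(days=30)       # a. data submitted
incorporation = submission + timedelta(days=7)       # b. loaded into spreadsheets/tools
program_review = incorporation + timedelta(weeks=4)  # c. analysis and reporting

print(f"July data reaches program review on {program_review}")  # beginning of October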

Even suppliers who have direct access to systems of record all too often rely on domain-specific solutions to be able to derive significance from the processing of project management data. The larger suppliers seem to have recognized this problem and have been moving to address it, requiring greater integration across solutions. But the existence of a 15-30 day reconciliation period after the end of a period, and formalized in contract clauses, is indicative of an opportunity for greater efficiency in that process as well.

The Way Forward

But there is another way.

The opportunities for economy in the form of improvements in time and effort are in the following areas, given the application of the right technology:

  1. In the submission of data, especially by finding data commonalities and combining previously separate domain data streams to satisfy multiple customers;
  2. In retrieving all data so that it is easily accessible to the organization at the level of detail required by the task at hand;
  3. In processing this data so that it can be converted by the analyst into usable intelligence;
  4. In accessing, displaying, and reporting properly integrated data across domains, as appropriate, to each level of the organization regardless of originating data stream.

Furthermore, there are opportunities to realize business value by improving these processes:

  1. By extending expertise beyond a limited number of people who tend to monopolize innovations;
  2. By improving organizational knowledge by incorporating innovation into the common system;
  3. By gaining greater insight into more reliable predictors of project performance across domains instead of the “traditional” domain-specific indices that have marginal utility;
  4. By developing a project focused organization that breaks down domain-centric thinking;
  5. By developing a culture that ties cross-domain project knowledge to larger picture metrics that will determine the health of the overarching organization.

It is interesting, when I visit the field, how often it is asserted that “the technology doesn’t matter, it’s process that matters”.

Wrong. Technology defines the art of the possible. There is no doubt that in an ideal world we would optimize our systems prior to the introduction of new technology. But that assumes that the most effective organization (MEO) is achievable without technological improvements to drive the change. If one cannot efficiently integrate all submitted cross-domain information using Excel in any scenario (after all, it’s a lot of data), then the key is the introduction of new technology that can do that very thing.

So what technologies will achieve efficiency in the use of this data? Let’s go through the usual suspects:

a. Will more effective use of PowerPoint reduce these timelines? No.

b. Will a more robust set of Excel workbooks reduce these timelines? No.

c. Will an updated form of a domain-specific analytical tool reduce these timelines? No.

d. Will a NoSQL solution reduce these timelines? Yes, given that we can afford the customization.

e. Will a COTS BI application that accepts a combination of common schemas and APIs reduce these timelines? Yes.

The technological solution must be fitted to its purpose and time. Technology matters because we cannot avoid the expenditure of time or energy (entropy) in the processing of information. We can perform these operations using a large amount of energy in the form of time and effort, or we can conserve time and effort by substituting the power of computing and information processing. While we will never get to the point where we completely eliminate entropy, our application of appropriate technology makes it seem as if effort in the form of time is significantly reduced. It’s not quite money for nothing, but it’s as close as we can come and is an obvious area of improvement that can be made for a relatively small investment.

All Along the Watch Tower — Project Monitoring vs. Project Management

My two month summer blogging hiatus has come to a close. Along the way I have gathered a good bit of practical knowledge related to introducing and implementing process and technological improvements into complex project management environments. More specifically, my experience is in introducing new adaptive technologies that support the integration of essential data across the project environment–integrated project management in short–and do so by focusing on knowledge discovery in databases (KDD).

An issue that arose during these various opportunities reminded me of the commercial where a group of armed bank robbers enter a bank and have everyone lay on the floor. One of the victims whispers to a uniformed security officer, “Hey, do something!” The security officer replies, “Oh, I’m not a security guard, I’m a security monitor. I only notify people if there is a robbery.” He looks to the robbers who have a hostage and then turns back to the victim and says calmly, “There’s a robbery.”

We oftentimes face the same issues in providing project management solutions. New technologies have expanded the depth and breadth of information that is available to project management professionals. Oftentimes the implementation of these solutions gets to the heart of whether people consider themselves project managers or project monitors.

Technology, Information, and Cognitive Dissonance

This perceptual conflict oftentimes plays itself out in resistance to change in automated systems. In today’s world the question of acceptance is a bit different than when I first introduced automated solutions into organizations more than 30 years ago. That period, which represented the first modern wave of digitization, focused on simply automating previously manual functions that supported existing line-and-staff organizations. Software solutions were constructed to fit into the architecture of the social or business systems being served, regardless of whether those systems were inefficient or sub-optimal.

The challenge is a bit different today. Oftentimes new technology is paired with process changes that will transform an organization–and quite often is used as the leading edge in that initiative. The impact on work is transformative, shifting the way that the job and the system itself is perceived given the new information.

Leon Festinger, in his work A Theory of Cognitive Dissonance (1957), stated that people seek psychological consistency in order to function in the real world. When faced with information or a situation that contradicts that consistency, individuals will experience psychological discomfort. The individual can then adapt to the new condition by accepting the change, by adding rationalizations to connect their present perceptions to the change, or by challenging the change–either by attacking its validity, by rejecting its conclusions, or by avoidance.

The most problematic of the reactions that can be encountered in IT project management are the last two. When I have introduced a new technology paired with process change this manifestation has usually been justified by the refrains that:

a. The new solution is too hard to understand;

b. The new solution is too detailed;

c. The new solution is too different from the incumbent technology;

d. The solution is unrelated to “my job of printing out one PowerPoint chart”;

e. “Why can’t I just continue to use my own Excel workbooks/Access database/solution”;

f. “Earned value/schedule/risk management/(add PM methodology here) doesn’t tell me what I don’t already know/looks in the rear view mirror/doesn’t add enough value/is too expensive/etc.”

For someone new to this kind of process the objections often seem daunting. But some perspective always helps. To date, I have introduced and implemented three waves of technology over the course of my career and all initially encountered resistance, only to eventually be embraced. In a paradoxical twist (some would call it divine justice, karma, or universal irony), oftentimes the previous technology I championed, which sits as the incumbent, is used as a defense against the latest innovation.

A reasonable and diligent person involved in the implementation of any technology which, after all, is also project management, must learn to monitor conditions to determine if there is good reason for resistance, or if it is a typical reaction to relatively rapid change in a traditionally static environment. The point, of course, is not only to meet organizational needs, but to achieve a high level of acceptance in software deployment–thus maximizing ROI for the organization and improving organizational effectiveness.

If process improvement is involved, an effective pairing and coordination with stakeholders is important. But such objections, while oftentimes a reaction to people receiving information they prefer not to have, are ignored at one’s own peril. This is where such change processes require both an analytical and leadership-based approach.

Technology and Cultural Change – Spock vs. Kirk

In looking at resistance one must determine whether the issue is one of technology or some reason of culture or management. Testing the intuitiveness of the UI, for example, is best accomplished by beta testing among SMEs. Clock speeds, latency, reliability, accuracy and fidelity in data, and other technological characteristics are easily measured and documented. This is the Mr. Spock side of the equation, where, in an ideal world, rationality and logic should lead one to success. Once these processes are successfully completed, however, the job is still not done.

Every successful deployment still contains within it pockets of resistance. This is the emotional part of technological innovation that oftentimes is either ignored or that managers hope to paper or plow over, usually to their sorrow. It is here that we need to focus our attention. This is the Captain Kirk part of the equation.

The most vulnerable portion of an IT project deployment happens within the initial period of inception. Rolling wave implementations that achieve quick success will often find that there is more resistance over time as each new portion of the organization is brought into the fold. There are many reasons for this.

New personnel may be going by what they observed from the initial embrace of the technology and may not have liked the results. Perhaps buy-in was not obtained by the next group prior to their inclusion, or senior management is not fully on-board. Perhaps there is a perceived or real fear of job loss, or job transformation that was not socialized in advance. It is possible that the implementation focused too heavily on the needs of the initial group of personnel brought under the new technology, which caused the technology to lag in addressing the needs of the next wave. It could also be that the technology is sufficiently different as to represent a “culture shock”, which causes an immediate defensive reaction. If there are outsourced positions, the subcontractor may feel that its interests are threatened by the introduction of the technology. Some SMEs, having created “irreplaceable asset” barriers, may feel that their position would be eroded if they were to have to share expertise and information with other areas of the organization. Lower level employees fear that management will have unfettered access to information prior to vetting. The technology may be oversold as a panacea, rather than as a means of addressing organizational or information management deficiencies. All of these reasons, and others, are motivations to explore.

There is an extensive literature on the ways to address the concerns listed above, and others. Good examples can be found here and here.

Adaptive COTS or Business Intelligence technologies, as well as rapid response teams based on Agile, go a long way in addressing and handling barriers to acceptance on the technology side. But additional efforts at socialization and senior management buy-in are essential and will be the difference maker. No amount of argumentation will persuade people otherwise inclined to defend the status quo, even when the benefits are self-evident. Leadership by information consumers–both internal and external–as well as decision-makers will win the day.

Process and Technology – Integrated Project Management and Big(ger) Data

The first wave of automation digitized simple manual efforts (word processing, charts, graphs). This resulted in an incremental increase in productivity but, more importantly, it shifted work so that administrative overhead was eliminated. There are no secretarial pools or positions as there were when I first entered the workforce.

The second and succeeding waves tackled transactional systems based on line and staff organizational structures, and work definitions. Thus, in project management, EVM systems were designed for cost analysts, scheduling apps for planners and schedulers, risk analysis software for systems engineers, and so on.

All of these waves had a focus on functionality of hard-coded software solutions. The software determined what data was important and what information could be processed from it.

The new paradigm shift is a focus on data. We see this through the buzz phrase “Big Data”.  But what does that mean? It means that all of the data that the organization or enterprise collects has information value. Deriving that information value, and then determining its relevance and whether it provides actionable intelligence, is of importance to the organization.

Thus, implementations of data-focused solutions represent not only a shift in the way that work is performed, but also how information is used, and how the health and performance of the organization is assessed. Horizontal information integration across domains provides insights that were not apparent in the past when data was served to satisfy the needs of specialized domains and SMEs. New vulnerabilities and risks are uncovered through integration. This is particularly clear when implementing integrated project management (IPM) solutions.

A pause to provide a definition is in order, especially since IPM is gaining traction, and large, lazy, and entrenched incumbents are adjusting their marketing in the hope of muddying the waters, trying to fit the square peg of their narrowly focused and hard-coded solutions into the round hole of flexible IPM solutions.

Integrated Project Management comprises the processes and integration of information necessary to derive actionable intelligence from all of the relevant cross-domain information involved in the project organization. This includes cost performance, schedule performance, financial performance and execution, contract implementation, milestone achievement, resource management, and technical performance. Actionable intelligence is that information which is relevant to the project decision-making authority and which effectively identifies specific probable qualitative and quantitative risks, risk impact, and risk handling necessary to make project trade-offs, project re-baselining or re-scope, cost-as-an-independent-variable (CAIV), or project cancellation decisions. Underlying all of this are feedback-loop systems assessments to ensure that there is integrity and fidelity in our business systems–both human and digital.
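
As a purely illustrative sketch–not anyone’s product–here is what a minimal cross-domain snapshot and a trivial actionable-intelligence check might look like in Python. The fields, names, and thresholds are invented for the example.

# Hypothetical sketch of a cross-domain project snapshot and a trivial
# "actionable intelligence" check that cuts across cost, schedule, financial,
# technical, and risk data. Fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class ProjectSnapshot:
    cpi: float                  # cost performance index
    spi: float                  # schedule performance index
    funds_remaining: float      # financial execution
    tech_params_on_track: bool  # technical performance
    open_high_risks: int        # risk register

def flags(s: ProjectSnapshot):
    out = []
    if s.cpi < 0.95 or s.spi < 0.95:
        out.append("performance below baseline: consider trade-offs or re-baseline")
    if not s.tech_params_on_track:
        out.append("technical parameters at risk within scope")
    if s.open_high_risks > 0 and s.funds_remaining <= 0:
        out.append("high risks open with no funds available to handle them")
    return out

print(flags(ProjectSnapshot(0.92, 0.97, 1_500_000.0, True, 2)))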

No doubt, we have a ways to go to get to this condition, but organizations are getting there. What it will take is a change in the way leadership views its role, in rewriting traditional project management job descriptions, in cross-domain training and mentoring, and in enforcing, both for ourselves and in others, the dedication to the ethics that are necessary to do the job.

Practice and Ethics in Project Management within Public Administration

The final aspect of implementations of project management systems that is often overlooked, and which oftentimes frames the environment that we are attempting to transform, concerns ethical behavior in project management. It is an aspect of project success as necessary as any performance metric, and it is one for which leadership within an organization sets the tone.

My own expertise in project management has concerned itself in most cases with project management in the field of public administration, though as a businessman I also have experience in the commercial world. Let’s take public administration first since, I think, it is the most straightforward.

When I wore a uniform as a commissioned Naval officer I realized that, in my position and duties, I was merely an instrument of the U.S. Navy and its constitutional and legal underpinnings. My own interests were separate from, and needed to be firewalled from, the execution of my official duties. When I have observed deficiencies in the behavior of others in similar positions, this is the dichotomy that often fails to be inculcated in the individual.

When enlisted personnel salute a commissioned officer they are not saluting the person, they are saluting and showing respect to the rank and position. The officer must earn respect as an individual. Having risen from the enlisted ranks, these were the aspects of leadership that were driven home to me in observing this dynamic: in order to become a good leader, one must first have been a good follower; you must demonstrate trust and respect to earn trust and respect. One must act ethically.

Oftentimes officials in other governmental entities–elected officials (especially), judges, and law enforcement–fail to understand this point and hence fail this very basic rule of public behavior. The law and their position deserve respect. The behavior and actions of the individuals in their office will determine whether they personally should be shown respect. If an individual abuses their position or the exercise of discretion, they are not worthy of respect, with the danger that they will delegitimize and bring discredit to the office or position.

But earning respect is only one aspect of this understanding of ethical behavior in public administration. It also means that one will make decisions based on the law, ethical principles, and public policy regardless of whether one personally agrees or disagrees with the resulting conclusion of those criteria. That an individual will also apply the same criteria whether or not the decision adversely impacts their own personal interests, or those of associates, friends, or family, is also part of the weight of ethical behavior.

Finally, in applying the ethical test rule, one must also accept responsibility and accountability in executing one’s duties. This means being diligent, constantly striving for excellence and improvement, leading by example, and always representing the public interest. Note that ego, personal preference, opinion, bias, self-interest, and other such concerns have no place in the ethical exercise of public administration.

So what does that mean for project management? The answer goes to the heart of whether one views himself or herself as a project manager or project monitor. In public administration the program manager has a unique set of responsibilities tied to the acquisition of technologies that is rarely replicated in private industry. Oftentimes this involves shepherding a complex effort via contractual agreements that involve large specialized businesses–and often a number of subcontractors–across several years of research and development before a final product is ready for production and deployment.

The primary role in this case is to ensure that the effort is making progress and executing the program toward the goal; to ensure accountability for the funds being expended, which were appropriated for the specific effort by Congress; to ensure that the work intended by those expenditures, carried out through the contractual agreements, remains in compliance; and to identify and handle risks that may manifest, in order to bring the effort into line with the cost, schedule, and technical baselines, all the while staying within the program’s framing assumptions. In addition, the program manager must coordinate with operational managers who are anticipating the deployment of the end item being developed, manage expectations, and determine how best to plan for sustainability once the effort goes to production and deployment. This is, of course, a brief summary of the extensive duties involved.

Meeting these responsibilities requires diligence, information that provides actionable intelligence, and a great deal of subject matter expertise. Finding and handling risks, determining if the baseline is executable, maintaining the integrity of the effort–all require leadership and skill. This is known as project management.

Project monitoring, by contrast, is acceptance of information provided by self-interested parties without verification, of limiting the consumption and processing of essential project performance information, of demurring to any information of a negative nature regarding project performance or risk, of settling for less than an optimal management environment, and using these tactics to, euphemistically, kick the ball down the court to the next project manager in the hope that the impact of negligence falls on someone else’s watch. Project monitoring is unethical behavior in public administration.

Practice and Ethics in Project Management within Private Industry

The focus in private industry is a bit different since self-interest abounds and is rewarded. But there are ethical rules that apply, and which a business person in project management would be well-served to apply.

The responsibility of the executive or officers in a business is to uphold the interests of the enterprise’s customers, its employees, and its shareholders. Oftentimes business owners will place unequal weighting on these interests, but the best businesses view these responsibilities as being in fine balance.

For example, aside from the legal issues, ethics demands that in making a commitment to provide supplies and services there are a host of obligations that go along with that transaction–honest representation, warranty, and a commitment to deliver what was promised. For employees, it demands honoring the commitments made regarding the conditions of employment and rewarding employees appropriately for their contribution to the enterprise. For stockholders, it demands conducting the business in such a way as to avoid placing its fiduciary position and its ability to act as a going concern in avoidable danger.

For project managers, the responsibility within these ethical constraints is to honestly assess and communicate to the enterprise’s officers project performance, whether the effort will achieve the desired qualitative results within budgetary and time constraints, and, from a private industry perspective, to handle most of the issues articulated for the project manager in the section on public administration above. The customer is different in this scenario, oftentimes internal, especially when we set aside companies that serve the project management verticals in public administration. Oftentimes the issues and supporting systems are less complex because the scale is, on the whole, smaller.

There are exceptions, of course, to the issue of scaling. Some construction, shipbuilding, and energy projects approach the complexity of some public sector programs. SpaceX and other efforts are other examples. But the focus there is financial, from the perspective of the profit motive–not from the perspective of meeting the goals of some public interest involving health, safety, or welfare, and so the measures of success will be different, though the need for accountability and diligence is no less urgent. In many ways such behavior is more urgent, given that failure may result in the failure of the entire enterprise.

Yet, the basic issue is the same: are you a project manager or a project monitor? Diligence, leadership, and ethical behavior (which is essential to leadership) are the keys. Project monitoring most often results in failure, and with good reason. It is a failure of both practice and ethics.

New Directions — Fourth Generation apps, Agile, and the New Paradigm

The world is moving forward and Moore’s Law is accelerating in interesting ways on the technology side, which opens new opportunities, especially in software.  In the past I have spoken of the flexibility of Fourth Generation software, that is, software that doesn’t rely on structured hardcoding, but instead, is focused on the data to deliver information to the user in more interesting and essential ways.  I work in this area for my day job, and so using such technology has tipped over more than a few rice bowls.

The response from entrenched incumbents and those using similar technological approaches in the industry focused on “tools” capabilities has been to declare vices as virtues.  Hard-coded applications that require long-term development, built on proprietary file and data structures, are, they declare, the right way to do things.  “We provide value by independently developing IP based on customer requirements,” they declare.  It sounds very reasonable, doesn’t it?  Only one problem: you have to wait–oh–a year or two to get that chart or graph you need, to refresh that user interface, to expand functionality, and you will almost never be able to leverage the latest capabilities afforded by the doubling of computing capability every 12 to 24 months.  The industry is filled with outmoded, poorly supported, and obsolete “tools” already.  Guess it’s time for a new one.

The motivation behind such assertions, of course, is to slow things down.  Not possessing the underlying technology to provide more, better, and more powerful functionality to the customer quicker and more flexibly based on open systems principles, that is, dealing with data in an agnostic manner, they use their position to try to hold up disruptive entries from leaving them far behind.  This is done, especially in the bureaucratic complexities of A&D and DoD project management, through professional organizations that are used as thinly disguised lobbying opportunities by software suppliers such as the NDIA, or by appeals to contracting rules that they hope will undermine the introduction of new technologies.

All of these efforts, of course, are blowing into the wind.  The economics of the new technologies is too compelling for anyone to last long in their job by partying like it’s still 1997 under the first wave of software solutions targeted at data silos and stove-piped specialization.

The new paradigm is built on Agile and those technologies that facilitate that approach.  In case my regular readers think that I have become one of the Cultists, bowing before the Manifesto That May Not Be Named, let me assure you that is not the case.  The best articulation of Agile that I have read recently comes from Neil Killick, with whom I have expressed some disagreement on the #NoEstimates debate and the more cultish aspects of Agile in past posts, but who published an excellent post back in July entitled “12 questions to find out: Are you doing Agile Software Development?”

Here are Neil’s questions:

  1. Do you want to do Agile Software Development? Yes – go to 2. No – GOODBYE.
  2. Is your team regularly reflecting on how to improve? Yes – go to 3. No – regularly meet with your team to reflect on how to improve, go to 2.
  3. Can you deliver shippable software frequently, at least every 2 weeks? Yes – go to 4. No – remove impediments to delivering a shippable increment every 2 weeks, go to 3.
  4. Do you work daily with your customer? Yes – go to 5. No – start working daily with your customer, go to 4.
  5. Do you consistently satisfy your customer? Yes – go to 6. No – find out why your customer isn’t happy, fix it, go to 5.
  6. Do you feel motivated? Yes – go to 7. No – work for someone who trusts and supports you, go to 2.
  7. Do you talk with your team and stakeholders every day? Yes – go to 8. No – start talking with your team and stakeholders every day, go to 7.
  8. Do you primarily measure progress with working software? Yes – go to 9. No – start measuring progress with working software, go to 8.
  9. Can you maintain pace of development indefinitely? Yes – go to 10. No – take on fewer things in next iteration, go to 9.
  10. Are you paying continuous attention to technical excellence and good design? Yes – go to 11. No – start paying continuous attention to technical excellent and good design, go to 10.
  11. Are you keeping things simple and maximising the amount of work not done? Yes – go to 12. No – start keeping things simple and writing as little code as possible to satisfy the customer, go to 11.
  12. Is your team self-organising? Yes – YOU’RE DOING AGILE SOFTWARE DEVELOPMENT!! No – don’t assign tasks to people and let the team figure out together how best to satisfy the customer, go to 12.

Note that even in software development based on Agile you are still “provid(ing) value by independently developing IP based on customer requirements.”  Only you are doing it faster and more effectively.

Now imagine a software technology that is agnostic to the source of data, that does not require a staff of data scientists, development personnel, and SMEs to care for and feed it; that allows multiple solutions to be released from the same technology; that allows for integration and cross-data convergence to gain new insights based on Knowledge Discovery in Databases (KDD) principles; and that provides shippable, incremental solutions every two weeks or as often as can be absorbed by the organization, but responsively enough to meet multiple needs of the organization at any one time.

This is what is known as disruptive value.  There is no stopping this train.  It is the new paradigm and it’s time to take advantage of the powerful improvements in productivity, organizational effectiveness, and predictive capabilities that it provides.  This is the power of technology combined with a new approach to “small” big data, or structured data, that is effectively normalized and rationalized to the point of breaking down proprietary barriers, hewing to the true meaning of making data–and therefore information–both open and accessible.

Furthermore, such solutions using the same data streams produced by the measurement of work can also be used to evaluate organizational and systems compliance (where necessary), and effectiveness.  Combined with an effective feedback mechanism, data and technology drive organizational improvement and change.  There is no need for another tool to layer with the multiplicity of others, with its attendant specialized training, maintenance, and dead-end proprietary idiosyncrasies.  On the contrary, such an approach is an impediment to data maximization and value.

Vices are still vices even in new clothing.  Time to come to the side of the virtues.

Do You Believe in Magic? — Big Data, Buzz Phrases, and Keeping Feet Planted Firmly on the Ground

My alternative title for this post was “Money for Nothing,” which is along the same lines.  I have been engaged in discussions regarding Big Data, which has become a bit of a buzz phrase of late in both business and government.  Under the current drive to maximize the value of existing data, every data source, stream, lake, and repository (and the list goes on) has been subsumed by this concept.  So, at the risk of being a killjoy, let me point out that not all large collections of data are “Big Data.”  Furthermore, once a category of data gets tagged as Big Data, the further one seems to depart from the world of reality in determining how to approach and use it.  So for those of you who find yourselves in this situation, let’s take a collective deep breath and engage our critical thinking skills.

So what exactly is Big Data?  Quite simply, as noted in this Forbes article by Gil Press, the term is a relative one; drawing on a McKinsey study, it generally means “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.”  This subjective definition is a purposeful one, since Moore’s Law tends to change what is viewed as simply digital data as opposed to big data.  I would add some characteristics to assist in defining the term based on present challenges.  Big data at first approach tends to be unstructured, variable in format, and does not adhere to a schema.  Thus, not only is size a criterion for the definition, but so is the chaotic nature of the data that makes it hard to approach.  For once we find a standard means of normalizing, rationalizing, or converting digital data, it is no longer beyond the ability of standard database tools to use it effectively.  Furthermore, the very process of taming it thereby renders it non-big data, or, for an exceedingly large dataset, perhaps “small big data.”

Thus, having defined our terms and the attributes of the challenge we are engaging, we now can eliminate many of the suppositions that are floating around in organizations.  For example, there is a meme that I have come upon that asserts that disparate application file data can simply be broken down into its elements and placed into database tables for easy access by analytical solutions to derive useful metrics.  This is true in some ways but both wrong and dangerous in its apparent simplicity.  For there are many steps missing in this process.

Let’s take the least complex case as an example: the use of structured data submitted as proprietary files.  On its surface this is an easy challenge to solve.  Once someone begins breaking the data into its constituent parts, however, greater complexity is found, since the indexing inherent to the data’s interrelationships and structures is necessary for its effective use.  Furthermore, there will be corruption and non-standard use of user-defined and custom fields, especially in data that has not undergone domain scrutiny.  The originating third-party software is pre-wired to extract this data properly.  But setting aside the burden of having to use and learn multiple proprietary applications, with their concomitant idiosyncrasies, issues of sustainability, and overhead, such a multivariate approach defeats the goal of establishing a data repository in the first place: it keeps the data in silos and prevents integration.  The indexing across, say, financial systems and planning systems is different.  So how do we solve this issue?
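
To make the indexing problem concrete, here is a minimal sketch in Python (using entirely hypothetical data and field names): two flat tables describing the same work, one keyed the way a planning tool keeps it and one the way a financial system keeps it.  A naive merge finds nothing until the crosswalk that the proprietary tools maintained internally is reconstructed.

```python
# Minimal sketch (hypothetical data) of why "just dump it into tables" fails:
# two systems index the same work differently, so a naive join finds nothing.
import pandas as pd

# Planning system keys work by WBS element
schedule = pd.DataFrame({
    "wbs_id": ["1.1.1", "1.1.2"],
    "task": ["Design pump housing", "Test pump housing"],
    "finish": ["2016-03-01", "2016-05-15"],
})

# Financial system keys the same work by charge number
actuals = pd.DataFrame({
    "charge_no": ["CN-4471", "CN-4472"],
    "cost": [125000.0, 48000.0],
})

# A naive merge on the raw keys yields an empty result ...
print(pd.merge(schedule, actuals, left_on="wbs_id", right_on="charge_no", how="inner"))

# ... until a crosswalk (the indexing the proprietary tools kept internally)
# is reconstructed and applied.
crosswalk = pd.DataFrame({
    "wbs_id": ["1.1.1", "1.1.2"],
    "charge_no": ["CN-4471", "CN-4472"],
})
integrated = schedule.merge(crosswalk, on="wbs_id").merge(actuals, on="charge_no")
print(integrated)
```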

In approaching big data, or small big data, or datasets from disparate sources, the core concept in realizing return on investment and finding new insights is known as Knowledge Discovery in Databases, or KDD.  This was all the rage about 20 years ago, but its tenets are solid and proven and have evolved with advances in technology.  Back then, the means of extracting knowledge from existing databases was data mining.

The necessary first step in the data mining approach is pre-processing of the data.  That is, once you get the data into tables it is all flat.  Every piece of data is the same–it is all noise.  We must add significance and structure to that data.  Keep in mind that we live in this universe, so there is a cost to every effort, known as entropy.  Computing is as close as you’ll get to defeating entropy, but only because it has shifted the burden somewhere else.  For large datasets that burden is pushed to pre-processing, either manual or automated.  In the brute force world of data mining, we hire data scientists to pre-process the data, find commonalities, and index it.  So let’s review this “automated” process: we take a lot of data and then add a labor-intensive manual effort to it in order to derive knowledge.  Hmmm.  There may be ROI there, or there may not be.
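
As a rough illustration of what pre-processing adds, and not a description of any particular tool’s method, the sketch below takes flat, undifferentiated text (hypothetical fields) and imposes the three things described above: a label, a type, and an index that carries significance.

```python
# Toy illustration of pre-processing: flat "noise" becomes labeled, typed,
# and indexed data. Field names and values are invented for the example.
import pandas as pd

raw = pd.DataFrame({            # everything arrives as undifferentiated text
    "field_1": ["1.1.1", "1.1.2", "1.2.1"],
    "field_2": ["2016-03-01", "2016-05-15", "bad date"],
    "field_3": ["125000", "48000", "n/a"],
})

processed = pd.DataFrame({
    "wbs_id": raw["field_1"],                                   # label the keys
    "finish": pd.to_datetime(raw["field_2"], errors="coerce"),  # impose a type
    "cost":   pd.to_numeric(raw["field_3"], errors="coerce"),   # flag noise as NaT/NaN
}).set_index("wbs_id")                                          # restore an index with meaning

print(processed)
```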

But twenty years is a long time and we do have alternatives, especially in using Fourth Generation software that is focused on data usage without the limitations of hard-coded “tools.”  These alternatives apply when using data in existing databases, even disparate databases, or file data structured under a schema with well-defined data exchange instructions that allow for a consistent manner of posting that data to database tables.  The approach in this case is to use APIs.  An API, like OLE DB or the older ODBC, can be used to read and leverage the relative indexing of the data.  It will still require some code to point it to the right place and to “tell” the solution how to use and structure the data, and its interrelationship to everything else.  But at least we have a means of reducing the cost associated with pre-processing.  Note that we are, in effect, still pre-processing data.  We just let the CPU do the grunt work for us, oftentimes very quickly, while retaining control over the decisions of relative significance.
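
A hedged sketch of that API route follows, using the pyodbc package to reach an ODBC source.  The DSN, credentials, table, and column names are all invented for illustration; the point is that the hand-written portion shrinks to a small mapping that assigns significance, while the driver does the grunt work of reading the relationally indexed data.

```python
# Sketch of reading already-indexed data through ODBC (via pyodbc).
# DSN, credentials, table, and column names are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=ProjectRepository;UID=analyst;PWD=secret")
cursor = conn.cursor()

# The mapping is the only hand-written piece: it assigns significance to
# columns the source system already keeps indexed. Its order matches the
# SELECT list below.
FIELD_MAP = {"TASK_UID": "task_id", "BL_FINISH": "baseline_finish", "ACWP": "actual_cost"}

cursor.execute("SELECT TASK_UID, BL_FINISH, ACWP FROM TASK_TABLE")
rows = [dict(zip(FIELD_MAP.values(), row)) for row in cursor.fetchall()]

conn.close()
print(rows[:5])
```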

So now let’s take the meme that I described above and add greater complexity to it.  You have all kinds of data coming into the stream in all kinds of formats, including specialized XML, open data, black-boxed data, and closed proprietary files.  This data is unstructured.  It is then processed and “dumped” into a non-relational (NoSQL) database.  How do we approach this data?  The answer has been to return to a hybrid of pre-processing, data mining, and the use of APIs.  But note that there is no silver bullet here.  These efforts are long-term and extremely labor intensive at this point.  There is no magic.  I have heard time and again from decision makers the question: “why can’t we just dump the data into a database to solve all our problems?”  No, you can’t, unless you’re ready for a significant programmatic investment in data scientists, database engineers, and other IT personnel.  And in the end, what they deploy, when it finally gets deployed, may very well be obsolete and have wasted a good deal of money.
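
To show what that hybrid looks like in miniature (the document shapes and field aliases are invented), the sketch below normalizes what it can by rule and parks the rest for human review, which is where the labor-intensive part, and the cost, still lives.

```python
# Sketch of the hybrid approach: automated normalization where a rule exists,
# with the leftovers set aside for data-scientist review. Shapes are invented.
documents = [
    {"TaskID": "A100", "Finish": "2016-03-01", "Cost": "125000"},
    {"task_uid": "A101", "finish_date": "15-May-2016", "acwp": 48000},
    {"blob": "<proprietary payload>"},            # black-boxed; no rule applies
]

RULES = {  # field aliases discovered during pre-processing
    "task_id": ["TaskID", "task_uid"],
    "finish": ["Finish", "finish_date"],
    "cost": ["Cost", "acwp"],
}

normalized, needs_review = [], []
for doc in documents:
    record = {target: next((doc[a] for a in aliases if a in doc), None)
              for target, aliases in RULES.items()}
    if record["task_id"] is None:
        needs_review.append(doc)      # the labor-intensive residue: no magic here
    else:
        normalized.append(record)     # still mixed formats; more work remains

print(normalized)
print(needs_review)
```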

So, once again, what are the proper alternatives?  In my experience we need to get back to first principles.  Each business and industry has commonalities that transcend proprietary software limitations by virtue of the professions and disciplines that comprise them.  Thus, it is domain expertise specific to the business that drives the solution.  For example, in program and project management (you knew I was going to come back there) a schedule is a schedule, EVM is EVM, financial management is financial management.

Software manufacturers will, apart from issues regarding relative ease of use, scalability, flexibility, and functionality, attempt to defend their space by establishing proprietary lexicons and data structures.  Not being open, while it does not serve the needs of customers, helps incumbents avoid disruption from new entrants.  But there often comes a time when it is apparent that these proprietary definitions are only euphemisms for a well-understood concept in a discipline or profession.  Cat = Feline.  Dog = Canine.

For a cohesive and well-defined industry, the solution is to make all data within particular domains open.  This is accomplished through the acceptance and establishment of a standard schema.  For less cohesive industries, where the incumbents have nonetheless created a de facto schema through the use of common principles, APIs are the way to extract this data for use in analytics.  This approach has been applied on a broader basis for the incorporation of machine data and signatures in social networks.  For closed or black-boxed data, the business or industry will need to execute a gap analysis in order to decide whether database access to such legacy data is truly essential to its business, or whether specifying a more open standard from “time-now” forward will eventually work the suboptimization out of the data.
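
One way a domain can operationalize an open standard, offered here as a sketch rather than a mandated approach, is to publish a schema and validate submissions against it.  The schema and record below are invented; the jsonschema package is simply a common choice for this kind of check.

```python
# Sketch of an open-data check: validate a submission against a published
# domain schema. Schema and record are hypothetical.
from jsonschema import validate, ValidationError

SCHEDULE_SCHEMA = {
    "type": "object",
    "required": ["task_id", "finish", "cost"],
    "properties": {
        "task_id": {"type": "string"},
        "finish": {"type": "string", "format": "date"},
        "cost": {"type": "number", "minimum": 0},
    },
}

record = {"task_id": "A100", "finish": "2016-03-01", "cost": 125000}

try:
    validate(instance=record, schema=SCHEDULE_SCHEMA)
    print("conforms to the domain schema")
except ValidationError as err:
    print(f"rejected: {err.message}")
```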

Most important of all, in the end our results must provide metrics and visualizations that can be understood and that are valid, important, material, and correct.

Measure for Measure — Must Read: Dave Gordon Is Looking for Utilitarian Metrics at AITS.org

Dave Gordon at his AITS.org blog deals with the issue of metrics and what makes them utilitarian, that is, “actionable.”  Furthermore, at his Practicing IT Project Management blog he challenges those in the IT program management community to share real life examples.  The issue of measures and whether they pass the “so-what?” test is an important one, since chasing the wrong ones, and drawing improper conclusions from them, is a waste of money and effort at best, and can lead one to make very bad business decisions at worst.

In line with Dave’s challenge, listed below are the types of metrics (or measures) that I often come across.

1.  Measures of performance.  This type of metric is characterized by actual performance against a goal for a physical or functional attribute of the system being developed.  It can be measured across time as one of the axes, but the ultimate benchmark for what is being measured is the requirement or goal.  Technical performance measurements often fall into this category, though I have seen instances where TPMs are listed in their own category.  I would argue that such separation is artificial.

2.  Measures of progress.  This type of metric is often time-based, oftentimes measured against a schedule or plan.  Measurements of schedule variance in terms of time, or of expenditure rates against a budget, often fall into this category (a simple earned value sketch follows this list).

3.  Measures of compliance.  This type of metric measures systemic conditions that must be met; failure to meet them indicates a fatal error in the integrity of the system.

4.  Measures of effectiveness.  This type of metric tracks against those measures related to the operational objectives of the project, usually specified under particular conditions.

5.  Measures of risk.  This type of metric measures quantitatively the effects of qualitative, systemic, and inherent risk.  Oftentimes qualitative and quantitative risk are separated according to the means of identification and whether that means is recorded directly or indirectly.  But, in reality, they are measuring different aspects and causes of the same phenomenon.

6.  Measures of health.  This type of metric measures the relative health of a system against a set of criteria.  In medicine there is a set of routine measures for biological subjects.  Measures of health distinguish themselves from measures of compliance in that any variation, while indicative of a possible problem, is not necessarily fatal.  Thus, a range of acceptable indicators, or even some variation within the indicators, can be acceptable.  So while these measures may point to a system issue, borderline areas may warrant additional investigation.
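
Several of these measures, particularly the measures of progress in item 2, reduce to simple arithmetic on the same underlying data.  As a grounding example, with purely invented numbers, here is the standard earned value calculation of schedule and cost variance and the associated performance indices:

```python
# Minimal earned value sketch of a "measure of progress" (illustrative numbers).
planned_value = 100_000.0   # BCWS: budgeted cost of work scheduled to date
earned_value  =  85_000.0   # BCWP: budgeted cost of work actually performed
actual_cost   =  92_000.0   # ACWP: actual cost of work performed

schedule_variance = earned_value - planned_value    # negative: behind schedule
cost_variance     = earned_value - actual_cost      # negative: over cost
spi = earned_value / planned_value                  # schedule performance index
cpi = earned_value / actual_cost                    # cost performance index

print(f"SV = {schedule_variance:,.0f}  CV = {cost_variance:,.0f}  "
      f"SPI = {spi:.2f}  CPI = {cpi:.2f}")
```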

In any project management system, there are often correct and incorrect ways of constructing these measures.  The basis for determining whether they are correct, I think, is whether the end-result metric possesses materiality and traceability to a particular tangible state or criterion.  According to Dave and others, a test of a good metric is whether it is “actionable.”  This is certainly a desirable characteristic, but I would suggest it is not a necessary one, since it is contained within materiality and traceability.

For example, some metrics are simply indicators, which suggest further investigation; others suggest an action when viewed in combination with others.  There is no doubt that the universe of “qualitative” measures is shrinking as we gain access to bigger and better data that provide us with quantification.  Furthermore, as stochastic and other mathematical tools develop, we will have access to more sophisticated means of measurement.  But for the present there will continue to be some of these non-quantifiable measures, if only because, with experience, we learn that there are dimensions in measuring the behavior of complex adaptive systems over time that are yet to be fully understood, much less measured.

I also do not mean for this to be an exhaustive list.  Others that have some overlap to what I’ve listed come to mind, such as measures of efficiency (different than effectiveness and performance in some subtle ways), measures of credibility or fidelity (which has some overlap with measures of compliance and health, but really points to a measurement of measures), and measures of learning or adaptation, among others.

Over at AITS.org — Black Swans: Conquering IT Project Failure & Acquisition Management

It’s been out for a few days but I failed to mention the latest article at AITS.org.

In my last post on the Blogging Alliance I discussed information theory, the physics behind software development, the economics of new technology, and the intrinsic obsolescence that exists as a result. Dave Gordon in his regular blog described this work as laying “the groundwork for a generalized theory of managing software development and acquisition.” Dave has a habit of inspiring further thought, and his observation has helped me focus on where my inquiries are headed…

To read more please click here.

Forget Domani — The Inevitability of Software Transitioning and How to Facilitate the Transition

The old Perry Como* chestnut takes its title from the Italian word for “tomorrow” and is the Italian way of repeating–in a more romantic manner–Keynes’s dictum that “in the long run we are all dead.”  Whenever I hear polemicists talk about the long run, or invoke the interests of their grandchildren trumping immediate concerns and decisions, I always brace myself for the Paleolithic nonsense that is to follow.  While given a gloss of plausibility, such opinions are, at worst, simply fabrications to hide self-interest, tribalism, or ideology; at best, they are based on fallacious reasoning, fear, or the effects of cognitive dissonance.

While not as important as the larger issues affecting society, we see this same type of thinking when people and industries are faced with rapid change in software.  I was reminded of this when I sat down to lunch with a colleague who was being forced to drop an established software system used in project management.  “We spent so much time and money to get it to finally work the way we want it, and now we are going to scrap it,” he complained.  Being a good friend–and knowing the individual to be thoughtful when expressing opinions–I pressed him a bit.  “But was your established system doing what it needed to do to meet your needs?”  He thought a moment.  “Well, it served our needs up to now, but it was getting very expensive to maintain and took a lot of workarounds.  Plus the regulatory requirements in our industry are changing and it can’t make the jump.”  When I pointed out that it sounded as if the decision to transition was the right one, he ended with:  “Yes, but I’m within a couple of years of retirement and I don’t need another one of these.”

Thus, within the space of one conversation were all the reasons that we usually hear as excuses for not transitioning to new software.  In markets that are dominated by a few players with aging and soon-to-be obsolete software this is a common refrain.  Any one of these rationales, put in the mouth of a senior decision-maker, will kill a deal.  Other rationales are based in a Sharks vs. Jets mentality, in which the established software user community rallies around the soon-to-be obsolete application.  This is particularly prevalent in enterprise software environments.  It is usually combined with uninformed attacks, sometimes initiated by the established market holder directly or through proxies, on the reliability, scale, and functionality of the new entries.  The typical defensive maneuver is to declare that at some undetermined date in the future–domani–an update is on the way that will match or exceed what the present applications possess.  Hidden from the non-tech savvy is the reality that the established software is written in old technology and languages, oftentimes requiring an entire rewrite that will take years.  Though possessing the same brand name, the “upgrade” will, in effect, be new, untested software written in haste to defend market share.

As a result of many years of marketing and selling various software products, certain of which were and are game-changing in their respective markets, I have compiled a list of typical objections to software transitioning and the means of addressing those concerns.  One should not take this as an easy “how-to” guide.  There is no substitute for understanding your market, understanding the needs of the customer, having the requisite technical knowledge of the regulatory and systematic requirements of the market, and possessing a concern for the livelihood of your customers that is then translated into a trusting and mutually respectful relationship.  If software is just a euphemism for making money–and there are some very successful companies that take this approach–this is not the blog post for you: you might as well be selling burgers and tacos.

1.  Sunk vs. Opportunity Costs.  This is an old one and I find it interesting that this thinking persists.  The classic comparison illustrating the fallacy of sunk cost was first brought up in a class I attended at Pepperdine University many years ago.  A friend of the professor couldn’t decide whether he should abandon the expensive TV antenna he had purchased just a year before in favor of the new-fangled cable television hookup that had just been introduced into his neighborhood.  The professor explained to his friend that the money he had spent on the antenna was irrelevant to his decision.  That money was gone–it was “sunk” into the old technology.  The relevant question was: what is the cost of not taking the best alternative now, that is, the cost of not putting a resource to its best use?  When we persist in using old technologies to address new challenges, there comes a point at which the costs associated with the old technology are no longer the most effective use of resources.  That is the point at which the change must occur.  In practical terms, if the overhead associated with the old technology is too high given the payoff, and there are gaps and workarounds in using it that sub-optimize and waste resources, then it is time to make a change.  The economics dictate it, and this can be both articulated and demonstrated using a business case (a small numeric sketch follows this list of objections).

2.  Need vs. Want.  Being a techie, I often fall into the same trap as most techies: some esoteric operation or functionality is achieved and I marvel at it.  Then when I show it to a non-techie I am puzzled when the intended market responds with a big yawn.  Within this same category are people on the customer side of the equation who are looking at the latest technologies but do not have an immediate necessity that propels the need for a transition.  This is often looked at as just “checking in” and, on the sales side, the equivalent of kicking the tires.  These opposing examples outline one of the core elements that will support a transition: in most cases businesses will buy when they have a need, as opposed to a want.  Understanding the customer’s needs–and what propels a change based on necessity, whether it be a shift in the regulatory or technological environment that changes the baseline condition–is the key to understanding how to support a transition.  This assumes, of course, that the solution one is offering meets the baseline condition to support the shift.  Value and pricing also enter into this equation.  I remember dealing with a software company a few years ago where I noted that their pricing was much too high for the intended market.  “But we offer so much more than our competition” came the refrain.  The problem, however, was that the market did not view the additional functionality as essential.  Any price multiplied by zero equals zero, regardless of how we view the value of an offering.

3.  Acts of Omission and Acts of Commission.  The need for technological transition is, once again, dictated by a need of the organization due to either internal or external factors.  In my career as a U.S. Navy officer, we were trained to make decisions and take vigorous action whenever presented with a challenge.  The dictum in this case is that an act of commission, that is, having taken diligent and timely action in response to a perceived threat, is defensible, even if someone second-guesses those decisions and is critical of them down the line, while an act of omission, ignoring a threat or allowing events to unfold on their own, is always unforgivable.  Despite the plethora of books, courses, and formal education regarding leadership, there is still a large segment of business and government that prefers to avoid risk by avoiding making decisions.  Businesses operating at optimum effectiveness perform under a sense of urgency.  Software providers, however, must remember that their sense of urgency in making a sale does not mean that the prospective customer’s sense of urgency is in alignment.  A variation of the need vs. want factor, the key component in overcoming this roadblock is understanding the business and then effectively communicating to the customer those events that are likely to occur due to non-action.  Once again, this assumes that the proposed solution actually addresses the risk associated with an act of omission.

4.  Not Invented Here.  I have dealt with this challenge in a previous blog post.  Establishing a learning organization is essential under the new paradigm of project management, in which there is more emphasis on a broader sense of integration across what were previously identified as the divisions of labor in the community.  Hand-in-hand with this challenge is the perception, often based on a lack of information, that the requirements of the organization are so unique that only a ground-up, customized solution will do, usually militating against commercial-off-the-shelf (COTS) technologies.  This often takes the form of internal IT shops building business cases to develop the system in-house directly to code, or of supporting environments in which users have filled the gaps in their systems with Excel spreadsheets of their own construction.  In one case the objection to the proposed COTS solution was based on the rationale that the users “really liked” their pivot tables.  (Repeat after me:  Excel is not a project management system, Excel is not a project management system, Excel is not a project management system.)  As we drive toward integration of more data involving millions of records, such rationales are easily countered.  This assumes, however, that the software provider possesses a solution that is both powerful and flexible, that is, one that can both handle Big Data and integrate data, not just through data conversion, normalization, and rationalization, but also through the precise use of APIs.  In this last case, we are not talking about glorified query engines against SQL tables but systems that have built-in smarts, inherited from the expertise of the developers, to properly identify and associate data so that it is transformed into information that establishes an effective project management and control environment.

5.  I Heard it Through the Grapevine.  Nothing is harder to overcome than a whisper campaign generated by competitors or their proxies.  I know of companies in which enterprise systems involving billions of dollars of project value were successfully implemented, only to have the success questioned in a meeting through the spread of disinformation, or acknowledged only in a backhanded manner.  The response to this kind of challenge is to put the decision makers in direct touch with your customers.  In addition, live demos using releasable data, or notional data that is equivalent to the customer’s work, are essential in demonstrating functionality.  Finally, the basics of software economics dictate that, for an organization to understand whether a solution is appropriate for its needs, some effort in terms of time and resources must be expended in evaluating the product.  For those offering solutions, the key to effectively communicating the value of your product, and to not falling into a trap of your competitors’ making, is to ensure that the pilot does not devolve into a trained-monkey test in which potentially unqualified individuals attempt to operate the software on their own with little or no supervision or training, and without the effective communication that would support the pilot in the same way an implementation would normally be handled.  Propose a pilot that is structured, has a time limit and a limit to scope, and in which direct labor and travel, if necessary, are reimbursed.  If everyone is professional and serious this will be a reasonable approach that ensures a transparent process for both parties.

6.  The Familiar and the Unknown.  Given the high failure rate associated with IT projects, one can understand the hesitancy of decision makers to take that step.  A bad decision in selecting a system can, and has, brought organizations to their knees.  Furthermore, studies in human behavior demonstrate that people tend to favor those things that are familiar, even in cases where a possible alternative is better, but unknown.  This is known as the mere-exposure effect.  Daniel Kahneman, in his groundbreaking book Thinking, Fast and Slow, outlines other cognitive fallacies built into our wiring.  New media and technology only magnify these effects.  The challenge, then, is for the new technological solution provider to address the issue of familiarity directly.  Toward this end, software providers must establish trust and rapport with their market, prove their expertise not just in technical matters of software and computing but also regarding the business processes and needs of the market, and establish their competency in issues affecting the market.  A proven track record of honesty, open communication, and fair dealing is also essential to overcoming this last challenge.
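
Returning to the sunk cost discussion in item 1, a toy numeric sketch (all figures invented) makes the professor’s point: the money already spent never appears in the forward-looking comparison.

```python
# Toy numeric sketch of the sunk cost point (all figures invented): the money
# already spent on the old system never enters the forward-looking comparison.
sunk_license_and_config = 400_000   # already spent; irrelevant to the decision

old_annual_overhead = 150_000       # maintenance + workarounds going forward
new_annual_cost     =  90_000       # subscription + amortized implementation

# The only relevant comparison is forward-looking cost (and capability):
annual_savings = old_annual_overhead - new_annual_cost
print(f"Keeping the old system forgoes {annual_savings:,} per year, "
      f"regardless of the {sunk_license_and_config:,} already sunk.")
```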

*I can’t mention this song without also noting that the Chairman of the Board, Frank Sinatra, recorded a great version of it, as did Mario Lanza, and that Connie Francis also made it a hit in the 1960s.  It was also the song that Katyna Ranieri made famous in the Shirley MacLaine movie The Yellow Rolls Royce.

Ch-ch Changes — Software Implementations and Organizational Process Improvement

Dave Gordon at The Practicing IT Project Manager lists a number of factors that define IT project success.  Among these is “Organizational change management efforts were sufficient to meet adoption goals.”  This is an issue that I am grappling with now on many fronts.

The initial question that comes to mind is which comes first–the need for organizational improvement or the transformation that results from the introduction of new technology?  “Why does this matter?” one may ask.  The answer is that it defines how things are perceived by those who are being affected (or victimized) by the new technology.  This will then translate into various behaviors.  (Note that I did not say that “Perception is reality.”  For the reason why, please consult the Devil’s Phraseology.)

This is important because the groundwork laid (or not laid) for the change that is to come will then translate into sub-factors (accepting Dave’s taxonomy of factors for success) that will have a large impact on the project, and on whether it is defined as a success.  In getting something done, the overriding priority is not just “Gettin’ ‘Er Done.”  The manner in which our projects, particularly in IT, are executed and the technology introduced and implemented will determine a number of major factors that contribute to overall project success.

Much has been written lately about “disruptive” change, and that can be a useful analogy when applied to new technologies that transform a market by providing something that is cheaper, better, and faster (with more functionality) than the market norm.  I am driving that type of change in my own target markets.  But that is in a competitive environment.  Judgement–and good judgement–requires that we not inflict this cultural approach on the customer.

The key, I think, is bringing back a concept and approach that seems to have been lost in the shuffle: systems analysis and engineering that works hand-in-hand with the deployment of the technological improvement.  There was a reason for asking for the technology in the first place, whether it be improved communications, improved productivity, or qualitative factors.  Going in willy-nilly with a new technology that provides unexpected benefits–even if those benefits are both useful and will improve the work process–can often be greeted with fear, sabotage, and obstruction.

When those of us who work with digital systems encounter someone challenged by the new introduction of technology or fear that “robots are taking our jobs,” our reaction is often an eye-roll, treating these individuals as modern Luddites.  But that is a dangerous stereotype.  Our industry is rife with stories of individuals who fall into this category.  Many of them are our most experienced middle managers and specialists who predate the technology being introduced.  How long does it take to develop the expertise to fill these positions?  What is the cost to the organization if their corporate knowledge and expertise is lost?  Given that they have probably experienced multiple reorganizations and technology improvements, their skepticism is probably warranted.

I am not speaking of the exception–the individual who would be opposed to any change.  Dave gives a head nod to the CHAOS report, but we also know from a variety of sources that we come upon these reactions often enough for them to be well documented.  So how do we handle them?

There are two approaches.  One is to rely upon the resources and management of the acquiring organization to properly prepare the organization for the change to come, and to handle the job of determining the expected end state of the processes, and the personnel implications that are anticipated.  Another is for the technology provider to offer this service.

From my own direct experience, what I see is a lack of systems analysis expertise designed to work hand-in-hand with the technology being introduced.  For example, systems analysis is a skill that is all but gone in government agencies and large companies, which rely more and more on outsourcing for IT support.  Oftentimes the IT services consultant has its own agenda, which conflicts with the goals of both the manager acquiring the technology and the technology provider.  Few outsourced IT services contracts anticipate that the consultant must act as an enthusiastic partner in these efforts, as opposed to a tepid (at best) willing one.  Some agencies have lately tasked the outsourced IT consultant to act as an honest broker in choosing the technology, heedless of the strategic partnering and informal relationships that will result in a conflict of interest.

Thus, technology providers must be mindful of their target markets and design solutions to meet the typical process improvement requirements of the industry.  In order to do this, the individuals involved must have a unique set of skills that combines a knowledge of the goals of the market actors, their processes, and how the technology will improve those processes.  Given this expertise, technology providers must then prepare the organizational environment to set expectations and to advance the vision of the end state–and ensure that the customer accepts that end state.  It is then up to the customer’s management, once the terms of expectations and end state have been agreed, to communicate them effectively to the personnel affected, and to do so in a way that eliminates fear and generates the enthusiasm that will ensure the change is embraced rather than resisted.