Forget Domani — The Inevitability of Software Transitioning and How to Facilitate the Transition

The old Perry Como* chestnut takes its title from the Italian word for "tomorrow" and is the Italian way of repeating–in a more romantic manner–Keynes's dictum that "in the long run we are all dead."  Whenever I hear polemicists talk about the long run, or invoke the interests of their grandchildren as trumping immediate concerns and decisions, I brace myself for the Paleolithic nonsense that is to follow.  While such appeals give opinions a gloss of plausibility, at worst they are simply fabrications to hide self-interest, tribalism, or ideology; at best they are based on fallacious reasoning, fear, or the effects of cognitive dissonance.

While not as important as the larger issues affecting society, we see this same type of thinking when people and industries are faced with rapid change in software.  I was reminded of this when I sat down to lunch with a colleague who was being forced to drop an established software system used in project management.  "We spent so much time and money to get it to finally work the way we want it, and now we are going to scrap it," he complained.  Being a good friend–and knowing the individual to be thoughtful when expressing opinions–I pressed him a bit.  "But was your established system doing what it needed to do to meet your needs?"  He thought a moment.  "Well, it served our needs up to now, but it was getting very expensive to maintain and took a lot of workarounds.  Plus the regulatory requirements in our industry are changing and it can't make the jump."  When I pointed out that it sounded as if the decision to transition was the right one, he ended with:  "Yes, but I'm within a couple of years of retirement and I don't need another one of these."

Thus, within the space of one conversation were the reasons we usually hear as excuses for not transitioning to new software.  In markets dominated by a few players with aging and soon-to-be-obsolete software, this is a common refrain.  Any one of these rationales, put in the mouth of a senior decision-maker, will kill a deal.  Other rationales are based in a Sharks vs. Jets mentality, in which the established software user community rallies around the soon-to-be-obsolete application.  This is particularly prevalent in enterprise software environments.  It is usually combined with uninformed attacks, sometimes initiated by the established market holder directly or through proxies, on the reliability, scale, and functionality of the new entries.  The typical defensive maneuver is to declare that at some undetermined date in the future–domani–an update is on the way that will match or exceed what the present applications possess.  Hidden from the non-tech-savvy is the reality that the established software is written in old technology and languages, oftentimes requiring an entire rewrite that will take years.  Though possessing the same brand name, the "upgrade" will, in effect, be new, untested software written in haste to defend market share.

As a result of many years of marketing and selling various software products, certain of which were and are game-changing in their respective markets, I have compiled a list of typical objections to software transitioning and the means of addressing these concerns.  One should not take this as an easy "how-to" guide.  There is no substitute for understanding your market, understanding the needs of the customer, having the requisite technical knowledge of the regulatory and systematic requirements of the market, and possessing a concern for the livelihood of your customers that is then translated into a trusting and mutually respectful relationship.  If software is just a euphemism for making money–and there are some very successful companies that take this approach–this is not the blog post for you: you might as well be selling burgers and tacos.

1.  Sunk vs. Opportunity Costs.  This is an old one and I find it interesting that this thinking persists.  The classic comparison for understanding the sunk-cost fallacy was first brought up in a class I attended at Pepperdine University many years ago.  A friend of the professor couldn't decide whether he should abandon the expensive TV antenna he had purchased just a year before in favor of the new-fangled cable television hookup that had just been introduced into his neighborhood.  The professor explained to his friend that the money he had spent on the antenna was irrelevant to his decision.  That money was gone–it was "sunk" into the old technology.  The relevant question was: what is the cost of not taking the best alternative now, that is, what is the cost of not putting a resource to its best use?  When we persist in using old technologies to address new challenges, there comes a point where the costs associated with the old technology are no longer the most effective use of resources.  That is the point at which the change must occur.  In practical terms, if the overhead associated with the old technology is too high given the payoff, or if the gaps and workarounds in using it sub-optimize and waste resources, then it is time to make a change.  The economics dictate it, and this can be both articulated and demonstrated using a business case (a simple worked comparison appears in the first sketch after this list).

2.  Need vs. Want.  Being a techie, I often fall into the same trap as most techies: some esoteric operation or functionality is achieved and I marvel at it.  Then when I show it to a non-techie I am puzzled when the intended market responds with a big yawn.  Within this same category are people on the customer side of the equation who are looking at the latest technologies but do not have an immediate necessity that propels the need for a transition.  This is often looked at as just "checking in" and, on the sales side, the equivalent of kicking the tires.  These opposing examples outline one of the core elements that will support a transition: in most cases businesses will buy when they have a need, as opposed to a want.  Understanding the customer's needs–and what propels a change based on necessity, whether it be a shift in the regulatory or technological environment that changes the baseline condition–is the key to understanding how to support a transition.  This assumes, of course, that the solution one is offering meets the baseline condition to support the shift.  Value and pricing also enter into this equation.  I remember dealing with a software company a few years ago where I noted that their pricing was much too high for the intended market.  "But we offer so much more than our competition" came the refrain.  The problem, however, was that the market did not view the additional functionality as essential.  Any price multiplied by zero equals zero, regardless of how we view the value of an offering.

3.  Acts of Omission and Acts of Commission.  The need for technological transition is, once again, dictated by a need of the organization due to either internal or external factors.  During my career as a U.S. Navy officer we were trained to make decisions and take vigorous action whenever presented with a challenge.  The dictum in this case is that an act of commission–having taken diligent and timely action against a perceived threat–is defensible, even if someone second-guesses those decisions and is critical of them down the line, but an act of omission–ignoring a threat or allowing events to unfold on their own–is always unforgivable.  Despite the plethora of books, courses, and formal education regarding leadership, there is still a large segment of business and government that prefers to avoid risk by avoiding making decisions.  Businesses operating at optimum effectiveness perform under a sense of urgency.  Software providers, however, must remember that their sense of urgency in making a sale does not mean that the prospective customer's sense of urgency is in alignment.  This is a variation of the need vs. want factor: understanding the business, and then effectively communicating to the customer those events that are likely to occur due to non-action, is the key component in overcoming this roadblock.  Once again, this assumes that the proposed solution actually addresses the risk associated with an act of omission.

4.  Not Invented Here.  I have dealt with this challenge in a previous blog post.  Establishing a learning organization is essential under the new paradigm of project management, in which there is more emphasis on a broader sense of integration across what were previously identified as the divisions of labor in the community.  Hand-in-hand with this challenge is the perception, often based on a lack of information, that the requirements of the organization are so unique that only a ground-up, customized solution will do, usually militating against commercial-off-the-shelf (COTS) technologies.  This often takes the form of internal IT shops building business cases to develop the system in-house directly to code, or of supporting environments in which users have filled the gaps in their systems with home-grown Excel spreadsheets.  In one case the objection to the proposed COTS solution was based on the rationale that the users "really liked" their pivot tables.  (Repeat after me:  Excel is not a project management system, Excel is not a project management system, Excel is not a project management system.)  As we drive toward integration of more data involving millions of records, such rationales are easily countered.  This assumes, however, that the software provider possesses a solution that is both powerful and flexible, that is, one that can both handle Big Data and integrate data, not just through data conversion, normalization, and rationalization, but also through the precise use of APIs.  In this last case, we are not talking about glorified query engines run against SQL tables but about systems that have built-in smarts, inherited from the expertise of the developers, to properly identify and associate data so that it is transformed into information that establishes an effective project management and control environment (the second sketch after this list illustrates what such association looks like in its simplest form).

5.  I Heard it Through the Grapevine.  Nothing is harder to overcome than a whisper campaign generated by competitors or their proxies.  I know of companies in which enterprise systems involving billions of dollars of project value were successfully implemented, only to have the success questioned in a meeting through the spread of disinformation, or acknowledged only in a backhanded manner.  The response to this kind of challenge is to put the decision makers in direct touch with your customers.  In addition, live demos that demonstrate functionality using releasable data, or notional data equivalent to the customer's work, are essential.  Finally, the basics of software economics dictate that for an organization to understand whether a solution is appropriate for its needs, some effort in terms of time and resources must be expended in evaluating the product.  For those offering solutions, the key to effectively communicating the value of your product, and to not falling into a trap of your competitors' making, is to ensure that the pilot does not devolve into a "trained monkey" test in which potentially unqualified individuals attempt to operate the software on their own with little or no supervision, training, or the effective communication that would normally support an implementation.  Propose a pilot that is structured, has a time limit and a limit to scope, and in which direct labor and travel, if necessary, are reimbursed.  If everyone is professional and serious this will be a reasonable approach that ensures a transparent process for both parties.

6.  The Familiar and the Unknown.  Given the high failure rate associated with IT projects, one can understand the hesitancy of decision makers to take that step.  A bad decision in selecting a system can, and has, brought organizations to their knees.  Furthermore, studies in human behavior demonstrate that people tend to favor those things that are familiar, even in cases where a possible alternative is better, but unknown.  This is known as the mere-exposure effect.  Daniel Kahneman, in his groundbreaking book Thinking, Fast and Slow, outlines other cognitive biases built into our wiring.  New media and technology only magnify these effects.  The challenge, then, is for the new technological solution provider to address the issue of familiarity directly.  Toward this end, software providers must establish trust and rapport with their market, prove their expertise not just in the technical matters of software and computing but also regarding the business processes and needs of the market, and establish their competency in the issues affecting that market.  A proven track record of honesty, open communication, and fair dealing is also essential to overcoming this last challenge.
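To make the sunk-cost point in item 1 concrete, here is a minimal sketch in Python, using entirely hypothetical figures, of how a forward-looking business case works: the money already spent on the legacy system never appears in the comparison, only the costs still ahead of the organization do.

```python
# Sunk cost vs. opportunity cost, with made-up numbers for illustration only.
# What was already spent on the legacy system is irrelevant; only forward-looking
# costs over the planning horizon enter the decision.

def five_year_cost(annual_license, annual_workarounds, one_time_migration=0.0, years=5):
    """Total forward-looking cost of ownership over the planning horizon."""
    return one_time_migration + years * (annual_license + annual_workarounds)

# Hypothetical figures.
keep_legacy = five_year_cost(annual_license=120_000, annual_workarounds=80_000)
transition  = five_year_cost(annual_license=90_000, annual_workarounds=10_000,
                             one_time_migration=150_000)

print(f"Keep legacy (5 yr): ${keep_legacy:,.0f}")      # $1,000,000
print(f"Transition (5 yr):  ${transition:,.0f}")        # $650,000
print(f"Opportunity cost of standing pat: ${keep_legacy - transition:,.0f}")
```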
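And to illustrate the point in item 4 about identifying and associating data, the deliberately simplified sketch below joins cost records and schedule records that describe the same WBS elements under slightly different labels.  The field names, WBS codes, and values are invented for the example; a real implementation would involve far more validation and far larger volumes.

```python
# A toy example of associating data from two hypothetical sources by normalizing
# a common key (the WBS code) so the records can be joined into one picture.

def normalize_wbs(code: str) -> str:
    """Reduce cosmetic differences ('1.2.3', '1-2-3', ' 1.2.3 ') to one form."""
    return code.strip().replace("-", ".")

cost_tool_rows = [
    {"wbs": "1.2.3", "actual_cost": 125_000.0},
    {"wbs": "1.2.4", "actual_cost": 47_500.0},
]
schedule_tool_rows = [
    {"wbs_code": "1-2-3", "percent_complete": 0.62},
    {"wbs_code": "1-2-4", "percent_complete": 0.80},
]

schedule_by_wbs = {normalize_wbs(r["wbs_code"]): r for r in schedule_tool_rows}

# Associate cost and schedule status for each WBS element.
for row in cost_tool_rows:
    key = normalize_wbs(row["wbs"])
    sched = schedule_by_wbs.get(key, {})
    print(key, row["actual_cost"], sched.get("percent_complete"))
```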

*I can’t mention this song without also noting that the Chairman of the Board, Frank Sinatra, recorded a great version of it, as did Mario Lanza, and that Connie Francis also made it a hit in the 1960s.  It was also the song that Katyna Ranieri made famous in the Shirley MacLaine movie The Yellow Rolls-Royce.

When You’re a Jet You’re a Jet all the Way — Software as a Change Agent for Professional Development

Earlier in the week Dave Gordon at his blog responded to my post on data normalization and rightly introduced the need for data rationalization.  I had omitted that concept from my own post, but strongly implied that the two were closely aligned in my broad definition of normalization beyond the boundaries of eliminating redundancies.  In thinking about this, I have come to prefer Dave's dichotomy because it more clearly defines what we are doing.

Later in the week I found myself elaborating on these issues in discussions with customers and other professionals in the project management discipline.  In the projects in which I am involved, what I have found is that the process of normalizing and rationalizing data–even historical data, which, contrary to Dave's assertion, can be maintained, at least in my business–acts as a change agent in defining the agnostic characteristics of the type of data being normalized and rationalized.

What I mean here is that, for instance, a time-phased CPM schedule that eventually becomes an integrated master schedule has an analogue.  For years we have been told, mostly by marketing types working for software manufacturers, that each provides a secret sauce that cannot be reconciled against its competitors.  As a result, entire professional organizations, conferences, white papers, and presentations have been devoted to proving this assertion.  When one looks at the data, however, the assertion is invalid.

The key differentiator between CPM scheduling applications is the optimization engine.  That is the secret sauce and the black box where the valuable IP lies.  It is the algorithms in the optimization engine that identify for us those schedule activities that are on the critical and near-critical paths.  But when you run these engines side by side on the same schedule, their results are well within one standard deviation of one another.
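For readers who have not had to do this by hand, here is a bare-bones sketch of the forward and backward passes that underlie every CPM engine, run against a toy four-activity network.  Real engines add calendars, constraints, lags, resource logic, and their proprietary optimizations, but the total-float calculation that identifies the critical path is the common ground described above.

```python
# Toy CPM network: activity durations and predecessor logic.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start / earliest finish for each activity.
es, ef = {}, {}
remaining = set(durations)
while remaining:
    for task in sorted(remaining):
        if all(p in ef for p in preds[task]):
            es[task] = max((ef[p] for p in preds[task]), default=0)
            ef[task] = es[task] + durations[task]
            remaining.remove(task)
            break

# Backward pass: latest finish / latest start, working back from the project end.
succs = {t: [s for s in durations if t in preds[s]] for t in durations}
project_end = max(ef.values())
ls, lf = {}, {}
remaining = set(durations)
while remaining:
    for task in sorted(remaining):
        if all(s in ls for s in succs[task]):
            lf[task] = min((ls[s] for s in succs[task]), default=project_end)
            ls[task] = lf[task] - durations[task]
            remaining.remove(task)
            break

# Activities with zero total float are on the critical path.
critical_path = [t for t in sorted(durations) if ls[t] == es[t]]
print("Project duration:", project_end)   # 9
print("Critical path:", critical_path)    # ['A', 'C', 'D']
```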

Keep in mind that I’m talking about differences in data related to normalization and rationalization and whether these differences can be reconciled.  There are other differences in features between the applications that do make a difference in their use and functionality: whether they can lock down a baseline, manage multiple baselines, prevent future work from being planned and executed in the past (yes, this happens), handle hammocks, scale properly, etc.  Because of these functional differences the same data may have been given a different value in the table or the file.  As Dave Gordon rightly points out, reconciling what on the surface are irreconcilable values requires specialized knowledge.  Well, if you have that specialized knowledge then you can achieve what otherwise seems impossible.  Once you achieve this “impossible” feat, it quickly becomes apparent that the features and functions involved are based on a very limited number of values that are common across CPM scheduling applications.

This should not be surprising to those of you out there who have been doing this a long time.  Back in the 1980s we would use visual display boards to map out short segments of the schedule.  We would manually construct schedules on very rudimentary (by today's standards) mainframe computers and get very long dot-matrix printouts of the schedule to tape to the "War Room" walls.  The resources, risks, etc. had to be drawn onto the schedule.  This manual process required an understanding of CPM schedule construction similar to that of someone still using long division today.  There was actually a time when people had to memorize their log and square root tables.  It was not very efficient, but the deep understanding of the analogue schedule that it built has since been lost with the introduction of new technology.  This came to mind when I saw on LinkedIn a question about what should be asked of a master scheduler in an interview.

As a result of new technology, schedulers aligned themselves into camps based on the application they selected, or that was selected for them, in their jobs.  Over time I have seen brand loyalty turn into partisanship.  Once again, this should not be surprising.  If you have spent ten years of your career on a very popular scheduling application, anything that may undermine your investment in that choice–and the employment it makes possible–will be deemed a threat.

I first came upon this behavior years ago when I was serving as CIO for a project management organization.  Some PMOs could not share information–not even e-mail and documents–because most were using PCs and some were using Macs.  The problem was that the key PMO was using Macs.  This was before Microsoft and Apple got together and solved this for us.  Needless to say, this undermined organizational effectiveness.  My attempt to get everyone on the same page in terms of operating system compatibility sparked a significant backlash.  Luckily for me, Microsoft soon introduced its first solution to address this issue.  So, in the end, the "Macintites," as we good-naturedly called them, could use their Macs for business common to other parts of the organization.

This almost cultish behavior shows up in new places today: in the iPhone and Droid wars, in the use of Agile, and among CPM scheduling application partisans.  It is true that those of us in the software industry certainly want to see brand loyalty.  It is one of the key measures of success in proving a product's value and effectiveness.  But it need not undermine the fact that a scheduler is a key specialist in the project management discipline.  If you are a Jet, you don't need to be a Jet all the way.

Since I began creating generic analogues of schedules from submitted third-party data, I have found insights into project performance that previously were not available.  The power of digitization, along with normalization and rationalization, allows the data to be effectively integrated at the proper point of intersection with other dimensions of project performance, such as cost performance and risk.  Freed from the shackles of having to learn the specific idiosyncrasies of particular applications, the deep understanding of scheduling is being reintroduced.  This has been a long time in coming.
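As a simplified illustration of what building a generic analogue from third-party submittals involves, the sketch below maps activity records exported from two hypothetical scheduling tools, each with its own field names and date formats, into one application-agnostic record shape.  The field names and mappings are invented for the example; the point is only that once the data is in one shape, the tool that produced it no longer matters.

```python
# Normalizing activity records from two hypothetical tool exports into one
# agnostic shape so they can be compared and integrated directly.
from datetime import datetime

def from_tool_a(row: dict) -> dict:
    return {
        "activity_id": row["task_code"],
        "description": row["task_name"],
        "start": datetime.strptime(row["start_date"], "%Y-%m-%d").date(),
        "finish": datetime.strptime(row["finish_date"], "%Y-%m-%d").date(),
    }

def from_tool_b(row: dict) -> dict:
    return {
        "activity_id": row["ActivityID"],
        "description": row["ActivityName"],
        "start": datetime.strptime(row["EarlyStart"], "%d-%b-%Y").date(),
        "finish": datetime.strptime(row["EarlyFinish"], "%d-%b-%Y").date(),
    }

exports = [
    (from_tool_a, {"task_code": "A100", "task_name": "Design review",
                   "start_date": "2015-03-02", "finish_date": "2015-03-13"}),
    (from_tool_b, {"ActivityID": "A100", "ActivityName": "Design review",
                   "EarlyStart": "02-Mar-2015", "EarlyFinish": "13-Mar-2015"}),
]

normalized = [mapper(row) for mapper, row in exports]
# The same activity, regardless of which tool produced the export.
assert normalized[0] == normalized[1]
print(normalized[0])
```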

Days of Future Passed — Legacy Data and Project Parametrics

I've had a lot of discussions lately on data normalization, including being asked what constitutes normalization when dealing with legacy data, specifically in the field of project management.  A good primer can be found at About.com, and there are also very good older papers on the web from various university IS departments.  The basic principles of data normalization today consist of finding a common location in the database for each value, reducing redundancy, properly establishing relationships among the data elements, and providing flexibility so that the data can be properly retrieved and further processed into intelligence in such a way that the objects produced possess significance.
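Here is a minimal sketch of those principles applied to project data, using Python's built-in SQLite module: each value has one home, redundancy is removed, and the relationship between a project and its activities is made explicit so the data can be retrieved flexibly.  The table and column names are illustrative only, not a proposed standard.

```python
# A tiny normalized schema: the project name is stored once, and activities
# reference the project through a foreign key rather than repeating it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE project (
        project_id   INTEGER PRIMARY KEY,
        project_name TEXT NOT NULL          -- stored once, not on every activity row
    );
    CREATE TABLE activity (
        activity_id   INTEGER PRIMARY KEY,
        project_id    INTEGER NOT NULL REFERENCES project(project_id),
        description   TEXT NOT NULL,
        duration_days INTEGER NOT NULL
    );
""")
conn.execute("INSERT INTO project VALUES (1, 'Radar Modernization')")
conn.executemany("INSERT INTO activity VALUES (?, ?, ?, ?)",
                 [(10, 1, "Requirements review", 5),
                  (11, 1, "Preliminary design", 20)])

# Because the relationship is explicit, the data can be retrieved flexibly.
for row in conn.execute("""SELECT p.project_name, a.description, a.duration_days
                           FROM activity a JOIN project p USING (project_id)"""):
    print(row)
```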

The reason answering this question is so important is that our legacy data is of such a size and complexity that it falls into the broad category of Big Data.  The condition of the data itself varies widely in quality and completeness.  Without understanding the context, interrelationships, and significance of the elements of the data, the empirical approach to project management is threatened, since our ability to use this data for establishing trends and performing parametric analysis is limited.

A good paper that deals with this issue was authored by Alleman and Coonce, though it was limited to Earned Value Management (EVM).  I would argue that EVM, especially in the types of industries in which the discipline is used, is pretty well structured already.  The challenge is in the other areas that are probably of more significance in getting a fuller understanding of what is happening in the project: schedule, risk, and technical performance measures.

In looking at the Big Data that has been normalized to date–and I have participated with others in putting a significant dent in this area–it is apparent that processes in these other areas lack discipline, consistency, completeness, and veracity.  By normalizing data in sub-specialties that have experienced an erosion in enforcing standards of quality and consistency, technology becomes a driver for process improvement.

A greybeard in IT project management once said to me (and I am not long in joining that category): "Data is like water; the more it flows downstream, the cleaner it becomes."  What he meant is that the more data is exposed in the organizational stream, the more it is questioned and becomes part of our closed feedback loop: constantly being queried, verified, utilized in decision making, and validated against reality.  Over time, more sophisticated and reliable statistical methods can be applied to the data–especially if we are talking about performance data of one sort or another–methods that take periodic volatility into account in trending and provide us with a means of ensuring credibility in using the data.
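One simple, illustrative way of letting trending take periodic volatility into account is to smooth a noisy monthly performance index with an exponentially weighted moving average, so that a single unusual reporting period does not dominate the trend.  The index values below are made up, and a real analysis would use more robust methods; this is only a sketch of the idea.

```python
# Smoothing a noisy monthly performance index so one volatile period
# does not dominate the trend.
def ewma(values, alpha=0.3):
    """Exponentially weighted moving average; a higher alpha reacts faster."""
    smoothed = []
    current = values[0]
    for v in values:
        current = alpha * v + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

monthly_cpi = [1.02, 0.97, 1.05, 0.88, 1.01, 0.99, 0.94, 0.96]  # hypothetical
for month, (raw, trend) in enumerate(zip(monthly_cpi, ewma(monthly_cpi)), start=1):
    print(f"month {month}: reported {raw:.2f}, smoothed trend {trend:.2f}")
```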

In my last post on Four Trends in Project Management, I posited that the question wasn't more or less data but utilization of data in a more effective manner, and identifying what is significant and therefore "better" data.  I recently heard this line repeated back to me as a means of arguing against providing data.  That conclusion is a misreading of what I was proposing.  In today's environment, reporting data at one level of a project hierarchy is no more work than reporting at any other level.  So cost is no longer a valid basis for objecting to data submission (unless, of course, the party taking that position is willing to admit to deficiencies in its IT systems or the unreliability of its data).

Our projects must be measured against the framing assumptions in which they were first formed, as well as the established measures of effectiveness, measures of performance, and measures of technical achievement.  In order to view these factors one must have access to data originating from a variety of artifacts: the Integrated Master Schedule, the Schedule and Cost Risk Analysis, and the systems engineering/technical performance plan.  I would propose that project financial execution metrics are also essential in getting a complete, integrated, view of our projects.

There may be other supplemental data that is necessary as well.  For example, the NDIA Integrated Program Management Division has a proposed revision to what is known as the Integrated Baseline Review (IBR).  For the uninitiated, this is a process in which both the supplier and the government customer project teams come together, review the essential project artifacts that underlie project planning and execution, and gain a full understanding of the project baseline.  The reporting systems that identify the data to be reported against the baseline are identified and verified at this review.  But there are also artifacts submitted here that contain data relevant to the project and worthy of continuing assessment, which would preclude the need for manual assessments and reviews down the line.

We don't yet know the answer to these data issues and won't until all of the data is normalized and analyzed.  Then the wheat can be separated from the chaff and a more precise set of data identified for submittal, normalized, and placed in an analytical framework that gives us timely, more precise information, so that project stakeholders can make decisions in handling any risks that manifest themselves during the window in which they can be handled (or determine that they cannot be handled).  As the farmer says in the Chinese proverb:  "We shall see."

Brother Can You (Para)digm? — Four of the Latest Trends in Project Management

At the beginning of the year we are greeted with the annual list of hottest “project management trends” prognostications.  We are now three months into the year and I think it worthwhile to note the latest developments that have come up in project management meetings, conferences, and in the field.  Some of these are in alignment with what you may have seen in some earlier articles, but these are four that I find to be most significant thus far, and there may be a couple of surprises for you here.

a.  Agile and Waterfall continue to duke it out.  As the term Agile is adapted and modified to real-world situations, the cult purists become shriller in attempting to enforce the Manifesto that may not be named.  In all seriousness, it is not as if most of these methods had not been used previously–and many of the methods, like scrum, also have their roots in Waterfall and earlier methods.  A great on-line overview and book on the elements of scrum can be found at Agile Learning Labs.  But there is a wide body of knowledge out there concerning social and organizational behavior that is useful in applying what works and what doesn't.  For example, the observational science behind span of control, team building, the structure of the team in supporting organizational effectiveness, and the use of sprints in avoiding the perpetual death-spiral of adding requirements without defining "done" are best practices that identify successful teams (depending on how you define success–keeping in mind that a successful team that produces the product often still fails as a going concern, and thus falls into obscurity).

All that being said, if you want to structure these best practices into a cohesive methodology, call it Agile, Waterfall, or Harry, and can make money at it while helping people succeed in a healthy work environment, all power to you.  In IT, however, it is this last point that makes this particular controversy seem like we've been here before.  When woo-woo concepts like #NoEstimates and self-organization are thrown about, the very useful and empirical nature of the enterprise slides into magical thinking and ideology.  The mathematics of unsuccessful IT projects has not changed significantly since the shift to Agile.  From what one can discern from the so-called studies on the market, which are mostly anecdotal or based on unscientific surveys, somewhere north of 50% of IT projects fail, with failure defined as being behind schedule, over cost, or failing to meet functionality requirements.

Given this, Agile seems to be the latest belle of the ball, and virtually any process improvement introducing scrum, teaming, and sprints seems to get the tag.  Still, there is much blood and thunder being expended for a result that amounts to roughly the same (and probably lower) mathematical chance of success as a coin flip.  I think for the remainder of the year the more acceptable and structured portions of Agile will get the nod.

b.  Business technology is now driving process.  This trend, I think, is why process improvements like Agile, which claim to be a panacea, cannot deliver on their promises.  As best practices they can help organizations avoid a net negative, but they rarely provide a net positive.  Applying new processes and procedures while driving blind will still run you off the road.  The big story in 2015, I think, is the ability to handle big data and to integrate that data in a manner that more clearly reveals context to business stakeholders.  For years in A&D, DoD, governance, and other verticals engaged in complex, multi-year project management, we have seen the push and pull of interests regarding the amount of data that is delivered or reported.  With new technologies this is no longer an issue.  Delivering a 20GB file has virtually the same marginal cost as delivering a 10GB file.  Sizes smaller than 1GB aren't even worth talking about.

Recently I heard someone fret about the storage space required for all this immense data–it's immense, I tell you!  Well, storage is cheap, and large amounts of data can be accessed through virtual repositories using APIs and smart methods of normalizing data that require integration at the level defined by the systems' interrelationships.  There is more than one way to skin this cat, and more methods for handling bigger data come on-line every year.  Thus, the issue is not more or less data, but better data, regardless of the size of the underlying file or table structure or the amount of information.  The first go-round of this process will require that all of the data already available in repositories be surveyed to determine how to optimize the information it contains.  Then, once it is transformed into intelligence, the best manner of delivery must be determined so that it provides both significance and context to the decision maker.  For many organizations, this is the question that will be answered in 2015 and into 2016.  At that point it is the data that will dictate the systems and procedures needed to take advantage of this powerful advance in business intelligence.

c.  Cross-functional teams will soon morph into cross-functional team members.  As data originating from previously stove-piped competencies is integrated into a cohesive whole, the skillsets necessary to understand the data, convert it into intelligence, and act appropriately on that intelligence will shift toward a broader, multi-disciplinary understanding.  Businesses and organizations will soon find that they can no longer afford the specialist who only understands cost, schedule, risk, or any one aspect of the other various specialties dictated by the old line-and-staff and division-of-labor practices of the 20th century.  Businesses and organizations that place short-term, shareholder, and equity-holder interests ahead of the business will soon find themselves out of business in this new world.  The same will apply to organizations that continue to suppress and compartmentalize data.  This is because a cross-functional individual who can maximize the use of this new information paradigm requires education and development.  Achieving this goal dictates the establishment of a learning organization, which requires investment and a long-term view.  A learning organization develops its members to become competent in each aspect of the business, with development including successive assignments of greater responsibility and complexity.  For the project management community, we will increasingly see the introduction of more Business Analysts and, I think, the introduction of the competency of Project Analyst to displace–at first–both cost analyst and schedule analyst.  Other competency consolidation will soon follow.

d.  The new cross-functional competencies–Business Analysts and Project Analysts–will take on an increasing role in the design and deployment of technology solutions in the business.  This takes us full circle in our feedback loop that begins with big data driving process.  We are already seeing organizations that have implemented the new technologies and are taking advantage of new insights not only introducing new multi-disciplinary competencies, but also introducing new technologies that adapt the user environment to the needs of the business.  Once the business and project analysts have determined how to interact with the data and the systems necessary to the decision-making process that follows, adaptable technologies that reject the hard-coded "one size fits all" user interface are finding, and will continue to find, wide acceptance.  Fewer off-line and one-off utilities will be needed to fill the gaps left by inflexible, hard-coded business applications, allowing innovative approaches to analysis to be mainstreamed into the organization.  Once again, we are already seeing this effect in 2015, and the trend will only accelerate as greater technological knowledge becomes an essential element of being an analyst.

Despite dire predictions regarding innovation, it appears that we are on the cusp of another rapid shift in organizational transformation.  The new world of big data comes with both great promise and great risks.  For project management organizations, the key in taking advantage of its promise and minimizing its risks is to stay ahead of the transformation by embracing it and leading the organization into positioning itself to reap its benefits.

Over at AITS.org Dave Gordon takes me to task on data normalization — and I respond with Data Neutrality

Dave Gordon at AITS.org takes me to task over my post recommending the use of common schemas for certain project management data.  Dave's alternative is to specify common APIs instead.  I am not one to dismiss alternative methods of reconciling disparate and, in their natural state, non-normalized data to find the most elegant solution.  My initial impression, though, is: been there, done that.

Regardless of the method used to derive significance from disparate sources of data of a common type, one still must obtain the cooperation of the players involved.  The ANSI X12 standard has been in use in the transportation industry for quite some time and has worked quite well, leaving the preference for a proprietary solution up to the individual shippers.  The rule has been, however, that if you are going to write solutions for that industry, you need to ensure that the shipping information needed by any receiver conforms to a particular format so that it can be read regardless of the software involved.

Recently the U.S. Department of Defense, which had used certain ANSI X12 formats for particular data for quite some time, published and required a new set of schemas for a broader set of data under the rubric of UN/CEFACT XML.  Thus, it has established the same approach as the transportation industry: taking an agnostic stand regarding software preferences while specifying that submitted data must conform to a common schema so that one proprietary file type is not given preference over another.

A little background is useful.  In developing major systems, contractors are required to provide project performance data in order to ensure that public funds are being expended properly for the contracted effort.  This is the oversight responsibility portion of the equation.  The other side concerns project and program management.  Given the cost-plus contract type most often used, the government program management office, in cooperation with its commercial counterpart, looks to identify the manifestation of cost, schedule, and/or technical risk early enough to allow that risk to be handled as necessary.  Also at the end of this process–and only now being explored–is the usefulness of years of historical data across contract types, technologies, and suppliers, which can benefit the public interest by demonstrating which contractors perform better, showing the inherent risk associated with particular technologies through parametric methods, and yielding a host of insights through econometric project management trending and modeling.

So let's assume that we specify APIs for requesting the data in lieu of specifying that the customer receive an application-agnostic file that can be read by any application conforming to the data standard.  What is the difference?  My immediate observation is that it reverses the relationship of who owns the data.  In the case of the API, the proprietary application becomes the gatekeeper.  In the case of an agnostic file structure, the data is open to everyone and the consumer owns it.
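A small sketch of why the agnostic file leaves the consumer in control: any application or analyst can read a schema-conformant submission with a generic parser, with no vendor API acting as gatekeeper.  The element names below are invented for illustration and are not the actual UN/CEFACT tags.

```python
# Reading a (hypothetical) schema-conformant submission with a generic XML
# parser, independent of whichever application produced the file.
import xml.etree.ElementTree as ET

submission = """
<PerformanceReport>
  <ReportingPeriod end="2015-03-31"/>
  <WBSElement code="1.2.3">
    <BudgetedCost>100000</BudgetedCost>
    <ActualCost>112500</ActualCost>
  </WBSElement>
</PerformanceReport>
"""

root = ET.fromstring(submission)
period_end = root.find("ReportingPeriod").get("end")
for element in root.findall("WBSElement"):
    budgeted = float(element.findtext("BudgetedCost"))
    actual = float(element.findtext("ActualCost"))
    print(f"{period_end} WBS {element.get('code')}: "
          f"cost variance {budgeted - actual:,.0f}")
```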

In the API scenario, large players can do what they want to limit competition and extensions to their functionality.  Since they can black-box the manner in which data is structured, it also becomes increasingly difficult to make qualitative selections from the data.  The very examples that Dave uses–the plethora of one-off mobile apps–usually must exist only within their own ecosystems.

So it seems to me that the real issue isn't that Big Brother wants to control data structure.  What it comes down to is that specifying an open data structure prevents one solution provider, or a group of them, from controlling the market through restrictions on accessing data.  This encourages maximum competition and innovation in the marketplace–Data Neutrality.

I look forward to additional information from Dave on this issue.  No particular method of achieving Data Neutrality is an end in itself.  Any method that is less structured and provides more flexibility is welcome.  I'm just not sure that we're there yet with APIs.