Recently I have been engaged in an exploration and discussion regarding the utilization of large amounts of data and how applications derive importance from that data. In an on-line discussion with the ever insightful Dave Gordon, I first postulated that we need to transition into a world where certain classes of data are open so that the qualitative content can be normalized. This is what for many years was called the Integrated Digital Environment (IDE for short). Dave responded with his own post at the AITS.org blogging alliance, countering that while such standards are necessary in very specific and limited applications, modern APIs provide most of the solution. I then responded directly to Dave here, arguing that IDE is nothing more than data neutrality. Then, also at AITS.org, I expanded on what I proposed as a general approach to understanding big data, noting the dichotomy between software approaches that organize the external characteristics of the data in order to generalize systems and note trends, and those that focus on the qualitative content within the data.
It should come as no surprise, then, given these differences in approaching data, that we also find similar differences in the nature of applications on the market. With the recent advent of on-line and hosted solutions, there are literally thousands of applications in some categories of software that propose to do one thing with data, or that are focused, one-trick-pony applications meant to be mixed and matched to somehow provide an integrated solution.
There are several problems with this sudden explosion of applications of this nature.
The first is in the very nature of the explosion. This is a classic tech bubble, albeit limited to a particular segment of the software market, and it will soon burst. As soon as consumers find that all of that information traveling over the web with the most minimal of protections has been compromised by the next trophy hack, or that too many software providers have entered the market prematurely–not understanding the full needs of their targeted verticals–the bubble will burst just as the last one did in 2000. All it requires is a precipitating event that triggers a tipping point.
You don’t have to take my word for it. Just type a favorite keyword into your browser now (and I hope you’re using a VPN when you do) for a type of application for which you have a need–let’s say “knowledge base” or “software ticket systems.” What you will find is that there are literally hundreds, if not thousands, of apps built for this function. You cannot test them all. Basic information economics, however, dictates that you must invest some effort in understanding the capabilities and limitations of the systems on the market. Surely there are a couple of winners out there. But basic economics also dictates that 95% of those presently in the market will be gone in short order. Being the “best” or the “best value” does not always win in this winnowing out. Chance, the vagaries of your standing in search engine results, industry contacts–virtually any number of factors–will determine who is still standing and who is gone a year from now.
Aside from this obvious problem with the bubble itself, the approach of these application makers harkens back to an earlier generation of one-off applications that attempt to achieve integration through marketing while actually achieving, at best, only old-fashioned interfacing. In the world of project management, for example, organizations can ill afford to revert to the division of labor that would be required to align with these approaches in software design. It’s almost as if, having made their money in an earlier time, software entrepreneurs cannot extend themselves beyond their comfort zones to take advantage of the last TEN software generations that provide new, more flexible approaches to data optimization. All they can think to do is party like it’s 1995.
For the new paradigm in project management is to get beyond the traditional division of labor. Is scheduling, for example, so highly specialized a discipline–rising to the level of a profession–that it stands apart from all other aspects of project management? Of course not. Scheduling is a discipline–a sub-specialty, actually–that is inextricably linked to all other aspects of project management in a continuum. The artifacts of the process of establishing project systems and controls constitute the project itself.
No doubt there are entities and companies that still ostensibly organize themselves into specialties as they did twenty years ago: cost analysts, schedule analysts, risk management specialists, among others. But given that the information from these systems–schedule, cost management, project financial management, risk management, technical performance, and all the rest–can be integrated at the appropriate level of their interrelationships to provide a cohesive, holistic view of the complex system we call a project, is such division still necessary? In practice the industry has already moved to position itself for integration, realizing the urgency of making the shift.
For example, to utilize an application to query cost management information in 1995 was a significant achievement during the first wave of software deployment that mimicked the division of labor. In 2015, not so much. Introducing a one-trick-pony earned value management (EVM) “tool” in 2015 is laziness–an attempt to turn back the clock that ignores the obsolescence of such an approach–regardless of how slick the new user interface may be.
I recently attended a project management meeting of senior government and industry representatives. During one of my side sessions I heard a colleague propose the discipline of Project Management Analyst in lieu of the previously stove-piped specialties. His proposal is a breath of fresh air in an industry that develops and manufactures the latest aircraft and space technology, yet has hobbled itself with systems and procedures designed for an earlier era that no longer align with the needs of doing business. I believe the timely deployment of systems has suffered as a result during this period of transition.
Software must lead and accelerate the transition to the new integration paradigm.
Thus, in 2015 the choice is not between data that adheres to conventions of data neutrality and data that is accessed via APIs; the choice favors applications that do both.
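To make that concrete, here is a minimal sketch of what “doing both” could look like. Everything in it–the JSON schema, the field names, the endpoint–is hypothetical and offered only for illustration: the same project records are written to a neutral, open format that any conforming tool can read, and served live through a simple API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical neutral records: a published, open JSON layout that any
# conforming tool could ingest directly, with no vendor library required.
PROJECT_RECORDS = [
    {"task_id": "1.1", "name": "Design review", "bcws": 120000, "bcwp": 95000},
    {"task_id": "1.2", "name": "Integration test", "bcws": 80000, "bcwp": 81000},
]

def export_neutral(path="project_data.json"):
    """Data neutrality: persist the records in an open, documented format."""
    with open(path, "w") as f:
        json.dump(PROJECT_RECORDS, f, indent=2)

class ProjectAPI(BaseHTTPRequestHandler):
    """API access: expose the very same records over HTTP for live consumers."""
    def do_GET(self):
        if self.path == "/tasks":
            body = json.dumps(PROJECT_RECORDS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    export_neutral()  # the neutral file, for any tool that wants it
    HTTPServer(("", 8080), ProjectAPI).serve_forever()  # the live API
```

The point is not the handful of lines of Python; it is that nothing about serving an API precludes keeping the underlying data neutral, and vice versa.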
Nor is the choice between different hard-coded applications that provide the old “what-you-see-is-what-you-get” approach. It is instead between such limited, hard-coded applications and those that provide flexibility, so that business managers can choose from a nearly unlimited palette of options for how, and which, data–converted into information–is made available to users or classes of users based on their role and need to know, aggregated at the appropriate level of detail for the consumer to derive significance from the information presented.
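As a sketch of what that flexibility might mean in practice–the roles, field names, and rollup levels below are invented for illustration–the same underlying records can be surfaced at different levels of aggregation depending on who is asking:

```python
from collections import defaultdict

# Hypothetical task records keyed to a work breakdown structure (WBS)
# and to control accounts; the figures are arbitrary.
TASKS = [
    {"wbs": "1.1", "control_account": "CA-1", "bcws": 120, "bcwp": 95},
    {"wbs": "1.2", "control_account": "CA-1", "bcws": 80,  "bcwp": 81},
    {"wbs": "2.1", "control_account": "CA-2", "bcws": 200, "bcwp": 210},
]

def view_for(role):
    """Return the data at the level of detail appropriate to the role."""
    if role == "cost_analyst":
        return TASKS  # full task-level detail
    if role == "program_manager":
        # Summary view: roll tasks up to their control accounts.
        rollup = defaultdict(lambda: {"bcws": 0, "bcwp": 0})
        for task in TASKS:
            account = rollup[task["control_account"]]
            account["bcws"] += task["bcws"]
            account["bcwp"] += task["bcwp"]
        return [{"control_account": ca, **totals}
                for ca, totals in sorted(rollup.items())]
    raise PermissionError(f"no view defined for role {role!r}")

print(view_for("program_manager"))
```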
Nor is it between “best-of-breed” and “mix-and-match” solutions that leverage interfaces to achieve integration. It is instead between such solution “consortiums”–which drive up implementation and sustainment costs and bring high overhead with them–and those that achieve integration by leveraging the source of the data itself, reducing the number of applications that need to be managed and allowing data to be enriched in an open, flexible environment, achieving its transformation into useful information.
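The overhead difference is easy to quantify in the abstract: if integration is achieved by pairwise interfaces, the number of interfaces to build and sustain grows with the square of the number of applications, while integration at the data source needs only one connector per application. A toy calculation–nothing more–makes the point:

```python
def interfaces_needed(n_apps):
    """Point-to-point "consortium": every pair of applications gets its
    own interface to build, test, and sustain: n * (n - 1) / 2 in all."""
    return n_apps * (n_apps - 1) // 2

def connectors_needed(n_apps):
    """Integration at the data source: one connector per application."""
    return n_apps

for n in (3, 5, 10):
    print(f"{n} apps: {interfaces_needed(n)} interfaces vs "
          f"{connectors_needed(n)} connectors")
# 3 apps: 3 interfaces vs 3 connectors
# 5 apps: 10 interfaces vs 5 connectors
# 10 apps: 45 interfaces vs 10 connectors
```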
Finally, the choice isn’t among applications that save their attributes in a proprietary format so that customers must commit themselves to a proprietary solution. Instead, it is between such restrictive applications and those that open up data access, clearly establishing that it is the consumer who owns the data.
Note: I have made minor changes from the original version of this post for purposes of clarification.
Hi Nick,
You might find this article interesting.
http://www.newsweek.com/plan-quit-big-data-might-tell-your-boss-you-do-298026
Appropriate data safeguards are still necessary; that said, not all data requires the same safeguards. But if the goal of gathering all of this data – internal, external, environmental, and behavioral – is to provide actionable recommendations to managers, then I think that cognitive computing and natural language technologies are getting sophisticated enough to be part of the solution. Commercial predictive analytic applications are getting very good at collecting and sharing data in ways that don’t require an understanding of the underlying data models – they just need to know how to frame the question. Retention and succession management is just one small example. Autonomous equity and commodity trading applications are another. A friend works for a company in China that is developing a Cloud-based app for breast cancer detection, tapping hundreds of millions of medical records and medical imaging data sets from disparate sources.
Workday customers will be updated to release 24 over the weekend of March 14-15. I expect a number of them will begin tinkering with the Insight application referenced in this article in short order. Maybe their managers will even act on those recommendations, but that’s another matter entirely.
http://dilbert.com/strip/2007-05-16