The title in this case is from the Leonard Cohen song.
Over the last few months I’ve come across this issue quite a bit, and it goes to the heart of where software technology is leading us. The underlying question can be boiled down to this: should software be thought of as a set of “tools,” or as an overarching solution that can handle data in whatever way the organization requires? It is a fundamental question because what we call Big Data, despite all of the hoopla, is really a relative term that changes with hardware, storage, and software scalability. What was Big Data in 1997 is not Big Data in 2016.
As Moore’s Law expands scalability at lower cost, organizations and their subject-matter experts (SMEs) are finding that the dedicated software tools at hand are insufficient to leverage the additional information that can be derived from their data. The reason is simple. A COTS tool publisher determines the required functionality based on a structured set of data and codes to that requirement. The timeframe is usually extended and the approach highly structured. There are very good reasons for this approach in industries where structure is necessary and the environment is fairly stable, but the list of industries that fall into that category is rapidly shrinking. Thus there is a large gap that must be filled by workarounds, custom code, and suboptimized use of Excel. Organizations and people cannot wait until the self-styled software SMEs get around to providing that upgrade two years from now so that people can do their jobs.
Thus the focus must shift to data and to the software technologies that maximize its immediate exploitation for business purposes. The key here is the rise of Fourth Generation applications built on object-oriented programming languages that most closely replicate the flexibility of open source. What this means is that in lieu of buying a set of “tools,” each focused on solving a specific problem and stitched together by a common platform or through data transfer, software is now available that deals with both data and UI in an agnostic fashion.
The availability of flexible Fourth Generation software is, as one would imagine, of great concern to incumbents who have built their business models on defending territory based on a set of artifacts provided in the software. Oftentimes these artifacts are nothing more than automatically filled-in forms that were previously filled in manually. That model was fine during the first and second waves of automation in the 1980s and 1990s, but such capabilities are trivial in 2016, given software focused on data that can be quickly adapted to provide functionality as needed. This development also renders trivial those old checklists that IT shops used to send out as a lazy way of assessing the relative capabilities of software to simplify the competitive range.
Tools, by definition, restrict themselves to a subset of data in order to provide a specific set of capabilities. Software that can expand to include any set of data, and that allows the data to be displayed and processed as necessary through user configuration, adapts more quickly and effectively to organizational needs. Such software also tends to eliminate the need for multiple “best-of-breed” toolset approaches that are the best of no breed and, more importantly, goes beyond the limited functionality and limited ways of deriving importance from data found in structured tools. The reason is that the data drives what is possible and important, rather than the tool imposing a well-trod interpretation of importance based on a limited set of data stored in a proprietary format.
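The contrast above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular product’s design: the record fields and the view configuration below are assumptions invented for the example. The point is that the display logic is derived from user configuration plus whatever fields the data happens to carry, rather than being hard-coded into the tool.

```python
def render_view(records, view):
    """Render any list of dict records according to a user-defined view.

    The `view` configuration picks the columns, an optional sort key, and
    optional derived fields -- none of which the "tool" knew about when
    it was built.
    """
    rows = []
    for rec in records:
        row = dict(rec)
        # Compute user-defined derived fields from the raw data.
        for name, fn in view.get("derived", {}).items():
            row[name] = fn(row)
        rows.append(row)
    if "sort_by" in view:
        rows.sort(key=lambda r: r[view["sort_by"]])
    # Project the rows onto only the configured columns.
    return [[row[col] for col in view["columns"]] for row in rows]

# Hypothetical project data -- it could come from any source or schema.
records = [
    {"task": "design", "planned": 10, "actual": 12},
    {"task": "build", "planned": 30, "actual": 25},
]

# The user, not the vendor, decides that variance is what matters.
view = {
    "columns": ["task", "variance"],
    "derived": {"variance": lambda r: r["actual"] - r["planned"]},
    "sort_by": "task",
}

print(render_view(records, view))  # → [['build', -5], ['design', 2]]
```

A new data source with different fields requires only a new `view` dictionary, not a new release of the software — which is the sense in which the data, rather than the tool, drives what is possible.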
An important effect of Fourth Generation software, with its user-driven flexibility in UI and functionality, is that it puts the domain SME back in the driver’s seat. This is an important development. For too long, SMEs have had to content themselves with recommending and advocating for functionality in software while waiting for the market (software publishers) to respond. Essential business functionality with limited market commonality often meant that organizations had to wait until the rest of the market drove software publishers to meet their needs, finance expensive custom development (either organic or contracted), or fill the gaps with suboptimized, ad hoc internal solutions. With software that adapts its UI and functionality to any data that can be accessed, using simple configuration capabilities, SMEs can fill these gaps with a consistent solution that maintains data fidelity and aids in the capture and sustainability of corporate knowledge.
Furthermore, for all of the talk about Agile software techniques, one cannot implement Agile using software languages and approaches designed in an earlier age that resist optimization of the method. Fourth Generation software lends itself most effectively to Agile, since configuration using a simple object-oriented language gets us to the ideal, without reliance on single points of failure, of releasable solutions at the end of a two-week sprint. No doubt there are developers out there making good money who may challenge this assertion, but they are the exceptions that prove the rule. An organization should be able to optimize the pool of contributors to solution development and rollout in support of essential business processes. Otherwise, Agile is just a pretext for papering over suboptimized development approaches, software languages, and the self-interest of developers who cannot plan or produce a releasable product on time and within budget.
In the end, the change in mindset from tools to data comes down to who owns the data: the organization that creates and utilizes it (the customer), or the proprietary software tool publishers? Clearly the economics will win out in favor of the customer. It is time to displace “tools” thinking.
Note: I’ve revised the title of the blog for clarity.