I recently returned from traveling, and much of the discussion revolved around the issues of scalability and the use of data. What is clear is that the conversation at the project manager level is shifting from a long-running focus on reports and metrics to one focused on data and what can be learned from it. As with any technology, information technology exploits what is presented before it. Most recently, accelerated improvements in hardware and communications technology have allowed us to begin to collect and use ever larger sets of data.
The phrase “actionable” has been thrown around quite a bit in marketing materials, but what does this term really mean? Can data be actionable? No. Can intelligence derived from that data be actionable? Yes. But is all data that is transformed into intelligence actionable? No. Does it need to be? No.
There are also kinds and levels of intelligence, particularly as they relate to organizations and business enterprises. Here is a short list:
a. Competitive intelligence. This is intelligence derived from data that informs decision makers about how their organization fits into the external environment, further informing the development of strategic direction.
b. Business intelligence. This is intelligence derived from data that informs decision makers about the internal effectiveness of their organization both in the past and into the future.
c. Business analytics. This is the transformation of historical and trending enterprise data to provide insight into future performance. This includes identifying any underlying drivers of performance, and any emerging trends that will manifest as risk. The purpose is to provide sufficient early warning so that risk can be handled before it fully manifests, thereby keeping the effort being measured consistent with the goals of the organization.
Note, especially among those of you who may have a military background, that what I’ve outlined is a hierarchy of information and intelligence that addresses each level of an organization’s operations: strategic, operational, and tactical. For many decision makers, translating tactical level intelligence into strategic positioning through the operational layer presents the greatest challenge. The reason for this is that, historically, there often has been a break in the continuity between data collected at the tactical level and that being used at the strategic level.
The culprit is the operational layer, which has always been problematic for organizations and for the individuals who find themselves there. We see this difficulty reflected in the attrition rate at this level: some individuals cannot successfully make the transition in thinking. It appears in the U.S. Army command structure when advancing from the battalion to the brigade level, in the U.S. Navy command structure when advancing from Department Head/Staff/sea command to organizational or fleet command (depending on line or staff corps), and in business among those just below the C level.
Another way to look at this is through the traditional hierarchical pyramid, in which data represents the wider floor upon which each subsequent, and slightly reduced, level is built. In the past (and to a certain extent this condition still exists in many places today) each level has constructed its own data stream, with the break most often coming at the operational level. This discontinuity is then reflected in the inconsistency between bottom-up and top-down decision making.
Information technology is influencing and changing this dynamic by addressing the main reason the discontinuity exists–limitations in data and intelligence capabilities. These limitations also established a mindset that relied on limited, summarized, and human-readable reporting that often was “scrubbed” (especially at the operational level) on its way to the senior decision maker. Since data streams were discontinuous, there were different versions of reality. When aspects of the human equation are added, such as selection bias, the intelligence will not match what the data would otherwise indicate.
As I’ve written about previously in this blog, the application of Moore’s Law to physical computing performance and storage has pushed software toward greater scalability in dealing with ever-increasing datasets. What is defined as big data today will not be big data tomorrow.
Organizations, in reaction to this condition, have in many cases tended to simply look at all of the data they collect and throw it together into one giant pool. Not fully understanding what the data may say, they have taken a number of ad hoc approaches. In some cases this has caused old labor-intensive data mining and rationalization efforts to once again rise from the ashes to which they were rightly consigned in the past. On the opposite end, it has caused a reliance on pre-defined data queries or hard-coded software solutions, oftentimes based on what had been provided using human-readable reporting. Both approaches are self-limiting and, to a large extent, self-defeating: the first because the effort and time required to construct the system will outlast the organization’s need for the intelligence, and the second because no value (or additional insight) is added to the process.
When dealing with large, disparate sources of data, value is derived through the additional knowledge discovered through the proper use of that data. This is the basis of the concept known as knowledge discovery in databases (KDD). Given that organizations know the source and type of data being collected, it is not necessary to reinvent the wheel by approaching data as if it were a repository of Babel. No doubt the euphemisms, semantics, and lexicons used by software publishers differ, but quite often, especially where data underlies a profession or a business discipline, these elements can be rationalized and/or normalized, provided that those doing the rationalization or normalization possess the appropriate cross-domain business knowledge.
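As a rough illustration of that rationalization, differing vendor lexicons for the same underlying concepts can be mapped to one canonical schema. This is only a minimal sketch; the source names, field labels, and mappings below are invented for illustration, though the canonical terms echo common earned-value usage.

```python
# Hypothetical sketch: rationalizing differing vendor lexicons into one
# canonical schema. All source and field names here are invented.

# Each source system uses its own labels for the same underlying concept.
CANONICAL_MAP = {
    "vendor_a": {"bcws": "planned_value", "bcwp": "earned_value", "acwp": "actual_cost"},
    "vendor_b": {"pv": "planned_value", "ev": "earned_value", "ac": "actual_cost"},
}

def normalize_record(source: str, record: dict) -> dict:
    """Translate a source-specific record into the canonical lexicon."""
    mapping = CANONICAL_MAP[source]
    # Unknown fields pass through unchanged rather than being dropped.
    return {mapping.get(k, k): v for k, v in record.items()}

# Two differently labeled records rationalize to the same shape.
a = normalize_record("vendor_a", {"bcws": 100, "bcwp": 90, "acwp": 95})
b = normalize_record("vendor_b", {"pv": 100, "ev": 90, "ac": 95})
assert a == b == {"planned_value": 100, "earned_value": 90, "actual_cost": 95}
```

The point is not the mapping table itself but that such a table can only be written by someone with the cross-domain knowledge to know that, say, two vendors' labels denote the same measure.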
This leads to identifying the characteristics* of data that are necessary to achieve continuity from the tactical to the strategic level, while preserving some additional necessary qualitative traits such as fidelity, credibility, consistency, and accuracy. These are:
- Tangible. Data must exist and the elements of data should record something that correspondingly exists.
- Measurable. What the data records must be in a form that can be captured and measured.
- Sufficient. Data must be sufficient to derive significance. This includes not only depth in data but also, especially in the case of tracking trends, depth across time-phasing.
- Significant. Data must be able, once processed, to contribute tangible information to the user. This goes beyond the statistical significance noted in the prior characteristic, in that the intelligence must actually contribute to some understanding of the system.
- Timely. Data must be timely so that it is delivered within its useful life. The source of the data must also be provided consistently and at a consistent periodicity.
- Relevant. Data must be relevant to the needs of the organization at each level. This not only tests what is being measured, but also identifies what should be measured but is not.
- Reliable. The sources of the data must be reliable, contributing to adherence to the traits already listed.
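Several of these characteristics lend themselves to automated screening rather than human inspection. The following is a minimal sketch of that idea, assuming a simple dated-record stream; the record shape, thresholds, and field names are all hypothetical.

```python
from datetime import date, timedelta

# Hypothetical sketch: screening a time-phased data stream against a few of
# the characteristics above. Record shape and thresholds are invented.

def assess_stream(records, min_depth=12, max_staleness_days=30):
    """Return which of a few characteristics a stream of dated records satisfies."""
    results = {}
    # Tangible/Measurable: every record carries a recorded numeric value.
    results["measurable"] = all(
        isinstance(r.get("value"), (int, float)) for r in records
    )
    # Sufficient: enough depth across time-phasing to derive significance.
    results["sufficient"] = len(records) >= min_depth
    # Timely: the newest record still falls within its useful life.
    newest = max((r["as_of"] for r in records), default=None)
    results["timely"] = (
        newest is not None
        and (date.today() - newest) <= timedelta(days=max_staleness_days)
    )
    return results

# Twelve weekly observations, the most recent dated today.
stream = [
    {"as_of": date.today() - timedelta(days=7 * i), "value": 100 - i}
    for i in range(12)
]
print(assess_stream(stream))
# {'measurable': True, 'sufficient': True, 'timely': True}
```

Traits such as significance and relevance resist this kind of mechanical check, which is exactly why they require judgment at each level of the hierarchy.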
This is the shorthand I currently use in assessing data requirements, and the list is not intended to be exhaustive. But it points to two further considerations when delivering a solution.
First, at what point does the person cease to be the computer? Business analytics–the tactical level of enterprise data optimization–is oftentimes stuck in providing users with a choice of chart or graph to represent the data. And as noted by many writers, such as this one, the manner in which data is represented will no doubt influence its interpretation. But in this case the person is still the computer after the brute-force digital computing is completed. There is a need for more effective significance-testing and modeling of data, with built-in controls for selection bias.
Second, how should data be summarized at the operational and strategic levels so that “signatures” can be identified that inform intelligence? Furthermore, it is important to understand what kind of data must supplement the tactical-level data at those other levels. Thus, data streams are not only minimized to eliminate redundancy, but also properly aligned to the level of data intelligence.
*Note that there are other aspects of data characteristics noted by other sources here, here, and here. Most of these concern themselves with data quality and what I would consider to be baseline data traits, which need to be separately assessed and tested, as opposed to antecedent characteristics.