I have been attending conferences and meetings of late and came upon a discussion of how to reduce the number of data streams while leveraging Moore's Law to provide more, and better, data. Over lunch, colleagues asked whether requesting more detailed data would provide greater insight. This led to a discussion of the qualitative differences in data depending on what information is being sought. My response was: "well, there has to be a pony in there somewhere." This was greeted by laughter, but then I finished the point: more detailed data doesn't necessarily yield greater insight (though it could, and only actually looking at it will tell you that, particularly in applying the principles of knowledge discovery in databases, or KDD). But more detailed data that is based on a hierarchical structure will, at the least, provide greater reliability and pinpoint areas of intersection, detecting risk manifestation that is otherwise averaged out, and therefore hidden, at the summary levels.
Not to steal the thunder of new studies in this area that are due out later this spring but, having actually achieved lowest-level integration for extremely complex projects through my day job, I am aware that there is little (though not zero) insight gained in predictive power between, say, the control account level of a WBS and the work package level. Going further down to the element-of-cost level may, in the words of a character in the movie Still Alice, fall into "the great academic tradition of knowing more and more about less and less until we know everything about nothing." But while that may be true for project management, it isn't necessarily so when collecting parametrics and auditing the validity of financial information.
Rolling up data from individually detailed elements of a hierarchy is the proper way to ensure credibility. Since we are at the point where a TB of data has virtually the same marginal cost as a GB of data (which is vanishingly small to begin with), the more the merrier in eliminating the abuse associated with human-readable summary reporting. Furthermore, I have long proposed, through this blog and elsewhere, that the emphasis should shift away from people, process, and tools to people, process, and data. This rightly establishes the feedback loop necessary for proper development and project management. More importantly, the same data available through project management processes satisfy the different purposes of domains within the organization and of multiple external stakeholders.
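To make the roll-up point concrete, here is a minimal sketch in Python, using hypothetical field names and sample figures, of deriving summary totals from the lowest-level elements so that every number at the control account level is traceable to the detail beneath it:

```python
from collections import defaultdict

# Hypothetical lowest-level records: each element of cost is tied to a
# work package, which in turn rolls up to a control account.
elements = [
    {"control_account": "CA-01", "work_package": "WP-01.1", "element_of_cost": "labor",    "actual_cost": 120_000},
    {"control_account": "CA-01", "work_package": "WP-01.1", "element_of_cost": "material", "actual_cost": 45_000},
    {"control_account": "CA-01", "work_package": "WP-01.2", "element_of_cost": "labor",    "actual_cost": 80_000},
    {"control_account": "CA-02", "work_package": "WP-02.1", "element_of_cost": "labor",    "actual_cost": 60_000},
]

def roll_up(records, level):
    """Sum actual cost at the requested level of the hierarchy."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[level]] += rec["actual_cost"]
    return dict(totals)

# The same detailed records serve every consumer: work-package totals for
# the project team, control-account totals for oversight.
print(roll_up(elements, "work_package"))     # {'WP-01.1': 165000.0, 'WP-01.2': 80000.0, 'WP-02.1': 60000.0}
print(roll_up(elements, "control_account"))  # {'CA-01': 245000.0, 'CA-02': 60000.0}
```

The point of the sketch is that the summary figures are computed, not keyed in, so they cannot drift from the detail they claim to summarize.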
This then leads us to the concept of integrated project management (IPM), which has become little more than a buzz-phrase and receives a lot of hand-waving, mostly from technology companies that want to push their tools, which are quickly becoming obsolete, while appearing forward-leaning. This tool-centric approach is nothing more than marketing: focusing on what the software manufacturer would have us believe is important based on the functionality baked into their applications. One can see how this could be a successful approach, given the emphasis on tools in the PM triad. But, of course, it is self-limiting in a self-interested sort of way. The emphasis needs to be on the qualitative and informative attributes of the available data that meet the requirements of different data consumers, not on tool functionality, while minimizing, to the extent possible, the number of data streams.
Thus, there are at least two main aspects of data that are important in understanding the utility of project management: early warning/predictiveness and credibility/traceability/fidelity. The chart attached below gives a rough back-of-the-envelope outline of this point, with some proposed elements, though this list is not intended to be exhaustive.
In order to capture data across the essential elements of project management, our data must demonstrate both a breadth and a depth that allow for the discovery of intersections among the different elements. The weakness in the two-dimensional model above is that it treats each indicator by itself. But when we combine, for example, IMS consecutive slips with the other elements listed, the informational power of the data becomes many times greater. This tells us that the weakness in our present systems is that we treat the data as a continuity between autonomous elements. But we know that a project consists of discontinuities, where the next level of achievement/progress is a function of risk. Thus, when we talk about IPM, the secret is in focusing on data that inform us about what our systems are doing. This will require more sophisticated types of modeling.
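As a rough illustration of looking for intersections rather than reading each indicator in isolation, here is a minimal sketch with hypothetical indicator names and thresholds; the point is only that risk which averages out at the summary level shows up where two or more independent signals coincide on the same element:

```python
# Hypothetical indicators for each work package.
work_packages = [
    {"id": "WP-01.1", "ims_consecutive_slips": 3, "cpi": 0.82, "staffing_gap_pct": 15},
    {"id": "WP-01.2", "ims_consecutive_slips": 0, "cpi": 1.05, "staffing_gap_pct": 2},
    {"id": "WP-02.1", "ims_consecutive_slips": 2, "cpi": 0.95, "staffing_gap_pct": 20},
]

# Hypothetical trip-wires, one per indicator.
checks = {
    "schedule": lambda wp: wp["ims_consecutive_slips"] >= 2,
    "cost":     lambda wp: wp["cpi"] < 0.90,
    "staffing": lambda wp: wp["staffing_gap_pct"] > 10,
}

for wp in work_packages:
    tripped = [name for name, check in checks.items() if check(wp)]
    # Flag only where independent indicators intersect on the same element.
    if len(tripped) >= 2:
        print(f"{wp['id']}: intersecting signals -> {', '.join(tripped)}")
```

This is obviously a toy; the real need is for models that treat the relationships among these signals, not fixed thresholds applied one at a time.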