“Woodshedding” is a slang term in music, particularly in jazz, for practicing an instrument away from public performance in order to explore new musical ideas without critical judgment. This can be done with or without the participation of other musicians. For example, much attention has recently been given to the release of Bob Dylan’s Basement Tapes. It is unusual to bother recording such sessions, given that their purpose is improvisation and exploration, and so few additional examples of “basement tapes” exist from other notable artists.
So for me the holiday is a sort of opportunity to do some woodshedding. The next step is to vet such thoughts in informal media, such as this blog, which allows for the informal dialogue and exchange of information that the high standards of white papers and professional papers do not, and where thoughts need not yet be fully formed and defensible. My latest mental romps have been inspired by the movie about Alan Turing, The Imitation Game, and the British series The Bletchley Circle. Thinking about one of the fathers of modern computing reminded me that the term “computer” originally referred to people.
As a matter of fact, though the terminology now refers to the digital devices that have insinuated themselves into every part of our lives, people continue to act as computers. Despite fantastical fears of AI taking our jobs and taking over the world, we are far from the singularity. Our digital devices can only be programmed to go so far. The so-called heuristics in computing today are still hard-wired functions, not unlike the methods a good con artist uses in “reading” the audience or the mark. With the new technology for dealing with big data we have the ability to apply many of the methods originated by the people of the real-life Bletchley Park of the Second World War. Still, even with refinements and advances in the math, these methods provide a great deal of external information about the patterns and probable actions of the objects of the data, but very little insight into the internal cause-and-effect that creates the data, which still requires human intervention, computation, empathy, and insight.
Thus, my latest woodshedding has involved thinking about project risk. The reason for this is the recent emphasis on the use of Monte Carlo simulation in project management, usually focused on the time-phased schedule. Cost is also sometimes included in this discussion as a function of the resources assigned to the time-phased plan, though the fatal error in this approach is failing to understand that technical achievement and financial value analysis are separate functions that require a bit more computation.
It is useful to understand the original purpose of Monte Carlo simulation. The method itself was developed by Stanislaw Ulam, John von Neumann, and Nicholas Metropolis at Los Alamos; Nobel physicist Murray Gell-Mann, while working at RAND Corporation (Research and No Development), applied it with a team of other physicists (Jess Marcum and Keith Brueckner) to determine the probability of a particular number coming up from a set of seemingly random numbers. Gell-Mann provides a good overview of the theory and its proof in his book The Quark and the Jaguar. The insight derived from Monte Carlo computation has been to show that systems in the universe often organize themselves into patterns. Instead of an event merely being probable by chance, we find that, given all of the events that have occurred to date, there is some determinism that yields regularities which can be tracked and predicted. Thus, the use of Monte Carlo simulation in our nether world of project management, which inhabits the void between microeconomics and business economics, provides us with some transient predictive probabilities, given the information stream at that particular time, of the risks that have manifested and are influencing the project.
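To make the project management use concrete, here is a minimal sketch of a schedule-risk Monte Carlo simulation, assuming a hypothetical three-activity serial schedule with triangular duration estimates; the activity names and numbers are illustrative placeholders rather than anything drawn from a real project or tool.

```python
import random

# Hypothetical three-activity serial schedule; durations in days as
# (optimistic, most likely, pessimistic) estimates.
ACTIVITIES = {
    "design":    (10, 15, 25),
    "build":     (20, 30, 55),
    "integrate": ( 5, 10, 30),
}

def simulate_once():
    """Draw one possible total duration from triangular distributions."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in ACTIVITIES.values())

def simulate(trials=10_000):
    """Run many trials and report selected percentiles of total duration."""
    outcomes = sorted(simulate_once() for _ in range(trials))
    percentile = lambda p: outcomes[int(p / 100 * (trials - 1))]
    return {"P50": percentile(50), "P80": percentile(80), "P95": percentile(95)}

print(simulate())
```

The output is a set of percentiles rather than a single deterministic date, which is exactly the sort of transient, information-dependent probability described above; cost could be layered on by pricing the resources assigned to each sampled duration, though, as noted, technical achievement and financial value analysis take more than that.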
What the use of Monte Carlo and other such methods in identifying regularities does not do is determine cause-and-effect. We attempt to bridge this deficiency with qualitative risk analysis, in which we articulate risk factors to be handled and tie them to cost and schedule artifacts. This is good as far as it goes. But it seems that we have some of this backward. Oftentimes, despite the application of these systems to project management, we still fail to overcome the risks inherent in the project, which then requires a redefinition of project goals. We often attribute these failures to personnel systems, and there is no shortage of consultants all too willing to sell the latest secret answer to project success. Yet, despite years of such consulting methods being applied to many of the same organizations, there is still a fairly consistent rate of failure in properly identifying cause-and-effect.
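As a rough illustration of what tying articulated risk factors to cost and schedule artifacts can look like, here is a minimal sketch of a qualitative risk register whose entries point at the hypothetical activities from the sketch above; the probabilities and impacts are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One articulated risk factor tied to a schedule artifact."""
    description: str
    activity: str        # schedule activity the risk maps to
    probability: float   # subjective likelihood, 0 to 1
    impact_days: float   # schedule impact if the risk is realized

REGISTER = [
    Risk("Late vendor delivery of test rig", "build", 0.30, 15),
    Risk("Interface specification unstable", "integrate", 0.50, 10),
]

# Expected schedule exposure per activity: probability times impact.
exposure = {}
for r in REGISTER:
    exposure[r.activity] = exposure.get(r.activity, 0.0) + r.probability * r.impact_days

print(exposure)  # {'build': 4.5, 'integrate': 5.0}
```

Note what this does and does not do: it locates where exposure sits in the schedule, but it only restates what we already believe about the risks and says nothing about why they arise, which is the cause-and-effect gap described above.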
Cause-and-effect is the purpose of all of our metrics. Only by properly “computing” cause-and-effect will we pass the “So What?” test. Our first forays into this area involve modeling. Given enough data we can model our systems and, when the results of our experiments play out in real time to approximate what actually happens, we gain confidence that our models are valid. Both economists and physicists (well, the best ones) use this modeling method. It allows us to get the answer without entirely understanding the internal workings that lead to the final result. As in Douglas Adams’ answer to the secret of life, the universe, and everything, where the answer is “42,” we can at least work backwards. And oftentimes that is all we are left with, which explains the persistently high rate of failure over time.
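One modest way to check whether a model “plays out to approximate what actually happens” is to backtest it: record the predicted percentile band for each past forecast and count how often the actual outcome landed inside it. The figures below are made up for illustration.

```python
# Each record pairs a past forecast band (P10, P90 total duration, in days)
# with the duration that actually occurred. All figures are illustrative.
forecasts = [
    ((50, 70), 66),
    ((45, 60), 72),   # the actual fell outside the predicted band
    ((55, 80), 61),
    ((40, 65), 58),
]

hits = sum(low <= actual <= high for (low, high), actual in forecasts)
coverage = hits / len(forecasts)
print(f"{coverage:.0%} of actuals fell inside the P10-P90 band")  # 75%
```

If coverage drifts far from the nominal 80 percent, the model is telling us about its own blind spots, but it still tells us nothing about why the misses happened.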
While I was pondering this reality I came across an article in Quanta Magazine, “A New Physics Theory of Life,” outlining important new work by the MIT physicist Jeremy England. From the perspective of evolutionary biology, it pretty much shows that the Second Law of Thermodynamics not only supports the existence and evolution of life (which we have known as far back as Schrödinger), but probably makes life inevitable under a host of conditions. In relation to project management and risk, it was this passage that struck me most forcefully:
“Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.”
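For readers who want the relation spelled out, the result being described is, as I read it, the Crooks fluctuation theorem, which in one common form can be written as

$$\Delta S = k_B \ln \frac{P_{\text{forward}}}{P_{\text{reverse}}}$$

so the entropy produced by a process is fixed by how much more probable that process is than its time-reverse; as the ratio grows, the behavior becomes, in the article’s words, more and more irreversible.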
No project is a closed system (just as, at a larger level, the earth is not). The level of entropy in the system will vary with the external inputs that change it: effort, resources, and technical expertise. As I have written previously (and somewhat controversially), there is both chaos and determinism in our systems. An individual, or a system of individuals, can adapt to the conditions in which they are placed, but only to a certain level. The probability that an individual or a system of individuals can largely overcome the risks realized to date is non-zero, but vanishingly small. The chance that a peasant will become a president is about the same. The idea that it is possible, even if vanishingly so, keeps the class of peasants in line so that those born with privilege can continue to reassuringly pretend that their success is more than mathematics.
When we measure risk, what we are measuring is the amount of entropy in the system that we need to handle or overcome. We do this by borrowing energy, in the form of resources of some kind, from other, external systems. The conditions in which we operate may be ideal or less than ideal.
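Carrying the metaphor into numbers, one conventional (and admittedly crude) way to size the energy we expect to borrow is to budget the gap between a risk-adjusted percentile and the deterministic plan, using outputs like those from the earlier simulation sketch; the figures here are again hypothetical.

```python
# Hypothetical outputs in the style of the earlier simulate() sketch.
deterministic_plan = 55      # sum of most-likely durations, in days
p80_completion = 68          # 80th-percentile simulated total duration
burn_rate = 4_000            # assumed fully loaded cost per day

schedule_contingency = p80_completion - deterministic_plan   # 13 days
cost_contingency = schedule_contingency * burn_rate          # 52,000

print(f"Hold {schedule_contingency} days and {cost_contingency:,} in reserve")
```

The reserve is the energy we expect to have to borrow from outside the project; how often we actually draw on it reflects whether the conditions we operate in are ideal or less than ideal.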
What England’s work, combined with that of his predecessors, seems to suggest is that the Second Law almost makes life inevitable except where it is impossible. For astrophysics this makes the entire Rare Earth hypothesis a non sequitur. That is, wherever life can develop it will develop. The life that does develop is fit for its environment and continues to evolve as changes to the environment occur. Thus, new forms of organization and structure are found in otherwise chaotic systems as a natural outgrowth of entropy.
Similarly, when we look at more cohesive and less complex systems, such as projects, what we find are systems that adapt and are fit for the environments in which they are conceived. This insight is not new; it has been observed in organizations using more mundane tools, such as Deming’s red bead experiment. Scientifically, however, we now have insight into the means of determining the limits of success, given the risk and entropy that have already been realized, against the resources needed to bring the project within acceptable ranges of success. This information goes beyond simply stating the problem and leaving the computing to the person, and thus passes the “So What?” test.
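As a rough sketch of what determining the limits of success given the risk already realized might look like in practice: re-run the earlier simulation with completed work fixed at its actual values and only the remaining activities left uncertain, then read off the probability of landing inside the acceptable range. Everything here is illustrative.

```python
import random

# Completed work enters as its actual duration; only remaining work is uncertain.
actuals_to_date = 38                       # days already consumed, realized risk included
remaining = {"integrate": (5, 10, 30)}     # (optimistic, most likely, pessimistic)
target = 60                                # acceptable total duration, in days

def probability_of_success(trials=10_000):
    """Probability that actuals plus simulated remaining work meet the target."""
    hits = 0
    for _ in range(trials):
        total = actuals_to_date + sum(
            random.triangular(lo, hi, mode) for lo, mode, hi in remaining.values()
        )
        hits += total <= target
    return hits / trials

print(f"P(finish within {target} days) is roughly {probability_of_success():.0%}")
```

The point is not the particular number but that the chance of recovery is computed conditionally on what has already happened, which is the information that goes beyond simply stating the problem.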