Out of Winter Woodshedding — Thinking About Project Risk and Passing the “So What?” Test

“Woodshedding” is a slang term among musicians, particularly in jazz, for practicing an instrument outside of public performance in order to explore new musical ideas free from critical judgment.  This can be done with or without the participation of other musicians.  Bob Dylan’s Basement Tapes release, which has received much attention recently, is a famous example.  It is unusual to bother recording such sessions, given that their purpose is improvisation and exploration, which is why so few other examples of “basement tapes” exist from notable artists.

So for me the holiday is an opportunity to do some woodshedding.  The next step is to vet such thoughts in informal media, such as this blog, where the exacting standards of white papers and professional papers do not apply, allowing for informal dialogue, the exchange of information, and thoughts that are not yet fully formed and defensible.  My latest mental romps have been inspired by the movie about Alan Turing, The Imitation Game, and the British series The Bletchley Circle.  Thinking about one of the fathers of modern computing reminded me that the term “computer” originally referred to people.

As a matter of fact, though the terminology now refers to the digital devices that have insinuated themselves into every part of our lives, people continue to act as computers.  Despite fantastical fears of AI taking our jobs and taking over the world, we are far from the singularity.  Our digital devices can only be programmed to go so far.  The so-called heuristics in computing today are still hard-wired functions, similar to the methods a good con artist uses in “reading” the audience or the mark.  With new technology for dealing with big data we have the ability to apply many of the methods originated by the people at the real-life Bletchley Park during the Second World War.  Still, even with refinements and advances in the math, these methods provide great external information about the patterns and probable actions of the objects of the data, but very little insight into the internal cause-and-effect that creates the data, which still requires human intervention, computation, empathy, and insight.

Thus, my latest woodshedding has involved thinking about project risk.  The reason is the recent emphasis on Monte Carlo simulation in project management, usually focused on the time-phased schedule.  Cost is sometimes included in this discussion as a function of the resources assigned to the time-phased plan, though the fatal error in that approach is the failure to understand that technical achievement and financial value analysis are separate functions that require a bit more computation.

It is useful to understand the original purpose of Monte Carlo analysis.  The method is generally credited to Stanislaw Ulam, John von Neumann, and Nicholas Metropolis at Los Alamos, who developed it to estimate, by repeated random sampling, quantities that were intractable to calculate directly.  Nobel physicist Murray Gell-Mann, who as a consultant at RAND Corporation (“Research and No Development”) worked with other physicists (Jess Marcum and Keith Breuckner) on the related problem of determining the probability of a number coming up from a set of seemingly random numbers, provides a good overview of the theory in his book The Quark and the Jaguar.  The insight derived from Monte Carlo computation has been to show that systems in the universe often organize themselves into patterns.  Instead of some event being probable purely by chance, we find that, given all of the events that have occurred to date, there is some determinism that yields regularities which can be tracked and predicted.  Thus, Monte Carlo simulation in our nether world of project management, which inhabits the void between microeconomics and business economics, provides us with transient predictive probabilities, given the information stream at that particular time, of the risks that have manifested and are influencing the project.
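
To make the mechanics concrete, here is a minimal sketch in Python of the kind of schedule-risk simulation described above.  The three tasks, their triangular duration estimates, and the deadline are all hypothetical, and a real implementation would model the full network logic of a schedule rather than a simple serial chain.

```python
import random

# Hypothetical serial schedule: (optimistic, most likely, pessimistic)
# duration estimates in days for each task.
TASKS = {
    "design":    (10, 15, 30),
    "build":     (20, 30, 60),
    "integrate": (5, 10, 25),
}

DEADLINE = 70       # hypothetical contractual finish, in days
TRIALS = 100_000    # number of simulated project outcomes

def simulate_once():
    """Draw one possible total duration from the triangular estimates."""
    return sum(random.triangular(low, high, mode)
               for low, mode, high in TASKS.values())

durations = [simulate_once() for _ in range(TRIALS)]
on_time = sum(d <= DEADLINE for d in durations) / TRIALS

print(f"Mean duration: {sum(durations) / TRIALS:.1f} days")
print(f"P(finish within {DEADLINE} days): {on_time:.1%}")
```

Note what the output is: an external distribution of possible outcomes given the estimates fed in, not an explanation of why any particular outcome occurs.  That limitation is the point of what follows.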

What the use of Monte Carlo and other such methods in identifying regularities does not do is determine cause-and-effect.  We attempt to bridge this deficiency with qualitative risk analysis, in which we articulate risk factors to be handled and tie them to cost and schedule artifacts.  This is good as far as it goes.  But it seems that we have some of this backward.  Oftentimes, despite the application of these systems to project management, we still fail to overcome the risks inherent in the project, which then requires a redefinition of project goals.  We often attribute these failures to personnel systems, and there is no shortage of consultants all too willing to sell the latest secret answer to project success.  Yet, despite years of such consulting methods applied to many of the same organizations, there is still a fairly consistent rate of failure in properly identifying cause-and-effect.

Cause-and-effect is the purpose of all of our metrics.  Only by properly “computing” cause-and-effect will we pass the “So What?” test.  Our first forays into this area involve modeling.  Given enough data we can model our systems, and when the results of our models approximate what actually happens, we know that our models are valid.  Both economists and physicists (well, the best ones) use this method.  It allows us to get the answer even without entirely understanding the internal workings that lead to the final result.  As with Douglas Adams’ answer to life, the universe, and everything, where the answer is “42,” we can at least work backwards.  And oftentimes that is all we are left with, which helps explain the persistent rate of failure over time.

While I was pondering this reality I came across an article in Quanta Magazine, “A New Physics Theory of Life,” outlining important new work by the MIT physicist Jeremy England.  From the perspective of evolutionary biology, this work suggests not only that the Second Law of Thermodynamics supports the existence and evolution of life (which we have known at least since Schrödinger), but that it probably makes life inevitable under a host of conditions.  In relation to project management and risk, it was this passage that struck me most forcefully:

“…Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.”
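
In symbols, the ratio described in that passage is the Crooks fluctuation theorem.  One common way of writing it (a summary of the passage rather than England’s own formulation), where ΔS is the total entropy produced and k_B is Boltzmann’s constant:

```latex
\[
\frac{P_{\text{forward}}}{P_{\text{reverse}}} = e^{\Delta S / k_B}
\qquad\Longleftrightarrow\qquad
\Delta S = k_B \ln \frac{P_{\text{forward}}}{P_{\text{reverse}}}
\]
```

As the entropy produced grows, the forward process becomes exponentially more probable than its reverse; that exponential asymmetry is what “irreversible” means here.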

No project is a closed system (just as the earth is not, on a larger level).  The level of entropy in the system will vary with the external inputs that change it: effort, resources, and technical expertise.  As I have written previously (and somewhat controversially), there is both chaos and determinism in our systems.  An individual, or a system of individuals, can adapt to the conditions in which they are placed, but only to a certain degree.  The probability that an individual or system of individuals can largely overcome the risks realized to date is non-zero, but vanishingly small.  The chance that a peasant will become a president is about the same.  The idea that it is possible, even if vanishingly so, keeps the class of peasants in line, so that those born with privilege can continue to reassuringly pretend that their success is more than mathematics.

When we measure risk, what we are measuring is the amount of entropy in the system that we need to handle or overcome.  We do this by borrowing energy, in the form of resources of some kind, from other, external systems.  The conditions in which we operate may be ideal or less than ideal.
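
The thermodynamic language is a metaphor here, but the loose analogy can be made concrete.  As a hypothetical illustration (reusing the simulated durations from the earlier sketch), one can compute the Shannon entropy of the distribution of project outcomes: a wider, flatter distribution carries more entropy, meaning more residual uncertainty to be handled or bought down with external resources.

```python
import math
from collections import Counter

def outcome_entropy(samples, bin_width=5.0):
    """Shannon entropy (in bits) of a histogram of simulated outcomes.

    Flatter, wider outcome distributions score higher: more residual
    schedule uncertainty that reserves must cover.
    """
    bins = Counter(int(s // bin_width) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# e.g., with the `durations` list from the Monte Carlo sketch above:
# print(f"{outcome_entropy(durations):.2f} bits of schedule uncertainty")
```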

What England’s work, combined with that of his predecessors, seems to suggest is that the Second Law makes life all but inevitable wherever it is not impossible.  For astrophysics this makes the entire Rare Earth hypothesis a non sequitur: wherever life can develop, it will develop.  The life that does develop is fit for its environment and continues to evolve as the environment changes.  Thus, new forms of organization and structure arise in otherwise chaotic systems as a natural outgrowth of entropy.

Similarly, when we look at more cohesive and less complex systems, such as projects, what we find are systems that adapt and are fit for the environments in which they are conceived.  This insight is not new; it has been observed in organizations using more mundane tools, such as Deming’s red bead experiment.  Scientifically, however, we now have insight into the means of determining the limits of success, given the risk and entropy already realized, weighed against the resources needed to bring the project within acceptable ranges of success.  This information goes beyond simply stating the problem and leaving the computing to the person, and thus passes the “So What?” test.

Finding Wisdom — Stephen Jay Gould in “The Mismeasure of Man”

Perhaps no thinker in the scientific community from the late 1970s into the new century pushed the boundaries of interpretation and thought regarding evolutionary biology and paleontology more significantly than Stephen Jay Gould.  An eminent scholar himself (among evolutionary biologists his technical work Ontogeny and Phylogeny (1977) is considered one of the most significant works in the field, and he ranks among the most important historians of science of the late 20th century), he was also the foremost popularizer of science of his generation (with the possible exceptions of Richard Dawkins and Carl Sagan), using his position to advance scientific knowledge and critical thinking, and to attack pseudoscientific, racist, and magical thinking that misused and misrepresented scientific knowledge and methods.

His concepts of punctuated equilibrium, spandrels, and the Panglossian paradigm pushed other evolutionary biologists to rise to new heights in considering and defending their own applications of neo-Darwinian theory, prompting sometimes heated debate.  These ideas remain controversial in the evolutionary community, with most objections seemingly based on the fear that they will be misused by non-scientists against evolution itself; and it is true that creationists and other pseudoscientists, aided and abetted by a scientifically illiterate popular press, misrepresented the so-called “Darwin Wars” as more significant than they really were.  But many of his ideas were reconciled and resolved into a new synthesis within the science of evolution.  Thus his insights, grounded as they were in the scientific method and proven theory, epitomized the very subject he popularized: nothing is ever completely settled in science, and all areas of human understanding are open to inquiry and, perhaps, revision, even if slight.

Having established himself as a preeminent science historian, science popularizer, scholar in several fields, and occasional iconoclast, Gould turned his attention to an area that, well into the late 20th century, was rife with ideology, prejudice, and pseudoscience: the issue of human intelligence and its measurement.  As Darwin learned over a hundred years before, it is one thing to propose that natural selection is the agent of evolution; it is another to demonstrate that the human species descended from other primate ancestors, and to show the manner in which sexual selection plays a role in human evolution.  For some well-entrenched societal interests and specialists, that is one step too far.  Gould’s work was attacked, but it has withstood those attacks and criticisms, and stands as a shining example of using critical thinking and analytical skills to strike down an artifact of popular culture and bad social science.

In The Mismeasure of Man, Gould begins by surveying the first scientific efforts to understand human intelligence by researchers such as Louis Agassiz and Paul Broca, among others, who studied human capabilities through the now-defunct science of craniometry.  When I first picked up the book I was reminded of Carl Sagan’s 1979 collection, Broca’s Brain, in which some of the same observations are made.  What Gould demonstrates is that the racial and sexual biases in the archetypes chosen by the researchers, in particular Samuel George Morton (1799-1851), provided them with the answers they wanted to find; their methodology was biased, and therefore invalid, from the start.  In particular, the comparisons of the skulls of Caucasians (from a particular portion of Europe), Black people (without differentiating ethnic or geographical origins), and “Mongolians” (Asian peoples without differentiation) used to identify different human “species” lacked rigor and were biased in their definitions from the outset.

To be fair, a peer-reviewed paper challenged Gould’s assertion that Morton may have fudged his cranial measurements, which were made by filling skulls with bird seed (or iron pellets, depending on the source).  Remeasuring a sample of the same skulls, combined with a survey from 1988, the authors found that Morton was largely accurate in his measures.  That research, however, while attempting to resurrect Morton’s integrity in light of his own, largely pre-scientific time, was unable to undermine the remainder of Gould’s thesis.  I can understand the point made by Gould’s critics that it is not necessarily constructive to apply modern methodological standards, or to imply dishonesty, to those early pioneers whose work has led to modern scientific understanding.  But as an historian I also understand that when reading Gibbon on the Roman Empire we learn a great deal about the prejudices of 18th-century British society, perhaps more than we learn of the Romans.  Gibbon and Morton, like most people, were not consciously aware of their own biases, or even that they were biases.  This is the reason for modern research and methodological standards in academic fields, and why human understanding is always “revisionist,” to use a supposed pejorative I once heard from one particularly ignorant individual.  Gibbon showed a way of approaching and writing about history; his work would not pass editorial review today, but he is valued because he is right in many of his observations and theses.  The same cannot be said for Morton, who seemed motivated by the politics of justifying black slavery, which is why Gould treats him so roughly, particularly given that some of Morton’s ideas still find comfort in many places in our own time.  In light of subsequent research, especially the Human Genome Project, Gould proves out right, which, after all, is the measure that counts.

But that is just the appetizer.  Gould then takes on the use of IQ (the intelligence quotient), g (the general intelligence factor), and the heritability of intelligence to imply human determinism, especially as generalized among groups.  He traces the original IQ tests developed by Alfred Binet and Théodore Simon to the introduction of universal education in France and the need to identify children with learning disabilities and those requiring remediation by grade and age group.  He then surveys how the Stanford psychologist Lewis Terman modified the test and transformed its purpose in an attempt to find an objective basis for determining human intelligence.  In critiquing this transformation Gould provides examples of the more obviously (to modern eyes) biased questions on the test, and then effectively destroys the statistical basis for using the test’s correlations to determine any objective measure of g.  He demonstrates that the correlations the psychological profession used to establish “g,” extracted through factor analysis, are both statistically and logically questionable, and that they commit the logical fallacy of reification: taking an abstract measure and imbuing it with a significance it cannot possess, as if it were an actual physical entity or “thing.”
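
Gould’s reification point is easy to demonstrate with a toy example.  In the hypothetical sketch below (invented data, not Gould’s), six test scores are generated from two independent abilities, so by construction no unitary “g” exists; yet the first principal component of the correlation matrix still “explains” a large share of the variance.  A large first component is guaranteed by positive correlations among tests; it does not certify a single underlying entity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Two independent latent abilities: by construction there is no single "g".
verbal = rng.normal(size=n)
spatial = rng.normal(size=n)

# Six hypothetical tests, each a noisy positive mix of the two abilities.
loadings = np.array([
    [0.8, 0.1],   # vocabulary
    [0.7, 0.2],   # reading
    [0.6, 0.3],   # analogies
    [0.3, 0.6],   # rotation
    [0.2, 0.7],   # mazes
    [0.1, 0.8],   # assembly
])
scores = np.column_stack([verbal, spatial]) @ loadings.T
scores += rng.normal(scale=0.5, size=scores.shape)

# The first eigenvalue of the correlation matrix is the variance
# "explained" by the first principal component -- the would-be "g".
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # descending order
print(f"First component 'explains' {eigvals[0] / eigvals.sum():.0%} "
      "of variance, despite two independent abilities underneath")
```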

Gould demonstrates that the variability of the measurements within groups, the clustering of results within the tests that identify distinct aptitudes, and the variability of results across time for the same individuals, given changes in material condition, education, and emotional maturity, render “g” an insignificant measure.  The coup de grâce in the original edition, a trope still often pulled out as a last resort by defenders of human determinism and IQ, is Gould’s analysis of the work of Cyril Burt, the oft-cited twin-studies researcher, who published fraudulent work asserting that IQ was highly heritable and unaffected by environment.  That we still hear endless pontificating on “nature vs. nurture,” and that Stanford-Binet and other tests are still used as a basis for determining a measure of “intelligence,” owes more to societal bias, and to the still pseudo-scientific methodologies of much of the psychological profession, than to scientific and intellectual honesty.

The core of Gould’s critique is to effectively discredit the concept of biological determinism, which he defines as “the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status.”

What Stephen Jay Gould demonstrates most significantly in The Mismeasure of Man, I think, is that human beings, particularly those with wealth, power, and influence, or who are part of a societally favored group, display an overwhelming desire to differentiate themselves from others, and will go to great lengths to do so to their own advantage.  This desire includes the misuse of science, whatever the cost to truth or integrity, to demonstrate an organic or heritable basis for their favored position relative to others in society when, in reality, the reasons are more complex, and perhaps more base and remediable.  Gould shows how public policy, educational focus, and discriminatory practices were shaped by the testing of immigrant and minority groups to deny them access to many of the benefits of the economic system and society.  Ideology was the driving factor in the application of these standardized tests, which served the purposes of societal and economic elites in convincing disenfranchised groups that they deserved their inferior status.  The label of “science” provided these tainted judgments with just the tinge of respectability they needed to overcome skepticism and opposition.

A few years after the publication of Gould’s work, a new example of this last phenomenon emerged: the notorious The Bell Curve (1994) by Richard Herrnstein and Charles Murray, the poster child of the tradition, harking back to Herbert Spencer’s Social Statics, of elites funding self-serving pseudo-science and (another word will not do) bullshit.  While Spencer could be forgiven his errors given his time and its scientific limitations, Herrnstein and Murray, who have little excuse, used often contradictory statistical methods resting on weak correlations (and weaker claims of causation) to argue for a race-based biological determinism.  Once again Gould, in the 1996 revision of his original work, dealt with these fallacies directly, demonstrating in detail the methodological errors in their work and the overreach inherent in their enterprise: another sad example of bias misusing knowledge as the intellectual basis to oppress other people and, perhaps more egregiously, to avoid coming to terms with the disastrous actions that American society has taken against one specific group of people because of the trivial difference of skin color.

With the yeoman work of Stephen Jay Gould in discrediting pseudo-scientific ideas and the misuse of statistical methodology to pigeonhole and classify people, to misuse sociobiology, and to advance self-serving theories of human determinism, the world has been given an example that even the best-financed and most entrenched elites cannot stop the advance of knowledge and information.  They will try, using ever more sophisticated methods of disinformation and advertising, but over time those efforts will be defeated.  It will happen because scientific efforts like the Human Genome Project have already demonstrated that there is only one race, the human race, and that we are all tied together by common ancestors.  The advantages that we realize over each other at any point in time are ephemeral.  Our knowledge of variability in the human species acknowledges differences in the heritable characteristics of individuals, but that knowledge implies nothing about our relative worth to one another; nor is it a moral judgment rendered from a higher authority that justifies derision, stigma, ridicule, discrimination, or reduced circumstances.  It will happen because in our new age, information, once transmitted, cannot be retracted; it is out there forever.  There is much wisdom here.  It is up to each of us to recognize it, and to inform our actions as a result of it.