Both Sides Now — The Value of Data Exploration

Over the last several months I have authored a number of stillborn articles that just did not live up to the standards that I set for this blog site. After all, sometimes we just have nothing important to add to the conversation. In a world dominated by narcissism, it is not necessary to constantly have something to say. Some reflection and consideration are necessary, especially if one is to be as succinct as possible.

A quote ascribed to Woodrow Wilson, possibly apocryphal though it does appear in two of his biographies, came in response to someone lauding him for making a number of short, succinct, and informative speeches. When asked how he was able to do this, President Wilson is supposed to have replied:

“It depends. If I am to speak ten minutes, I need a week for preparation; if fifteen minutes, three days; if half an hour, two days; if an hour, I am ready now.”

An undisciplined mind has a lot to say about nothing in particular, with varying degrees of fidelity to fact or truth. In normal conversation we most often free ourselves from the discipline expected of more rigorous thinking. This is not necessarily a bad thing if we are saying nothing of consequence, and there are gradations, of course. Even the most disciplined mind gets things wrong. We all need editors and fact checkers.

While I am pulling forth possibly apocryphal quotes, the most applicable one that comes to mind is a comment by Hemingway as told by his deckhand in Key West and Cuba, Arnold Samuelson. Hemingway was supposed to have given this advice to the aspiring writer:

“Don’t get discouraged because there’s a lot of mechanical work to writing. There is, and you can’t get out of it. I rewrote the first part of A Farewell to Arms at least fifty times. You’ve got to work it over. The first draft of anything is shit. When you first start to write you get all the kick and the reader gets none, but after you learn to work it’s your object to convey everything to the reader so that he remembers it not as a story he had read but something that happened to himself.”

Though it deals with fiction, Hemingway’s advice applies to any sort of writing and rhetoric. Dr. Roger Spiller, who more than anyone mentored me as a writer and historian, once told me, “Writing is one of those skills that, with greater knowledge, becomes harder rather than easier.”

As a result of some reflection over the last few months, I had to revisit the reason for the blog. Its purpose remains the same: it is a way to validate ideas and hypotheses with other professionals and interested amateurs in my areas of interest. I try to keep uninformed opinion in check, as all too many blogs turn out to be rants. Thus, a great deal of research goes into each of these posts, most of it from primary sources and from interactions with practitioners in the field. Opinions and conclusions are my own, my reasoning, for good or ill, is exposed for all the world to see, and I take responsibility for it.

This being said, part of my recent silence has also been due to my workload: the effort involved in my day job of running a technology company, and my recent role, since late last summer, as the Managing Editor of the College of Performance Management’s publication known as the Measurable News. Our emphasis in the latter case has been to find new contributions to the literature regarding business analytics and to define the concept of integrated project, program, and portfolio management. Stepping slightly over the line to make a pitch, I recommend that anyone interested in contributing to the publication submit an article. The submission guidelines can be found here.

Both Sides Now: New Perspectives

That out of the way, I recently saw, again on the small screen, the largely underrated movie about Neil Armstrong and the Apollo 11 moon landing, “First Man”, and was struck by this scene:

Unfortunately, the first part of the interview has been edited out of this clip and I cannot find the full scene. When asked “why space,” he prefaces his comments by stating that the atmosphere of the earth seems very large from the perspective of the ground but that, having touched the edge of space in his experience as a test pilot of the X-15, he learned that it is actually very thin. He then goes on to posit that looking at the earth from space will give us a new perspective. His conclusion to this observation is provided in the clip.

Armstrong’s words were prophetic in that the space program provided a new perspective and a new way of looking at things that were in front of us the whole time. Our spaceship Earth is a blue dot in a sea of space and, at least for a time, the people of our planet came to understand both our loneliness in space and our interdependence.

Earth from Apollo 8. Photo courtesy of NASA.

The impact of the Apollo program resulted in great strides being made in environmental and planetary sciences, geology, cosmology, biology, meteorology, and in day-to-day technology. The immediate effect was to inspire the environmental and human rights movements, among others. All of these advances taken together represent a new revolution in thought equal to that during the initial Enlightenment, one that is not yet finished despite the headwinds of reaction and recidivism.

It’s Life’s Illusions I Recall: Epistemology–Looking at and Engaging with the World

In his book Darwin’s Dangerous Idea, Daniel Dennett posited that what was “dangerous” about Darwinism is that it acts as a “universal acid” that, when touching other concepts and traditions, transforms them in ways that change our world-view. I accept Dennett’s position on the strength of the argument he makes and the evidence in front of us, and it is true that Darwinism–the insight that species evolve over time through natural selection–has transformed our perspective of the world and left the old ways of looking at things both reconstructed and unrecognizable.

In his work Time’s Arrow, Time’s Cycle, Stephen Jay Gould noted that Darwinism is one of the three great reconstructions of human thought through which, to quote Sigmund Freud, “Humanity…has had to endure from the hand of science…outrages upon its naive self-love.” These outrages include the Copernican revolution that removed the Earth from the center of the universe; Darwinism and the origin of species, including the descent of humanity; and what John McPhee coined as the concept of “deep time.”

But–and there is a “but”–I would propose that Darwinism and the other great reconstructions noted above are but different ingredients of a larger and broader, though compatible, innovation in the way the world is viewed and approached–a more powerful universal acid. That innovation in thought is empiricism.

It is this approach to understanding that eats through the many ills of human existence that lead to self-delusion and folly. Though you may not know it, if you are in the field of information technology or any of the sciences, you are part of this way of viewing and interacting with the world. Married with rational thinking, this epistemology–growing out of the astronomical observations of planets and other heavenly bodies by Charles Sanders Peirce, with further refinements by William James, John Dewey, and others–has come down to us in what is known as Pragmatism. (Note that the word pragmatism in this context is not the same as the more colloquial use of the word. For this reason Peirce preferred the term “pragmaticism.”) For an interesting and popular account of the development of modern thought and of Pragmatism written for the general reader, I highly recommend the Pulitzer Prize-winning The Metaphysical Club by Louis Menand.

At the core of this form of empiricism is the idea that the collection of data–recording, observing, and documenting the universe and nature as they are–will lead us to an understanding of things that we otherwise would not see. In our more mundane systems, such as business systems and organized efforts applying disciplined project and program management techniques and methods, we can likewise learn more about these complex adaptive systems through the enhanced collection and translation of data.

I Really Don’t Know Clouds At All: Data, Information, Intelligence, and Knowledge

The term “knowledge discovery in data,” or KDD for short, names an aspirational goal and so, in terms of understanding that goal, serves as a point of departure for the practice of information management and science. I take this stance because the technology industry uses terminology that, as with most language, was originally designed to accurately describe a specific phenomenon or set of methods in order to advance knowledge, only to find that terminology watered down to the point where it obfuscates the issues at hand.

As I traveled to locations across the U.S. over the last three months, I found general agreement on this state of affairs among IT professionals who are dealing with the issues of “Big Data,” data integration, and the aforementioned KDD. In almost every case there is hesitation to use this terminology because it has been co-opted and abused by mainstream literature, much as physicists rail against the misuse of the concept of relativity in non-scientific domains.

The impact of this confusion in terminology is that organizations make decisions in which the terminology is employed to describe a nebulous end-state, without the initiators having any idea of the effort or scope involved. The danger here, of course, is that for every small innovative company out there, there is also a potential Theranos (probably several). For an in-depth understanding of the psychology and double-speak that has infiltrated our industry I highly recommend the HBO documentary, “The Inventor: Out for Blood in Silicon Valley.”

The reason why semantics are important (as they always have been despite the fact that you may have had an associate complain about “only semantics”) is that they describe the world in front of us. If we cloud the meanings of words and the use of language, it undermines the basis of common understanding and reveals the (poor) quality of our thinking. As Dr. Spiller noted, the paradox of writing and in gathering knowledge is that the more you know, the more you realize you do not know, and the harder writing and communicating knowledge becomes, though we must make the effort nonetheless.

Thus KDD is oftentimes not quite the discovery of knowledge in the sense that the term was intended to mean. It is, instead, a discovery of associations that may lead us to knowledge. Knowing this distinction is important because the corollary processes of data mining, machine learning, and the early applications of AI in which we find ourselves are really processes of finding associations, correlations, trends, patterns, and probabilities in data approached as if all information were flat, thereby obliterating its context. This is not knowledge.

We can measure the information content of any set of data, but the real unlocked potential in that information content comes with the processing that leads to knowledge. To do that requires an underlying model of domain knowledge, an understanding of the different lexicons in any given set of domains, and a Rosetta Stone that maps which elements of each lexicon describe the same things across those domains. It also requires capturing and preserving context.
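
As a concrete, hedged illustration of what “measuring the information content” of a data set can mean, the short sketch below computes Shannon entropy, the average number of bits per record needed to encode a field’s values; the field and its status codes are hypothetical and stand in for any column in a project data set.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Average information content, in bits per value, of a sequence of symbols."""
    counts = Counter(values)
    total = len(values)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical status codes pulled from a project data set.
statuses = ["on-track", "on-track", "late", "on-track", "at-risk", "late", "on-track"]
print(f"Entropy: {shannon_entropy(statuses):.3f} bits per record")
```

The number tells us how much information is present, not what any of it means; extracting knowledge still requires the domain model and the cross-lexicon mapping described above.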

For example, when I use the chat on my iPhone it attempts to anticipate what I want to write. I am given three choices of words if I want to use this shortcut. In most cases the iPhone guesses wrong, despite presenting three choices and having at its disposal (at least presumptively) a larger vocabulary than the writer. Oftentimes it seems to take control, assuming that I have misspelled or misidentified a word and choosing the wrong one for me, turning my message into nonsense.

If one were to believe the hype surrounding AI, one would think that there is magic there but, as Arthur C. Clarke noted (in what is known as Clarke’s Third Law): “Any sufficiently advanced technology is indistinguishable from magic.” Familiar with the new technologies as we are, we know that there is no magic there, and that they are wrong a good deal of the time. But many individuals come to rely upon the technology nonetheless.

Despite the gloss of something new, the long-established methods of epistemology, code-breaking, statistics, and calculus apply–as do standards of establishing fact and truth. Despite a large set of data, the iPhone is wrong because it does not understand–does not possess the knowledge–to know why it is wrong. As an aside, its dictionary is also missing a good many words.

A Segue and a Conclusion–I Still Haven’t Found What I’m Looking For: Why Data Integration?…and a Proposed Definition of the Bigness of Data

As with the question to Neil Armstrong, so the question on data. And so the answer is the same. When we look at any set of data under a particular structure of a domain, the information we derive provides us with a manner of looking at the world. In economic systems, businesses, and projects that data provides us with a basis for interpretation, but oftentimes falls short of allowing us to effectively describe and understand what is happening.

Capturing interrelated data across domains allows us to look at the phenomena of these human systems from a different perspective, providing us with the opportunity to derive new knowledge. But in order to do this, we have to be open to this possibility. It also calls for us to, as I have hammered home in this blog, reset our definitions of what is being described.

For example, there are guides in project and program management that refer to statistical measures as “predictive analytics.” This further waters down the intent of the phrase. Measures of earned value are not predictive. They note trends and a single-point outcome. Absent further analysis and processing, the statistical fallacy of extrapolation can be baked into our analysis. The same applies to any index of performance.
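
To make the distinction concrete, here is a minimal, hypothetical sketch of the standard earned value indices and the common extrapolation built upon them; the figures are invented, and the formulas are the textbook CPI, SPI, and CPI-based estimate at completion rather than anything genuinely predictive.

```python
# Hypothetical cumulative-to-date earned value figures (in thousands of dollars).
BAC = 1000.0   # budget at completion
EV = 300.0     # earned value (budgeted cost of work performed)
AC = 350.0     # actual cost of work performed
PV = 320.0     # planned value (budgeted cost of work scheduled)

CPI = EV / AC  # cost performance index: cost efficiency to date
SPI = EV / PV  # schedule performance index: schedule efficiency to date

# The common "independent EAC" simply extrapolates the cumulative CPI across all
# remaining work -- a single-point projection of past performance, not a prediction
# that accounts for changing conditions, risk, or context.
EAC_cpi = AC + (BAC - EV) / CPI

print(f"CPI = {CPI:.2f}, SPI = {SPI:.2f}")
print(f"CPI-based EAC = {EAC_cpi:,.1f}")
```

The indices describe what has already happened; treating the extrapolated EAC as “predictive analytics” is precisely the extrapolation fallacy noted above.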

Furthermore, these indices and indicators–for that is all they are–do not provide knowledge, which requires a means of not only distinguishing between correlation and causation but also applying contextualization. All systems operate in a vector space. When we measure an economic or social system we are really measuring its behavior in the vector space that it inhabits. This vector space includes the way it is manifested in space-time: the equivalent of length, width, depth (that is, its relative position, significance, and size within information space), and time.

This then provides us with a hint of a definition of what often goes by the name of “big data.” As noted in previous blogs, the term was first used at NASA in 1997 by Cox and Ellsworth (not, as credited to John Mashey on Wikipedia with the dishonest qualifier “popularized”) and simply meant “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.”

This is a relative term given Moore’s Law. But we can begin to peel back a real definition of the “bigness” of data. It is important to do so because too many approaches to big data assume the data is flat and then apply probabilities and pattern recognition in ways that undermine both contextualization and knowledge. Thus…

The Bigness of Data (B) is a function (f ) of the entropy expended (S) to transform data into information, or to extract its information content.
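
Stated symbolically, with Shannon-style information content offered only as one hedged reading of the entropy term and the functional form left unspecified, exactly as in the prose definition:

```latex
B = f(S), \qquad \text{with one reading of } S \text{ tied to } \; H(X) = -\sum_{i} p_i \log_2 p_i
```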

Information evolves. It evolves toward greater complexity just as life evolves toward greater complexity. The universe is built on coded bits of information that, taken together and combined in almost unimaginable ways, provide different forms of life and matter. Our limited ability to decode and understand this information–and our interactions within it–is important to us both individually and collectively.

Much entropy is already expended in the creation of the data that describes the activity being performed. Its context is part of its information content. Obliterating the context inherent in that information content renders all of that previously expended entropy worthless. Thus, in approaching any set of data, the inherent information content must be taken into account in order to avoid unnecessary (and erroneous) interpretation.

More to follow in future posts.

Over at AITS.org — The Human Equation in Project Management

Approaches to project management have focused on the systems, procedures, and software put in place to determine progress and likely outcomes. These outcomes are usually expressed in terms of cost, schedule, and technical achievement against the project requirements and framing assumptions—the oft-cited three-legged stool of project management.  What is often missing are measures related to human behavior within the project systems environment.  In this article at AITS.org, I explore this oft ignored dimension.

You Know I’m No Good: 2016 Election Polls and Predictive Analytics

While the excitement and emotions of this past election work themselves out in the populace at large, as a writer and contributor to the use of predictive analytics, I find the discussion about “where the polls went wrong” to be of most interest.  This is an important discussion, because the most reliable polling organizations–those that have proven themselves out by being right consistently on a whole host of issues since most of the world moved to digitization and the Internet of Things in their daily lives–seemed to be dead wrong in certain of their predictions.  I say certain because the polls were not completely wrong.

For partisans who point to Brexit and polling in the U.K., I hasten to add that this is comparing apples to oranges.  The major U.S. polling organizations that use aggregation and Bayesian modeling did not poll Brexit.  In fact, there was one reliable U.K. polling organization that did note two factors:  one was that the trend in the final days was toward Brexit, and the other was that the final result depended on turnout, where greater turnout favored the “stay” vote.

But aside from these general details, this issue is of interest in project management because, unlike national and state polling, where there are sufficient numbers to support significance, at the micro-microeconomic level of project management we deal with very small datasets that expand the range of probable results.  This point is not insignificant, and it has been made time and again over the years, particularly regarding single-point estimates using limited time-phased data absent a general model that provides insight into the likeliest results.  This last point is important.

So let’s look at the national polls on the eve of the election according to RealClear.  IBD/TIPP Tracking had it Trump +2 at +/-3.1% in a four way race.  LA Times/USC had it Trump +3 at the 95% confidence interval, which essentially means tied.  Bloomberg had Clinton +3, CBS had Clinton +4, Fox had Clinton +4, Reuters/Ipsos had Clinton +3, ABC/WashPost, Monmouth, Economist/YouGov, Rasmussen, and NBC/SM had Clinton +2 to +6.  The margin for error for almost all of these polls varies from +/-3% to +/-4%.
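
For readers wondering where a figure like +/-3.1% comes from, here is a minimal sketch assuming simple random sampling and the usual 95% normal approximation; the sample size is hypothetical, and real polls carry additional design effects from weighting and likely-voter screens.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """95% margin of error for a simple random sample at the given proportion."""
    return z * math.sqrt(proportion * (1.0 - proportion) / sample_size)

# A hypothetical national poll of roughly 1,000 respondents.
n = 1000
print(f"Margin of error: +/-{100 * margin_of_error(n):.1f} points")  # about +/-3.1
```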

As of this writing Clinton sits at about +1.8% nationally; the votes are still coming in and continue to confirm her popular vote lead, currently standing at about 300,000 votes.  Of the polls cited, Rasmussen was the closest to the final result.  Virtually every other poll, however, except IBD/TIPP, was within the margin of error.

The polls that were off in predicting the election were those that aggregated polls along with state polls, adjusted polling based on non-direct polling indicators, and/or projected the chances of winning based on the probable electoral vote totals.  This is where things went wrong.

Among the most popular of these sites is Nate Silver’s FiveThirtyEight blog.  Silver established his bona fides in 2008 by picking winners with incredible accuracy, particularly at the state level, and subsequently in his work at the New York Times, which continued to prove the efficacy of data in predictive analytics in everything from elections to sports.  Since that time his considerable reputation has only grown.

What Silver does is determine the probability of an electoral outcome by using poll results that are transparent in their methodologies and that have a high level of confidence.  Silver’s was the most conservative of these types of polling organizations.  On the eve of the election Silver gave Clinton a 71% chance of winning the presidency. The other organizations that use poll aggregation, poll normalization, or other adjusting indicators (such as betting odds, financial market indicators, and political science indicators) include the New York Times Upshot (Clinton 85%), HuffPost (Clinton 98%), PredictWise (Clinton 89%), Princeton (Clinton >99%), DailyKos (Clinton 92%), Cook (Lean Clinton), Roth (Lean Clinton), and Sabato (Lean Clinton).

To understand what probability means in this context: these models combined bottom-up state polling, to track the electoral college, with national popular vote polling.  But keep in mind that, as Nate Silver wrote over the course of the election, just a 17% chance of winning “is the same as your chances of losing a ‘game’ of Russian roulette”–roughly one chamber in six.  Few of us would take that bet, particularly since the result of losing the game is final.

Still, none of the other methods using probability got it right; only FiveThirtyEight left enough room for drawing the wrong chamber.  In fairness, the Cook, Rothenberg, and Sabato projections also left enough room to see a Trump win if the state dominoes fell right.

Where the models failed was in the states of Florida, North Carolina, Pennsylvania, Michigan, and Wisconsin.  In particular, even with Florida (result Trump +1.3%) and North Carolina (result Trump +3.8%), Trump would not win unless Pennsylvania (result Trump +1.2%), Michigan (result Trump +0.3%), and Wisconsin (result Trump +1.0%)–supposed Clinton firewall states–were breached.  So what happened?

Among the possible factors are the effect of FBI Director Comey’s public intervention, which came too close to the election to register in the polling; ineffective polling methods in rural areas (garbage in, garbage out); poor state polling quality; voter suppression, purging, and restrictions (among the battleground states this includes Florida, North Carolina, Wisconsin, Ohio, and Iowa); voter turnout and enthusiasm (apart from the factors of voter suppression); and the inability to peg how the high number of undecided voters would break at the last minute.

In hindsight, the national polls were good predictors.  The sufficiency of the data in drawing significance, and the high level of confidence in their predictive power, are borne out by the final national vote totals.

I think the electoral college projections failed because of an inability to take into account non-statistical factors and selection bias, and because the state poll models probably did not accurately reflect the electorate in those states, given the lessons from the primaries.  Along these lines, I believe that if pollsters look at the demographics in the respective primaries, they will find that both voter enthusiasm and composition provide the corrective to their projections. Given these factors, the aggregators and probabilistic models should all have called the race too close to call.  I think both Monte Carlo and Bayesian simulations will bear this out.
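
As a hedged illustration of that last point, and not a reconstruction of any actual 2016 model, the Monte Carlo sketch below shows how a shared, correlated polling miss makes the simultaneous collapse of several “firewall” states substantially more likely than if each state’s polling error were independent; the margins and error sizes are invented for the example.

```python
import random

# Hypothetical final polling margins (Clinton minus Trump, in points) for three
# notionally safe "firewall" states.
margins = {"PA": 2.0, "MI": 3.5, "WI": 5.0}

def firewall_falls(trials=200_000, shared_sd=3.0, state_sd=2.0):
    """Fraction of trials in which every firewall state flips at once."""
    falls = 0
    for _ in range(trials):
        shared = random.gauss(0.0, shared_sd)  # systematic, correlated polling miss
        if all(m + shared + random.gauss(0.0, state_sd) < 0 for m in margins.values()):
            falls += 1
    return falls / trials

# Correlated errors versus (nearly) independent errors of similar total size.
print(f"Correlated polling error:  {firewall_falls():.2%}")
print(f"Independent polling error: {firewall_falls(shared_sd=0.0, state_sd=3.6):.2%}")
```

Treating the state errors as independent makes a clean sweep of the firewall look like a freak event; allowing a common systematic miss, which is what selection bias and a skewed likely-voter model produce, raises that probability considerably. A Bayesian treatment would carry the same lesson through the posterior rather than through repeated sampling.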

For example, I will put on my political science hat for a moment, as one who also holds a degree in that field.  It is a basic tenet that negative campaigns depress voter participation.  This causes voters to select the lesser of two evils (or lesser of two weevils).  Voter participation was down significantly due to an unprecedentedly negative campaign.  When this occurs, the most motivated base will usually determine the winner in an election.  This is why midterm elections are so volatile, particularly after a presidential win that causes a rebound of the opposition party.  Whether this trend continues with the reintroduction of gerrymandering is yet to be seen.

What all this points to from a data analytics perspective is that one must have a model to explain what is happening.  Statistics by themselves, while correct a good bit of the time, will cause one to be overconfident of a result based solely on the numbers, and simulations give a false impression of solidity, particularly in a volatile environment.  This is known as reification, and it is a fallacious way of thinking.  Combined with selection bias and the absence of a reasonable narrative model–one that introduces the social interactions necessary to understand the behavior of complex adaptive systems–it will often produce invalid results.

Three’s a Crowd — The Nash Equilibrium, Computer Science, and Economics (and what it means for Project Management theory)

Over the last couple of weeks my reading picked up an interesting article via Brad DeLong’s blog, which picked it up from Larry Hardesty at MIT News.  First, a little background devoted to defining terms.  The Nash Equilibrium is a concept in Game Theory for measuring how and why people make choices in social networks.  As defined in this Columbia University paper:

A game (in strategic or normal form) consists of the following three elements: a set of players, a set of actions (or pure-strategies) available to each player, and a payoff (or utility) function for each player. The payoff functions represent each player’s preferences over action profiles, where an action profile is simply a list of actions, one for each player. A pure-strategy Nash equilibrium is an action profile with the property that no single player can obtain a higher payoff by deviating unilaterally from this profile.
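
A minimal sketch of that definition in code, assuming a two-player game given as a pair of payoff matrices; the Prisoner’s Dilemma payoffs below are the standard textbook ones and serve only as an example. An action profile is a pure-strategy Nash equilibrium when neither player can obtain a higher payoff by deviating unilaterally.

```python
from itertools import product

def pure_nash_equilibria(payoff_row, payoff_col):
    """Return all action profiles (i, j) where neither player gains by deviating."""
    rows, cols = len(payoff_row), len(payoff_row[0])
    equilibria = []
    for i, j in product(range(rows), range(cols)):
        row_best = all(payoff_row[i][j] >= payoff_row[k][j] for k in range(rows))
        col_best = all(payoff_col[i][j] >= payoff_col[i][k] for k in range(cols))
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: action 0 = cooperate, action 1 = defect.
row_payoffs = [[3, 0],
               [5, 1]]
col_payoffs = [[3, 5],
               [0, 1]]
print(pure_nash_equilibria(row_payoffs, col_payoffs))  # [(1, 1)] -- mutual defection
```

Brute force over all action profiles is trivial here; the point of the passage that follows is that nothing this simple survives once the number of players and strategies grows.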

John von Neumann developed Game Theory to measure, in a mathematical model, the dynamics of conflict and cooperation between intelligent rational decision-makers in a system.  All social systems can be measured by the application of Game Theory models.  But as with all mathematical modeling, there are limitations to what can be determined.  Unlike science, mathematics can only measure and model what we observe, but it can provide insights that would otherwise go unnoticed.  As such, von Neumann’s work (along with that of Oskar Morgenstern and Leonid Kantorovich) in this area has become the cornerstone of mathematical economics.

When dealing with two players in a game, a number of models have been developed to explain the behavior that is observed.  Most familiar to us, for example, are zero-sum games and tit-for-tat games.  Many of us in business, diplomacy, the military profession, and old-fashioned office politics have come upon such strategies in day-to-day life.  The article from MIT News describes the latest work of Constantinos Daskalakis, an assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory:

In the real world, competitors in a market or drivers on a highway don’t (usually) calculate the Nash equilibria for their particular games and then adopt the resulting strategies. Rather, they tend to calculate the strategies that will maximize their own outcomes given the current state of play. But if one player shifts strategies, the other players will shift strategies in response, which will drive the first player to shift strategies again, and so on. This kind of feedback will eventually converge toward equilibrium:…The argument has some empirical support. Approximations of the Nash equilibrium for two-player poker have been calculated, and professional poker players tend to adhere to them — particularly if they’ve read any of the many books or articles on game theory’s implications for poker.

Anyone who has engaged in two-player games, from card games to chess, can intuitively understand this insight.  But in modeling behavior, when a third player is added to the mix, the mathematics of describing market or system behavior becomes “intractable.”  That is, all of the computing power in the world cannot calculate the Nash equilibrium.

Part of this issue is the age-old paradox, put in plain language, that everything that was hard to do for the first time in the past is easy to do and verify today.  This includes everything from flying aircraft to dealing with quantum physics.  In computing and modeling, the observation is that many problems that are hard to solve require far fewer resources to verify once a candidate solution is in hand; whether every problem whose solution can be quickly verified can also be quickly solved is the famous P=NP question.
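
A hedged toy illustration of that solve-versus-verify asymmetry, using subset sum, a classic NP-complete problem; the numbers are invented. Verifying a proposed certificate takes a handful of additions, while the brute-force search below may examine every subset.

```python
from itertools import combinations

numbers = [3, 34, 4, 12, 5, 2]
target = 9

def verify(candidate, target):
    """Polynomial-time check: does this particular subset hit the target?"""
    return sum(candidate) == target

def solve(numbers, target):
    """Exponential brute force: try every subset until one verifies."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(subset, target):
                return subset
    return None

print(solve(numbers, target))     # (4, 5) -- found by searching through subsets
print(verify((3, 4, 2), target))  # checking a given certificate is immediate: True
```

The caveat is the whole question: the brute force here happens to be exponential, but that does not prove no clever polynomial algorithm exists, and whether one must exist is exactly what P versus NP asks.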

We deal with this solve-versus-verify asymmetry all the time when developing software applications and dealing with ever larger sets of data.  For example, I attended a meeting recently where a major concern among the audience was the question of scalability, especially in dealing with large sets of data.  In the past, “scalability” to the software publisher simply meant the ability of the application to be used by a large set of users via some form of distributed processing (client-server, shared services, desktop virtualization, or a browser-based deployment).  But with the introduction of KDD (knowledge discovery in databases), scalability now also addresses the ability of technologies to derive importance from the data itself, outside the confines of a hard-coded application.

The search for optimal polynomial algorithms to reduce the running time of time-intensive problems forces the developer to find the solution (the proof of NP-completeness) in advance and then work toward the middle in developing the appropriate algorithm.  This should not be a surprise.  In breaking Enigma during World War II, Bletchley Park first identified regularities in the messages that the German high command was sending out.  This then allowed them to work backwards and forwards in calculating how the encryption could be broken.  The same applies to any set of mundane data, regardless of size, which, unlike Enigma traffic, is not trying to avoid being deciphered.  While we may be faced with a Repository of Babel, it is one that badly wants to be understood.

While intuitively the Nash equilibrium does exist, its mathematically intractable character has demanded that new languages and approaches to solving it be developed.  In the case of Daskalakis, he has proposed three routes.  These are:

  1. “One is to say, we know that there exist games that are hard, but maybe most of them are not hard.  In that case you can seek to identify classes of games that are easy, that are tractable.”
  2. Find mathematical models other than Nash equilibria to characterize markets — “models that describe transition states on the way to equilibrium, for example, or other types of equilibria that aren’t so hard to calculate.”
  3. Approximation of the Nash equilibrium: exact equilibria may be hard to calculate, but approximate equilibria, “where the players’ strategies are almost the best responses to their opponents’ strategies,” might not be. “In those cases, the approximate equilibrium could turn out to describe the behavior of real-world systems.”

This is the basic engineering approach to any complex problem (and a familiar approach to anyone schooled in project management):  break the system down into smaller pieces to solve.

So what does all of this mean for the discipline of project management?  In modeling complex systems behavior for predictive purposes, our approach must correspondingly break down the elements of systems behavior into their constituent parts, but then integrate them in such a way as to derive significance.  The key to this lies in the availability of data and our ability to process it using methods that go beyond trending data for individual variables.

Second Foundation — More on a General Theory of Project Management

In ending my last post on developing a general theory of project management, I introduced the concept of complex adaptive systems (CAS) and posited that projects and their ecosystems fall into this specific category of systems theory.  I also posited that it is through the tools of CAS that we will gain insight into the behavior of projects.  The purpose is not only to identify commonalities in these systems across economic market verticals frequently asserted to be irreconcilable, but to identify regularities and the proper mathematics for determining the behavior of these systems.

A brief overview of some of the literature is in order so that we can define our terms, since CAS is a Protean term that has evolved with its application.  Aside from the essential work at the Santa Fe Institute, some of which I linked in my last post on the topic, I would first draw your attention to an overview of CAS by Serena Chan at MIT.  Ms. Chan wrote her paper in 2001, and so her perspective in one important way has proven to be limited, which I will address shortly.  Ms. Chan correctly defines complexity, and I will leave it to the reader to go to the link above to read the paper.  The meat of her paper is her definition of CAS through its characteristics.  These are: distributed control, connectivity, co-evolution, sensitive dependence on initial conditions, emergence, distance from equilibrium, and existence in a state of paradox.  She then posits some tools that may be useful in studying the behavior of CAS and concludes with an odd section on the application of CAS to engineering systems, positing that engineering systems cannot be CAS because they are centrally controlled and hence do not exhibit emergence (non-preprogrammed behavior).  She interestingly uses the example of the internet as her proof.  In the year 2015, I don’t think one can seriously make this claim.  Even in 2001 such an assertion would have been specious, for it had been ten years since the passage of the High Performance Computing and Communication Act of 1991 (also called the Gore Bill), which commercialized ARPANET.  (Yes, he really did have a major hand in “inventing” the internet as we know it.)  It was also eight years since the introduction of Mosaic.  Thus, the internet, like many engineering systems requiring collaboration and human interaction, falls under the rubric of CAS as defined by Ms. Chan.

The independent consultant Peter Fryer, at his Trojan Mice blog, adds a slightly different spin to identifying CAS.  He asserts that CAS properties are emergence, co-evolution, sub-optimality, requisite variety, connectivity, simple rules, iteration, self-organization, edge of chaos, and nested systems.  My only quibble with many of these stated characteristics is that they seem slightly overlapping and redundant, splitting hairs without adding to our understanding.  They also tend to be covered by the larger definitions of systems theory and complexity.  Perhaps it is worth retaining them within CAS because they provide specific avenues by which to study these types of systems.  We’ll explore this in future posts.

An extremely useful book on CAS is by John H. Miller and Scott E. Page under the rubric of the Princeton Studies in Complexity entitled Complex Adaptive Systems: An Introduction to Computational Models of Social Life.  I strongly recommend it.  In the book Miller and Page explore the concepts of emergence, self-organized criticality, automata, networks, diversity, adaptation, and feedback in CAS.  They also recommend mathematical models to study and assess the behavior of CAS.  In future posts I will address the limitations of mathematics and its inability to contribute to learning, as opposed to providing logical proofs of observed behavior.  Needless to say, this critique will also discuss the further limitations of statistics.

Still, given these stated characteristics, can we state categorically that a project organization is a complex adaptive system?  After all, people attempt to control the environment, there are control systems in place, oftentimes work and organizations are organized according to the expenditure of resources, there is a great deal of planning, and feedback occurs on a regular basis.  Is there really emergence and diversity in this kind of environment?  I think so.  The reason I think so is the one obvious factor that persists despite the best efforts at control by what is, in reality, a collection of multiple agents: the presence of risk.  We think we have control of our projects, but in reality we can only exert so much control. Oftentimes we move the goalposts to define success.  This is not necessarily a form of cheating, though sometimes it can be viewed in that context.  The goalposts change because in human CAS we deal with the concept of recursion and its effects.  Risk and recursion are sufficient to land project efforts clearly within the category of CAS.  That projects fall within the definition of CAS follows below.

A clear and comprehensive definition is found in an extremely useful paper on CAS from a practical standpoint, published in 2011 by Keith L. Green of the Institute for Defense Analyses (IDA) and entitled Complex Adaptive Systems in Military Analysis.  Borrowing from A. S. Elgazzar, of the mathematics departments of El-Arish, Egypt and Al-Jouf King Saud University in the Kingdom of Saudi Arabia, and A. S. Hegazi of the Mathematics Department, Faculty of Science at Mansoura, Egypt–both of whom have contributed a great deal of work on the study of biological immune systems as complex adaptive systems–Mr. Green states:

A complex adaptive system consists of inhomogeneous, interacting adaptive agents.  Adaptive means capable of learning.  In this instance, the ability to learn does not necessarily imply awareness on the part of the learner; only that the system has memory that affects its behavior in the environment.  In addition to this abstract definition, complex adaptive systems are recognized by their unusual properties, and these properties are part of their signature.  Complex adaptive systems all exhibit non-linear, unpredictable, emergent behavior.  They are self-organizing in that their global structures arise from interactions among their constituent elements, often referred to as agents.  An agent is a discrete entity that behaves in a given manner within its environment.  In most models or analytical treatments, agents are limited to a simple set of rules that guide their responses to the environment.  Agents may also have memory or be capable of transitioning among many possible internal states as a consequence of their previous interactions with other agents and their environment.  The agents of the human brain, or of any brain in fact, are called neurons, for example.  Rather than being centrally controlled, control over the coherent structure is distributed as an emergent property of the interacting agents.  Collectively, the relationships among agents and their current states represent the state of the entire complex adaptive system.

No doubt, this definition can be viewed as having a specific biological bias.  But when applied to the artifacts and structures of more complex biological agents–in our case, people–we can clearly see that the tools we use must be broader than those focused on a specific subsystem that possesses the attributes of CAS.  It calls for an interdisciplinary approach that utilizes not only mathematics, statistics, and networks, but also insights from the physical and computational sciences, economics, evolutionary biology, neuroscience, and psychology.  In understanding the artifacts of human endeavor we must be able to overcome recursion in our observations.  It is relatively easy for an entomologist to understand the structures of ant and termite colonies–and the insights they provide into social insects.  It has been harder, particularly in economics and sociology, for the scientific method to be applied in a similarly detached and rigorous manner.  One need only look to Spencer’s Social Statics and Murray and Herrnstein’s The Bell Curve as but two perverse examples where selection bias, ideology, class bias, and racism have colored such attempts regarding more significant issues.

It is my intent to avoid bias by focusing on the specific workings of what we call project systems.  My next posts on the topic will focus on each of the signatures of CAS and the elements of project systems that fall within them.

Talking (Project Systems) Blues: A Foundation for a General Theory

Like those of you who observe the upcoming Thanksgiving holiday, I find myself suddenly in a state of non-motion and, as a result, with feet firmly on the ground, able to write a post.  This is a preface to pointing out that the last couple of weeks have been both busy and productive in a positive way.

Among the events of the last two weeks was the meeting of project management professionals focused on the discipline of aerospace and defense at the Integrated Program Management Workshop.  This vertical, unlike other areas of project management, is characterized by a highly structured approach that involves a great deal of standardization.  Most often, people involved in this vertical engage in an environment where the public sector plays a strong role in defining the conditions under which the market operates.  Furthermore, the major suppliers tend to be limited in number, and so both oligopolistic and monopolistic competition define the market space.

Within this larger framework, however, is a set of mid-level and small firms engaged in intense competition to provide both supplies and services to the limited set of large suppliers.  As such, they operate within the general framework of the larger environment defined by public sector procedures, laws, and systems, but within those constraints act with a great deal of freedom, especially in acting as a conduit to commercial and innovative developments from the private sector.

Furthermore, since many technologies originate within the public sector (the internet and microchips, among other examples since the middle of the 20th century), the layer of major suppliers and mid-level to small businesses also acts as a conduit for introducing such technologies to the larger private sector.  Thus, the relationship is a mutually reinforcing one.

Given the nature of this vertical and its various actors, I’ve come upon the common refrain that it is unique in its characteristics and, as such, acts as a poor analogue for other project management systems.  Dave Gordon, for example, a well-respected expert in IT projects, has in commenting on previous posts expressed some skepticism toward my suggestion that there may be commonalities across the project management discipline regardless of vertical or end-item development.  I have promised a response and a dialogue and, given recent discussions, I think I have a path forward.

I would argue, instead, that the nature of the aerospace and defense (A&D) vertical provides a perfect control for determining the strength of commonalities.  My contention is that larger and less structured economic verticals, because they lack the same ability to control the market environment and its mechanisms, present barriers to identifying possible commonalities due to their largely chaotic condition.  Thus, unlike in other social sciences, we are not left with real-time experimentation absent a control group.  The non-A&D and A&D verticals can each serve as a control for the other, given enough precision in identifying the characteristics being measured.

But we need a basis, a framework, for identifying commonalities.  Our answers will be found in systems theory.  This is not a unique or new observation, but for the purpose of outlining our structure it is useful to state the basis of the approach.  For those of you playing along at home, the seminal works in this area are Norbert Wiener’s Cybernetics: or, Control and Communication in the Animal and the Machine (1948) and Ludwig von Bertalanffy’s General System Theory (1968).

But we must go beyond basic systems theory in its formative stage.  Projects are a particular type of system, a complex system.  We must then go one step further, because they are human systems that, both individually in their parts and in aggregate, display learning.  As such they are complex adaptive systems, or CAS.  They exist in a deterministic universe, as all CAS do, but are non-deterministic within the general boundaries of that larger physical world.

The main thought leaders of CAS are John H. Holland, as in this 1992 paper in Daedalus, and Murray Gell-Mann with his work at the Santa Fe Institute.  The literature is extensive and this is just the start, including the work of Kristo Ivanov and the concepts coming out of his Hypersystems: A Base for Specification of Computer-Supported Self-Learning Social Systems.

It is upon this basis, especially the manner in which the behavior of CAS can be traced and predicted, that we will be able to establish the foundation of a general theory of project management systems.  I’ll be vetting ideas over the coming weeks regarding this approach, with some suggestions on real-world applicability and methodologies across project domains.

Over at AITS.org — Red Queen Race: Why Fast Tracking a Project is Not in Your Control

“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else—if you run very fast for a long time, as we’ve been doing.”

“A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” – Through the Looking-Glass, and What Alice Found There, Chapter 2, Lewis Carroll

There have been a number of high profile examples in the news over the last two years concerning project management.  For example, the initial rollout of the Affordable Care Act marketplace web portal was one of these, and the causes for its faults are still being discussed. As I write this, an article in the New York Times indicates that the fast track efforts to create an Ebola vaccine are faltering…

To read the remainder of this post please go to this link.

I Get By With A Little Help… — Avoiding NIH in Project Management

…from my colleagues, friends, family, associates, advisors, mentors, subcontractors, consultants, employees.  And not necessarily in that order.

The term NIH in this context is not referring to the federal agency.  It is shorthand, instead, for “Not Invented Here”.  I was reminded of this particular mindset when driving through an old neighborhood where I served as a community organizer.  At one of the meetings of a local board, which was particularly dysfunctional (and where I was attempting to reform their dysfunction), a member remarked:  “I am tired of hearing about how this or that particular issue was handled somewhere else.”  Yes, I thought, why would we possibly want to know how Portland, or D.C., or Boston, or Denver, or Phoenix–or any of the number of other places faced with the same issue–effectively or ineffectively dealt with it before us?  What could they possibly teach us?

When we deal with a project management organization, we are dealing with a learning system.  Hopefully an effectively adaptive system.  The qualifier here is important.  The danger with any tightknit group is to fall into the dual traps of Groupthink and NIH.  The first deals with the behavior relating to conformity within small groups based on the observations and study by William H. Whyte and his successors.  The second is the mindset that the issues faced by the group are unique to it; and so the use of models, tools, experience, and proven statistical and leading indicators do not apply.

A project management organization (or any complex human organization) is one that adapts to pressures from its environment.  It is one with the ability to learn, since it is made up of entities with the ability to create and utilize intelligence and information, and so it is distinct from biological systems that adapt over time through sexual and natural selection.  Here is also an important point:  while biological evolution occurs over long spans of time, we don’t see the dead ends and failures of adaptation until the story is written–at least, not outside the microbiological field, where the evolution of viruses and bacteria occurs rapidly.  So for large animals and major species it appears to be a Panglossian world, which it definitely is not.

When we carry Panglossian thinking into the analogies that we find in social and other complex adaptive systems, the fallacies in our thinking can be disastrous and cause great unnecessary suffering.  I am reminded here of the misuse of the concept of self-organization in complex systems and of the term “market” in economics.  Organizations and social structures can “self-organize” not only into equilibrium but also into spirals of failure and death.  Extremely large and complex organizations like nation-states and societies are replete with such examples: from Revolutionary France to Czarist Russia, to recent examples in Africa and the Near East.  In economics, “the market” determines price.  The inability of the market to self-regulate–and the nature of self-organization–resulted in the bursting of the housing bubble in the first decade of this century, precipitating a financial crisis.  This is the most immediate example of a systemic death spiral of global proportions, one that was resolved (finally) only with a great deal of intervention by rational actors.

So when I state: hopefully an effective adaptive system, I mean one that does not adapt itself by small steps into unnecessary failure or wasted effort.  (As our business, financial, economic, and political leaders did in the early 2000s).  Along these same lines, when we speak of fitness in surviving challenges (or fitness in biological evolution), we do not imply the “best” of something.  Fitness is simply a convenient term to describe the survivors after all of the bodies have been counted.  In nature one can survive due to plain luck, through capabilities or characteristics of inheritance fit to the environmental space, through favorable chance isolation or local conditions–the list is extensive.  Many of these same factors also apply to social and complex adaptive systems, but on a shorter timescale with a higher degree of traceable proximate cause-and-effect, depending on the size and scale of the organization.

In project management systems, while it is important to establish the closed-loop systems necessary to gain feedback from the environment to determine whether the organization is effectively navigating itself to achieve its goals against a plan, it is also necessary to have those systems in place that allow for leveraging both organizational and competency knowledge, as well as third-party solutions.  That is, broadening the base in applying intelligence.

This includes not only education, training, mentoring, and the retention and use of a diversified mix of experienced knowledge workers, but also borrowing solutions outside of the organization.  It means being open to all of the tools available in avoiding NIH.  Chances are, though the time, place, and local circumstances may be different, someone has faced something very similar before somewhere else.  With the availability of information becoming so ubiquitous, there is very little excuse for restricting our sources.

Given this new situation, our systems must now possess the ability to apply qualitative selection criteria in identifying analogous information, tempered with judgment in identifying the differences in the situations where they exist.  And given that most systems–including systems of failure–organize themselves into types and circumstances that can be generalized into assumptions, we should be able to leverage both the differences and similarities in developing a shortcut that doesn’t require all of the previous steps to be repeated (with a high likelihood of repeating failure).

In closing, I think it important to note that failure here is defined as the inability of the organization to come up with an effective solution to the problem at hand, where one is possible.  I am not referring to failure as the inability to achieve an intermediate goal.  In engineering and other fields of learning, including business, failure is oftentimes a necessary part of the process, especially when pushing technologies in which previous examples and experience cannot apply to the result.  The lessons learned from a failed test in this situation, for example, can be extremely valuable.  But a failed test that resulted from the unwillingness of the individuals in the group to consider similar experience or results due to NIH is inexcusable.

Just Dropped In (To See What System My System Was In)

There are all kinds of systems.  The American economy is a type of system, the natural ecology of an area is a type of system, the weather is a system, and the organization that we call a project or program is a type of system.  Given our penchant as a species for classification there are, among this list, some major groupings that can be discerned.

First among these is whether the system we are observing is generally a natural one.  Social scientists in general, and economists and philosophers in particular, make this error all the time, and the reasons for it become clear when we discuss them.  Ecological systems are natural systems absent human interaction.  The weather as I have described it above is a natural system, though weather is influenced by the heat effect of cities, among other identified phenomena.  The stock market is not a natural system.  The behavior of individuals in the socio-economic system is not a natural system.  There may be certain behaviors among humans that are common or predictive given particular stresses, rewards, and stimuli, and these are part of a natural system.  But behavior in a particular social system is separate from an accurate description of human behavior in general.

Second, all of the systems I have named are either simple or complex.  What we mean by complexity is determined by observation in answering a “how” question.  In this case the amount of complexity is determined by the amount of information necessary to describe the system.

For example, algorithmic complexity–the amount of information needed to describe or reproduce the system’s behavior–is one way to identify complexity.  The scale being described also determines what we mean by complexity in any particular case.  Thus, when I listed “weather” above, what exactly did I mean?  Hurricanes (very important to understand in Florida), tornadoes (equally important for most of the United States), dust devils, rain patterns, or some other phenomenon of weather?
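
One hedged, back-of-the-envelope way to operationalize “the amount of information necessary to describe the system” is to compare how well recorded observations compress; compressed size is only a crude stand-in for algorithmic (Kolmogorov) complexity, which cannot be computed exactly, and the two observation streams below are invented.

```python
import random
import zlib

random.seed(42)
periodic = ("AB" * 500).encode()                           # strictly repeating pattern
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # structureless noise

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed description: a crude proxy for description length."""
    return len(zlib.compress(data, 9))

print(compressed_size(periodic))  # small: the pattern admits a very short description
print(compressed_size(noisy))     # roughly the original size: no structure to exploit
```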

As a career sailor, I found that the controlling equations and models used to predict wave height and frequency were those of linear wave theory.  Thus, a seemingly complex system could actually be explained by a model that simplified the individual processes involved.  These equations work quite well in most situations, especially when wave frequency and height are wind driven and influenced by water depth.
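
For reference, the central result of linear (Airy) wave theory is the dispersion relation, which ties wave frequency to wavelength and water depth and is why depth and wind-driven frequency dominate in the ordinary cases described here:

```latex
\omega^{2} = g\,k\,\tanh(kh), \qquad c = \frac{\omega}{k} = \sqrt{\frac{g}{k}\,\tanh(kh)}
```

Here omega is the angular frequency, k the wavenumber, h the water depth, g the gravitational acceleration, and c the phase speed; in deep water tanh(kh) approaches 1, while in shallow water the phase speed reduces to the familiar sqrt(gh).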

But then there are exceptions, and I experienced one personally.  In 1984 my shipmates and I encountered a giant wave in the middle of the Pacific Ocean.   At one time the U.S. Navy and other ocean-going services would discount such reports as simply exaggerated sailors’ stories.  Certainly for my shipmates and me, the 60-degree roll, the damaged equipment, and the twisted keel were evidence of more than exaggeration.  One of the larger ships in our task force suffered twisted decks and had to head back to port.  The widespread use of satellite observation has since shown that these waves occur all the time.  How do we account for such phenomena?

The answer lies in non-linear equations and an understanding of the systems in play.  When we talk of “waves,” we are really being non-specific as to whether they are complex or simple systems.  Language oftentimes fails us.  For example, how do we account for shallow-water wave patterns and the formation of barrier islands, inlets, estuaries, and the like?  Complex systems are involved, requiring an understanding of several independent actors to explain the resulting pattern.  The same applies to the occasional appearance of a giant wave.  A tsunami, a different kind of giant wave, such as those caused by the 2004 Indian Ocean and 2011 Japanese Tohoku events, is largely predictable up until it reaches the shoreline.  Then special equations must be used to determine the height, force, and extent of land inundation.

Thus the number and interdependence of the parts influencing the system come into play in determining complexity.  Certainly the central limit theorem influences this determination.  The number of parts needs to be sufficiently high to overcome central control that can be explained by a single, overriding and controlling event or actor.  This is why the equation for a tsunami is simple and the tsunami itself is not a complex system until it reaches the shoreline, but its effect on the shoreline and the predictive behavior of the wave on land then fall into the category of a complex system.  In our projects, if there are few people, limited outside influences, and one single controlling force or personality, such as a manager or corporate chain of command, then the test of complexity fails and we have a simple system that can be monitored, since the complexity of the system is limited by the complexity of the controlling factor.

So ecological systems are complex, weather systems are complex, the socioeconomic system is complex, but only some projects are complex.

Finally, does the system evolve?  That is, does individual and collective behavior mutate and adapt based on some small event or collection of events?  For social systems this would include not only crowd behavior or individual instinctual responses, though they are also important, but also an indication of learning.

As part of a social system, complex adaptive systems include what we call projects (or any human organization) but not all projects are complex adaptive systems.  Some are clearly so, particularly in aerospace and defense, and space projects.  The conclusions in terms of the types of assessment systems and modeling that we apply become clearer given this insight.  For public policy, earned value management as a method of determining progress may be sufficient for simple projects.  More complex projects require other, more integrated, methods and models.