In ending my last post on developing a general theory of project management, I introduced the concept of complex adaptive systems (CAS) and posited that projects and their ecosystems fall into this specific category of systems theory. I also posited that it is through the tools of CAS that we will gain insight into the behavior of projects. The purpose is not only to identify commonalities in these systems across economic market verticals that are frequently asserted to be irreconcilable, but also to identify regularities and the proper mathematics for determining the behavior of these systems.
A brief overview of some of the literature is in order so that we can define our terms, since CAS is a protean term that has evolved with its application. Aside from the essential work at the Santa Fe Institute, some of which I linked in my last post on the topic, I would first draw your attention to an overview of CAS by Serena Chan at MIT. Ms. Chan wrote her paper in 2001, and so her perspective in one important way has proven to be limited, which I will shortly address. Ms. Chan correctly defines complexity, and I will leave it to the reader to go to the link above to read the paper. The meat of her paper is her definition of CAS through its characteristics: distributed control, connectivity, co-evolution, sensitive dependence on initial conditions, emergence, distance from equilibrium, and existence in a state of paradox. She then posits some tools that may be useful in studying the behavior of CAS and concludes with an odd section on the application of CAS to engineering systems, positing that engineering systems cannot be CAS because they are centrally controlled and hence do not exhibit emergence (non-preprogrammed behavior). Interestingly, she uses the example of the internet as her proof. In the year 2015, I don't think one can seriously make this claim. Even in 2001 such an assertion would have been specious, for it had been ten years since the passage of the High Performance Computing and Communication Act of 1991 (also called the Gore Bill), which commercialized ARPANET. (Yes, he really did have a major hand in "inventing" the internet as we know it.) It was also eight years since the introduction of Mosaic. Thus the internet, like many engineering systems requiring collaboration and human interaction, falls under the rubric of CAS as defined by Ms. Chan.
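One of the characteristics in that list, sensitive dependence on initial conditions, is easy to demonstrate numerically. The sketch below is my own illustration, not an example from Ms. Chan's paper: it uses the logistic map, a standard one-line model from chaos theory, and runs two trajectories whose starting points differ by one part in ten million.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4).
def logistic_trajectory(x0, r=4.0, steps=60):
    """Return the list of states visited, starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-7)  # perturbed by one part in ten million

# The tiny initial difference is amplified until the two runs decorrelate.
divergence = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
print(f"max divergence after step 30: {divergence:.3f}")
```

The rule is fully deterministic, yet within a few dozen iterations the two runs become effectively unrelated. This is one reason point forecasts in systems with this property have short horizons, a theme I will return to when discussing project estimates.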
The independent consultant Peter Fryer at his Trojan Mice blog adds a slightly different spin to identifying CAS. He asserts that the properties of CAS are emergence, co-evolution, suboptimality, requisite variety, connectivity, simple rules, iteration, self-organization, the edge of chaos, and nested systems. My only quibble with many of these stated characteristics is that they seem to overlap and be somewhat redundant, splitting hairs without adding to our understanding. They also tend to be covered by the larger definitions of systems theory and complexity. Perhaps it is worth retaining them as distinct characteristics within CAS because they provide specific avenues along which to study these types of systems. We'll explore this in future posts.
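Two of Fryer's properties, simple rules and iteration, can be made concrete with an elementary cellular automaton. The sketch below is my own illustration, not Fryer's: it implements Rule 110, in which each cell consults only itself and its two neighbors against a fixed eight-entry table, yet the iterated system produces intricate, non-repeating structure.

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbors, per a fixed lookup table -- a simple rule iterated in time.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    """Apply Rule 110 once, with wraparound at the edges."""
    n = len(row)
    return [RULE_110[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and iterate.
width, generations = 64, 20
row = [0] * width
row[width - 2] = 1
history = [row]
for _ in range(generations):
    history.append(step(history[-1]))

for r in history:
    print("".join("#" if c else "." for c in r))
```

A handful of table entries, applied repeatedly, yields behavior rich enough that Rule 110 has been proven Turing complete; no amount of inspecting the rule in isolation would predict the structures that emerge. That is the force of "simple rules" plus "iteration."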
An extremely useful book on CAS is Complex Adaptive Systems: An Introduction to Computational Models of Social Life by John H. Miller and Scott E. Page, published in the Princeton Studies in Complexity series. I strongly recommend it. In the book, Miller and Page explore the concepts of emergence, self-organized criticality, automata, networks, diversity, adaptation, and feedback in CAS. They also recommend mathematical models to study and assess the behavior of CAS. In future posts I will address the limitations of mathematics and its inability to contribute to learning, as opposed to providing logical proofs of observed behavior. Needless to say, this critique will also discuss the further limitations of statistics.
Still, given these stated characteristics, can we state categorically that a project organization is a complex adaptive system? After all, people attempt to control the environment, there are control systems in place, work and organizations are often structured around the expenditure of resources, there is a great deal of planning, and feedback occurs on a regular basis. Is there really emergence and diversity in this kind of environment? I think so. The reason is one obvious factor that we measure despite our best efforts to exert control over what is, in reality, a system of multiple agents: the presence of risk. We think we have control of our projects, but in reality we can exert only so much control. Oftentimes we move the goalposts to define success. This is not necessarily a form of cheating, though sometimes it can be viewed in that context. The goalposts change because in human CAS we deal with the concept of recursion and its effects. Risk and recursion are sufficient to land project efforts clearly within the category of CAS. A fuller demonstration that projects fall within the definition of CAS follows below.
We find a clear and comprehensive definition in an extremely useful paper on CAS, written from a practical standpoint, published in 2011 by Keith L. Green of the Institute for Defense Analyses (IDA) and entitled Complex Adaptive Systems in Military Analysis. Borrowing from A. S. Elgazzar, of the mathematics departments of El-Arish, Egypt, and Al-Jouf, King Saud University, in the Kingdom of Saudi Arabia, and A. S. Hegazi of the Mathematics Department, Faculty of Science at Mansoura, Egypt (both of whom have contributed a great deal of work on the study of the biological immune system as a complex adaptive system), Mr. Green states:
A complex adaptive system consists of inhomogeneous, interacting adaptive agents. Adaptive means capable of learning. In this instance, the ability to learn does not necessarily imply awareness on the part of the learner; only that the system has memory that affects its behavior in the environment. In addition to this abstract definition, complex adaptive systems are recognized by their unusual properties, and these properties are part of their signature. Complex adaptive systems all exhibit non-linear, unpredictable, emergent behavior. They are self-organizing in that their global structures arise from interactions among their constituent elements, often referred to as agents. An agent is a discrete entity that behaves in a given manner within its environment. In most models or analytical treatments, agents are limited to a simple set of rules that guide their responses to the environment. Agents may also have memory or be capable of transitioning among many possible internal states as a consequence of their previous interactions with other agents and their environment. The agents of the human brain, or of any brain in fact, are called neurons, for example. Rather than being centrally controlled, control over the coherent structure is distributed as an emergent property of the interacting agents. Collectively, the relationships among agents and their current states represent the state of the entire complex adaptive system.
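The core of this definition — inhomogeneous agents following local rules, with memory, and with global structure emerging without central control — can be sketched in a few lines of code. The toy model below is my own construction, not Green's or Elgazzar and Hegazi's: agents sit on a ring, each agent's only "memory" is its current state, and its only rule is to move toward the average of itself and its two neighbors. No agent sees the whole system, and no one directs it, yet a global consensus emerges from the interactions alone.

```python
import random

def simulate_consensus(n_agents=20, steps=200, seed=42):
    """Agents on a ring repeatedly move to the average of themselves and
    their two neighbors -- a purely local rule with no central control.

    Returns the spread (max - min) of agent states before and after,
    showing global coherence emerging from local interactions.
    """
    rng = random.Random(seed)
    states = [rng.random() for _ in range(n_agents)]
    spread_before = max(states) - min(states)
    for _ in range(steps):
        # Synchronous update: every agent reads the old states,
        # then all adopt their local averages at once.
        states = [(states[i - 1] + states[i] + states[(i + 1) % n_agents]) / 3.0
                  for i in range(n_agents)]
    return spread_before, max(states) - min(states)

before, after = simulate_consensus()
print(f"spread of agent states: before={before:.3f}, after={after:.6f}")
```

The consensus value is a property of the whole configuration, not of any single agent, which is what Green means by control over the coherent structure being distributed rather than central. Swap the averaging rule for something nonlinear and the same architecture produces far less predictable collective behavior.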
No doubt, this definition can be viewed as having a specific biological bias. But when applied to the artifacts and structures of more complex biological agents–in our case, people–we can clearly see that the tools we use must be broader than those focused on a specific subsystem that possesses the attributes of CAS. This calls for an interdisciplinary approach that utilizes not only mathematics, statistics, and networks, but also insights from the physical and computational sciences, economics, evolutionary biology, neuroscience, and psychology. In understanding the artifacts of human endeavor we must be able to overcome recursion in our observations. It is relatively easy for an entomologist to understand the structures of ant and termite colonies and the insights they provide into social insects. It has been harder, particularly in economics and sociology, for the scientific method to be applied in a similarly detached and rigorous manner. One need only look to the perverse examples of Spencer's Social Statics and Murray and Herrnstein's The Bell Curve as two examples where selection bias, ideology, class bias, and racism have colored such attempts regarding more significant issues.
It is my intent to avoid bias by focusing on the specific workings of what we call project systems. My next posts on the topic will focus on each of the signatures of CAS and the elements of project systems that fall within them.