Over at AITS.org — The Human Equation in Project Management

Approaches to project management have focused on the systems, procedures, and software put in place to determine progress and likely outcomes. These outcomes are usually expressed in terms of cost, schedule, and technical achievement against the project requirements and framing assumptions—the oft-cited three-legged stool of project management.  What is often missing are measures related to human behavior within the project systems environment.  In this article at AITS.org, I explore this oft-ignored dimension.

You Know I’m No Good: 2016 Election Polls and Predictive Analytics

While the excitement and emotions of this past election work themselves out in the populace at large, as a writer on and contributor to the use of predictive analytics, I find the discussion about “where the polls went wrong” to be of most interest.  This is an important discussion, because the most reliable polling organizations–those that have proven themselves by being right consistently on a whole host of issues since most of the world moved to digitization and the Internet of Things in their daily lives–seemed to be dead wrong in certain of their predictions.  I say certain because the polls were not completely wrong.

For partisans who point to Brexit and polling in the U.K., I hasten to add that this is comparing apples to oranges.  The major U.S. polling organizations that use aggregation and Bayesian modeling did not poll Brexit.  In fact, there was one reliable U.K. polling organization that did note two factors:  one was that the trend in the final days was toward Brexit, and the other was that the final result depended on turnout, with greater turnout favoring the “stay” vote.

But aside from these general details, this issue is of interest in project management because, unlike national and state polling, where there are sufficient numbers to support significance, at the micro-microeconomic level of project management we deal with very small datasets that expand the range of probable results.  This is not an insignificant point; it has been made time and again over the years, particularly regarding single-point estimates that use limited time-phased data absent a general model providing insight into the likeliest results.  This last point is important.
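
To see why small datasets matter, here is a minimal sketch, assuming nothing more than the normal approximation for an observed proportion: the range of probable results widens sharply as the number of observations shrinks toward project-sized datasets.  The observed rate and sample sizes below are purely illustrative.

```python
# Sketch: how the range of probable outcomes widens as the dataset shrinks.
# Uses the simple normal approximation for an observed proportion; the
# observed rate and sample sizes are illustrative, not from any actual poll
# or project dataset.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an observed proportion p based on n samples."""
    return z * math.sqrt(p * (1.0 - p) / n)

observed = 0.48  # an observed rate, e.g., vote share or percent of tasks on time
for n in (1000, 100, 25, 10):  # a national poll versus project-sized datasets
    moe = margin_of_error(observed, n)
    print(f"n={n:5d}  estimate={observed:.2f} +/- {moe:.3f}  "
          f"probable range: ({observed - moe:.2f}, {observed + moe:.2f})")
```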

So let’s look at the national polls on the eve of the election according to RealClearPolitics.  IBD/TIPP Tracking had it Trump +2 at +/-3.1% in a four-way race.  LA Times/USC had it Trump +3 at the 95% confidence interval, which essentially means tied.  Bloomberg had Clinton +3, CBS had Clinton +4, Fox had Clinton +4, Reuters/Ipsos had Clinton +3, and ABC/WashPost, Monmouth, Economist/YouGov, Rasmussen, and NBC/SM had Clinton +2 to +6.  The margin of error for almost all of these polls varied from +/-3% to +/-4%.

As of this writing, Clinton sits at about +1.8% nationally; the votes are still coming in and continue to confirm her popular vote lead, which currently stands at about 300,000 votes.  Of the polls cited, Rasmussen was the closest to the final result.  Virtually every other poll, except IBD/TIPP, was within the margin of error.
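
As a rough check of that claim, the sketch below compares each cited poll’s final spread against the roughly +1.8% national margin.  IBD/TIPP’s +/-3.1% margin of error is taken from the figures above; the other margins are illustrative picks from the +/-3% to +/-4% range cited, so treat the verdicts as approximate.

```python
# Which of the national polls cited above landed within their margin of error,
# given a final national margin of roughly Clinton +1.8%? Positive spreads
# favor Clinton. Margins of error other than IBD/TIPP's stated +/-3.1% are
# illustrative values from the +/-3% to +/-4% range cited above.
FINAL_SPREAD = 1.8

polls = [                                  # (name, final spread, margin of error)
    ("IBD/TIPP Tracking", -2.0, 3.1),
    ("Bloomberg",          3.0, 3.5),      # illustrative MoE
    ("Rasmussen",          2.0, 3.0),      # illustrative MoE
    ("ABC/WashPost",       4.0, 3.5),      # illustrative MoE
]

for name, spread, moe in polls:
    miss = abs(spread - FINAL_SPREAD)
    verdict = "within MoE" if miss <= moe else "outside MoE"
    print(f"{name:18s} spread {spread:+.1f}  miss {miss:.1f}  {verdict}")
```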

The predictions that went wrong were those that aggregated national polls with state polls, adjusted polling based on indirect indicators, and/or projected the chances of winning from the probable electoral vote totals.  This is where things were off.

Among the most popular of these sites is Nate Silver’s FiveThirtyEight blog.  Silver established his bona fides in 2008 by picking winners with incredible accuracy, particularly at the state level, and subsequently in his work at the New York Times, which continued to prove the efficacy of data in predictive analytics in everything from elections to sports.  Since that time his considerable reputation has only grown.

What Silver does is determine the probability of an electoral outcome by using poll results that are transparent in their methodologies and that have a high level of confidence.  Silver’s approach was the most conservative of these aggregators.  On the eve of the election Silver gave Clinton a 71% chance of winning the presidency. The other organizations that use poll aggregation, poll normalization, or other adjusting indicators (such as betting odds, financial market indicators, and political science indicators) include the New York Times Upshot (Clinton 85%), HuffPost (Clinton 98%), PredictWise (Clinton 89%), Princeton (Clinton >99%), DailyKos (Clinton 92%), Cook (Lean Clinton), Roth (Lean Clinton), and Sabato (Lean Clinton).

To understand what probability means in this context, keep in mind that these models combined bottom-up state polling, used to track the electoral college, with national popular vote polling.  Also keep in mind that, as Nate Silver wrote over the course of the election, even a 17% chance of winning “is the same as your chances of losing a ‘game’ of Russian roulette.”  Few of us would take that bet, particularly since the result of losing that game is finality.

Still, none of the methods using probability got it right; only FiveThirtyEight left enough room for drawing the wrong chamber.  In fairness, the Cook, Rothenberg, and Sabato projections also left enough room to see a Trump win if the state dominoes fell the right way.

Where the models failed was in Florida, North Carolina, Pennsylvania, Michigan, and Wisconsin.  In particular, even with wins in Florida (result: Trump +1.3%) and North Carolina (result: Trump +3.8%), Trump would not have prevailed had he not breached Pennsylvania (Trump +1.2%), Michigan (Trump +0.3%), and Wisconsin (Trump +1.0%), the supposed Clinton firewall states.  So what happened?

Among the possible factors are: the effect of FBI Director Comey’s public intervention, which came too close to the election to register in the polling; ineffective polling methods in rural areas (garbage in, garbage out); poor state polling quality; voter suppression, purging, and restrictions (among the battleground states this includes Florida, North Carolina, Wisconsin, Ohio, and Iowa); voter turnout and enthusiasm (apart from the effects of voter suppression); and the inability to peg which way the high number of undecided voters would break at the last minute.

In hindsight, the national polls were good predictors.  The sufficiency of the data in establishing significance, and the high level of confidence in their predictive power, are borne out by the final national vote totals.

I think that where the polling failed in its electoral college projections was in the inability to take into account non-statistical factors and selection bias, and in state poll models that probably did not accurately reflect the electorate in those states given the lessons from the primaries.  Along these lines, I believe that if pollsters look at the demographics in the respective primaries, they will find that both voter enthusiasm and composition provide the corrective to their projections.  Given these factors, the aggregators and probabilistic models should all have called the race too close to call.  I think both Monte Carlo and Bayesian methods in simulation will bear this out.
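
As a sketch of the Monte Carlo point, the simulation below draws electoral outcomes from state-level win probabilities while adding a shared, correlated polling error across the contested states.  The baseline electoral votes, win probabilities, and error sizes are hypothetical, chosen only to show how a correlated error, of the kind a common modeling or turnout assumption would produce, pushes a seemingly comfortable probability toward a coin flip.

```python
# Rough Monte Carlo sketch: if state polling errors share a systematic
# component rather than being independent, the probability of winning the
# electoral college drops toward a toss-up. All baseline electoral votes,
# win probabilities, and error magnitudes here are hypothetical.
import random

STATES = {   # state: (electoral votes, assumed probability of a Clinton win)
    "FL": (29, 0.55),
    "NC": (15, 0.50),
    "PA": (20, 0.77),
    "MI": (16, 0.79),
    "WI": (10, 0.84),
}
BASE_EV = 232   # hypothetical electoral votes assumed safe outside these states
NEEDED = 270

def win_probability(correlated_error_sd: float, trials: int = 20_000) -> float:
    wins = 0
    for _ in range(trials):
        shift = random.gauss(0.0, correlated_error_sd)  # shared polling error
        ev = BASE_EV
        for votes, p in STATES.values():
            if random.random() < min(max(p + shift, 0.0), 1.0):
                ev += votes
        wins += ev >= NEEDED
    return wins / trials

for sd in (0.0, 0.05, 0.10):
    print(f"correlated error sd={sd:.2f}  P(win) ~= {win_probability(sd):.2f}")
```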

For example, since I also hold a political science degree, I will put on that hat for a moment.  It is a basic tenet that negative campaigns depress voter participation and push voters toward choosing the lesser of two evils (or the lesser of two weevils).  Voter participation was down significantly due to an unprecedentedly negative campaign.  When this occurs, the most motivated base will usually determine the winner of an election.  This is why midterm elections are so volatile, particularly after a presidential win that causes a rebound of the opposition party.  Whether this trend continues with the reintroduction of gerrymandering is yet to be seen.

What all this points to from a data analytics perspective is that one must have a model to explain what is happening.  Statistics by themselves, while correct a good bit of the time, will cause one to be overconfident in a result based solely on the numbers, and simulations can give a false impression of solidity, particularly in a volatile environment.  This is known as reification, and it is a fallacious way of thinking.  Combined with selection bias and the absence of a reasonable narrative model (one that introduces the social interactions necessary to understand the behavior of complex adaptive systems), it will often produce invalid results.

Three’s a Crowd — The Nash Equilibrium, Computer Science, and Economics (and what it means for Project Management theory)

Over the last couple of weeks my reading picked up on an interesting article via Brad DeLong’s blog, which in turn picked it up from Larry Hardesty at MIT News.  First, a little background devoted to defining terms.  The Nash Equilibrium is a part of Game Theory, measuring how and why people make choices in social networks.  As defined in this Columbia University paper:

A game (in strategic or normal form) consists of the following three elements: a set of players, a set of actions (or pure-strategies) available to each player, and a payoff (or utility) function for each player. The payoff functions represent each player’s preferences over action profiles, where an action profile is simply a list of actions, one for each player. A pure-strategy Nash equilibrium is an action profile with the property that no single player can obtain a higher payoff by deviating unilaterally from this profile.
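
To make that definition concrete, here is a minimal sketch of a brute-force check for pure-strategy Nash equilibria in a two-player game.  The payoff matrix is the standard Prisoner’s Dilemma, used purely as an illustration of the definition, not as anything drawn from the paper itself.

```python
# Brute-force search for pure-strategy Nash equilibria in a two-player game,
# applying the definition above directly: an action profile from which no
# single player can gain by deviating unilaterally. Payoffs are the standard
# Prisoner's Dilemma, used only as an illustration.
from itertools import product

ACTIONS = ["cooperate", "defect"]
# PAYOFFS[(row_action, col_action)] = (row player's payoff, column player's payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(profile):
    row, col = profile
    row_pay, col_pay = PAYOFFS[profile]
    # No unilateral deviation improves the row player's payoff...
    row_ok = all(PAYOFFS[(dev, col)][0] <= row_pay for dev in ACTIONS)
    # ...and none improves the column player's payoff.
    col_ok = all(PAYOFFS[(row, dev)][1] <= col_pay for dev in ACTIONS)
    return row_ok and col_ok

equilibria = [p for p in product(ACTIONS, repeat=2) if is_nash(p)]
print(equilibria)  # [('defect', 'defect')] -- the lone pure-strategy equilibrium
```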

John Von Neumann developed Game Theory to measure, in a mathematical model, the dynamic of conflict and cooperation between intelligent rational decision-makers in a system.  All social systems can be measured by the application of Game Theory models.  But as with all mathematical modeling, there are limitations to what can be determined.  Unlike science, mathematics can only measure and model what we observe, but it can provide insights that would otherwise go unnoticed.  As such, Von Neumann’s work (along with that of Oskar Morgenstern and Leonid Kantorovich) in this area has become the cornerstone of mathematical economics.

When dealing with two players in a game, a number of models have been developed to explain the behavior that is observed.  For example, most familiar to us are zero-sum games and tit-for-tat games.  Many of us in business, diplomacy, the military profession, and old-fashioned office politics have come upon such strategies in day-to-day life.  The article from MIT News describes the latest work of Constantinos Daskalakis, an assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory:

In the real world, competitors in a market or drivers on a highway don’t (usually) calculate the Nash equilibria for their particular games and then adopt the resulting strategies. Rather, they tend to calculate the strategies that will maximize their own outcomes given the current state of play. But if one player shifts strategies, the other players will shift strategies in response, which will drive the first player to shift strategies again, and so on. This kind of feedback will eventually converge toward equilibrium. … The argument has some empirical support. Approximations of the Nash equilibrium for two-player poker have been calculated, and professional poker players tend to adhere to them — particularly if they’ve read any of the many books or articles on game theory’s implications for poker.

Anyone who has engaged in two-player games, from card games to chess, can intuitively understand this insight.  But in modeling behavior, when a third player is added to the mix, the mathematics of describing market or system behavior becomes “intractable.”  That is, all of the computing power in the world cannot calculate the Nash equilibrium.
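
The feedback loop described in the quoted passage is easy to sketch for the two-player case.  In the toy coordination game below (payoffs invented for illustration), alternating best responses settle into a pure-strategy equilibrium after a round or two; it is adding a third strategic player that makes the general computation intractable.

```python
# Best-response dynamics in a simple two-player coordination game: each player
# repeatedly switches to the best reply against the other's current action.
# The payoffs are invented for illustration; the process converges here, which
# is the "feedback toward equilibrium" described in the quoted passage.
ACTIONS = ["A", "B"]
PAYOFFS = {                              # (row payoff, column payoff)
    ("A", "A"): (2, 2), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}

def best_response(opponent_action, player):
    # player 0 is the row player, player 1 is the column player
    def payoff(action):
        profile = (action, opponent_action) if player == 0 else (opponent_action, action)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

row, col = "B", "A"                      # start from a miscoordinated profile
for step in range(10):
    new_row = best_response(col, 0)      # the row player adjusts first...
    new_col = best_response(new_row, 1)  # ...then the column player responds
    if (new_row, new_col) == (row, col):
        print(f"settled on {(row, col)} by round {step}")
        break
    row, col = new_row, new_col
```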

Part of this issue is the age-old paradox, put in plain language, that everything that was hard to do for the first time in the past is easy to do and verify today.  This includes everything from flying aircraft to dealing with quantum physics.  In computing and modeling, the issue is that every hard problem requires far fewer resources to verify a proposed solution than to compute one from scratch.  This is the question at the heart of the P versus NP problem (often written P=NP).
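
A small illustration of that asymmetry, using subset-sum as a stand-in for a hard problem (the numbers are arbitrary): finding a qualifying subset takes exponential work in the worst case, while checking a proposed answer takes a single pass.

```python
# Solving versus verifying, with subset-sum as the stand-in hard problem.
# Finding a subset that hits the target is exponential in the worst case
# (here, brute force over all 2^n subsets); verifying a proposed subset is a
# single linear pass. The numbers are arbitrary.
from itertools import combinations

def solve_subset_sum(values, target):
    """The hard direction: try every subset until one sums to the target."""
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

def verify_subset_sum(values, target, candidate):
    """The easy direction: check a proposed certificate in linear time."""
    return (candidate is not None and sum(candidate) == target
            and all(c in values for c in candidate))

values = [14, 3, 27, 8, 19, 42, 5, 11]
target = 57
certificate = solve_subset_sum(values, target)   # slow search
print(certificate, verify_subset_sum(values, target, certificate))  # fast check
```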

We run up against these problems all the time when developing software applications and dealing with ever larger sets of data.  For example, I attended a meeting recently where a major concern among the audience was the question of scalability, especially in dealing with large sets of data.  In the past, “scalability” to the software publisher simply meant the ability of the application to be used by a large set of users via some form of distributed processing (client-server, shared services, desktop virtualization, or a browser-based deployment).  But with the introduction of KDD (knowledge discovery in databases), scalability now also addresses the ability of technologies to derive importance from the data itself, outside the confines of a hard-coded application.

The search for optimal polynomial-time algorithms to reduce the running time of computation-intensive problems forces the developer to find the solution (the proof of NP-completeness) in advance and then work toward the middle in developing the appropriate algorithm.  This should not be a surprise.  In breaking Enigma during World War II, Bletchley Park first identified regularities in the messages that the German high command was sending out.  This then allowed them to work backwards and forwards in calculating how the encryption could be broken.  The same applies to any set of mundane data, regardless of size, that is not actively trying to avoid being deciphered.  While we may be faced with a Repository of Babel, it is one that badly wants to be understood.

While intuitively the Nash equilibrium does exist, its mathematically intractable character has demanded that new languages and approaches to solving it be developed.  In the case of Daskalakis, he has proposed three routes.  These are:

  1. “One is to say, we know that there exist games that are hard, but maybe most of them are not hard.  In that case you can seek to identify classes of games that are easy, that are tractable.”
  2. Find mathematical models other than Nash equilibria to characterize markets — “models that describe transition states on the way to equilibrium, for example, or other types of equilibria that aren’t so hard to calculate.”
  3. Approximate the Nash equilibrium: even if the exact equilibrium is hard to calculate, an approximation of it, “where the players’ strategies are almost the best responses to their opponents’ strategies,” might not be.  “In those cases, the approximate equilibrium could turn out to describe the behavior of real-world systems.”

This is the basic engineering approach to any complex problem (and a familiar approach to anyone schooled in project management):  break the system down into smaller pieces to solve.
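
As a concrete illustration of the third route, here is a minimal sketch of checking whether a mixed-strategy profile is an epsilon-approximate equilibrium, meaning no player can gain more than epsilon by switching to any pure strategy.  The game is matching pennies and the tested strategies are illustrative; none of this is drawn from Daskalakis’s own work.

```python
# Epsilon-approximate equilibrium check for a two-player, two-action game:
# a profile passes if neither player can gain more than eps by deviating to a
# pure strategy. The game (matching pennies) and the tested strategy profiles
# are illustrative only.
ROW_PAYOFF = [[1, -1], [-1, 1]]   # row player's payoffs
COL_PAYOFF = [[-1, 1], [1, -1]]   # column player's payoffs (zero-sum)

def expected(payoff, row_mix, col_mix):
    return sum(row_mix[i] * col_mix[j] * payoff[i][j]
               for i in range(2) for j in range(2))

def is_epsilon_equilibrium(row_mix, col_mix, eps):
    row_value = expected(ROW_PAYOFF, row_mix, col_mix)
    col_value = expected(COL_PAYOFF, row_mix, col_mix)
    # Best pure-strategy deviation for each player against the other's mix
    row_best = max(expected(ROW_PAYOFF, pure, col_mix) for pure in ([1, 0], [0, 1]))
    col_best = max(expected(COL_PAYOFF, row_mix, pure) for pure in ([1, 0], [0, 1]))
    return row_best - row_value <= eps and col_best - col_value <= eps

print(is_epsilon_equilibrium([0.5, 0.5], [0.5, 0.5], eps=0.01))  # True: the exact equilibrium
print(is_epsilon_equilibrium([0.9, 0.1], [0.5, 0.5], eps=0.01))  # False: a deviation gains 0.8
```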

So what does all of this mean for the discipline of project management?  In modeling complex systems behavior for predictive purposes, our approach must correspondingly break down the elements of systems behavior into their constituent parts, but then integrate them in such a way as to derive significance.  The key to this lies in the availability of data and our ability to process it using methods that go beyond trending data for individual variables.

Second Foundation — More on a General Theory of Project Management

In ending my last post on developing a general theory of project management, I introduced the concept of complex adaptive systems (CAS) and posited that projects and their ecosystems fall into this specific category of systems theory.  I also posited that it is through the tools of CAS that we will gain insight into the behavior of projects.  The purpose is not only to identify commonalities in these systems across economic market verticals that are frequently asserted to be irreconcilable, but also to identify regularities and the proper math for determining the behavior of these systems.

A brief overview of some of the literature is in order so that we can define our terms, since CAS is a Protean term that has evolved with its application.  Aside from the essential work at the Santa Fe Institute, some of which I linked in my last post on the topic, I would first draw your attention to an overview of CAS by Serena Chan at MIT.  Ms. Chan wrote her paper in 2001, and so her perspective in one important way has proven to be limited, which I will shortly address.  Ms. Chan correctly defines complexity, and I will leave it to the reader to go to the link above to read the paper.  The meat of her paper is her definition of CAS by identifying its characteristics.  These are: distributed control, connectivity, co-evolution, sensitive dependence on initial conditions, emergence, distance from equilibrium, and existence in a state of paradox.  She then posits some tools that may be useful in studying the behavior of CAS, and concludes with an odd section on the application of CAS to engineering systems, positing that engineering systems cannot be CAS because they are centrally controlled and hence do not exhibit emergence (non-preprogrammed behavior).  She interestingly uses the example of the internet as her proof.  In the year 2015, I don’t think one can seriously make this claim.  Even in 2001 such an assertion would have been specious, for it had been ten years since the passage of the High Performance Computing and Communication Act of 1991 (also called the Gore Bill), which commercialized ARPANET.  (Yes, he really did have a major hand in “inventing” the internet as we know it.)  It was also eight years since the introduction of Mosaic.  Thus the internet, like many engineering systems requiring collaboration and human interaction, falls under the rubric of CAS as defined by Ms. Chan.

The independent consultant Peter Fryer at his Trojan Mice blog adds a slightly different spin to identifying CAS.  He asserts that CAS properties are emergence, co-evolution, suboptimal, requisite variety, connectivity, simple rules, iteration, self-organizing, edge of chaos, and nested systems.  My only quibble with many of these stated characteristics is that they seem to be slightly overlapping and redundant, splitting hairs without adding to our understanding.  They also tend to be covered by the larger definitions of systems theory and complexity.  Perhaps it’s worth retaining them within CAS because they provide specific avenues through which to study these types of systems.  We’ll explore this in future posts.

An extremely useful book on CAS, by John H. Miller and Scott E. Page in the Princeton Studies in Complexity series, is Complex Adaptive Systems: An Introduction to Computational Models of Social Life.  I strongly recommend it.  In the book Miller and Page explore the concepts of emergence, self-organized criticality, automata, networks, diversity, adaptation, and feedback in CAS.  They also recommend mathematical models to study and assess the behavior of CAS.  In future posts I will address the limitations of mathematics and its inability to contribute to learning, as opposed to providing logical proofs of observed behavior.  Needless to say, this critique will also discuss the further limitations of statistics.
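
In the spirit of the computational models Miller and Page describe, here is a minimal sketch of agents with only local rules and a touch of noise producing aggregate order that no single agent dictates.  The rule, the ring topology, and the parameters are invented purely for illustration and are not drawn from their book.

```python
# Minimal agent-based sketch: simple agents on a ring adopt the local majority
# state with a small chance of acting idiosyncratically. Clusters and
# large-scale order emerge from purely local interactions -- no agent or
# central controller dictates the global pattern. All parameters are
# illustrative.
import random

random.seed(7)
N = 40
agents = [random.choice([0, 1]) for _ in range(N)]   # random initial states

def step(states, noise=0.02):
    """Each agent adopts the majority of itself and its two neighbors,
    flipping against the majority with probability `noise`."""
    nxt = []
    for i, s in enumerate(states):
        neighborhood = [states[i - 1], s, states[(i + 1) % len(states)]]
        majority = 1 if sum(neighborhood) >= 2 else 0
        nxt.append(1 - majority if random.random() < noise else majority)
    return nxt

for t in range(15):
    print("".join("#" if s else "." for s in agents))   # one row per time step
    agents = step(agents)
```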

Still, given these stated characteristics, can we state categorically that a project organization is a complex adaptive system?  After all, people attempt to control the environment, there are control systems in place, oftentimes work and organizations are organized according to the expenditure of resources, there is a great deal of planning, and feedback occurs on a regular basis.  Is there really emergence and diversity in this kind of environment?  I think so.  The reason I think so is because of the one obvious factor that persists and is measured despite the best efforts to exert control, control which in reality is exercised by multiple agents: the presence of risk.  We think we have control of our projects, but in reality we can only exert so much control.  Oftentimes we move the goalposts to define success.  This is not necessarily a form of cheating, though sometimes it can be viewed in that context.  The goalposts change because in human CAS we deal with the concept of recursion and its effects.  Risk and recursion are sufficient to land project efforts clearly within the category of CAS.  Furthermore, that projects clearly fall within the definition of CAS follows from the discussion below.

It is in an extremely useful paper on CAS from a practical standpoint, published in 2011 by Keith L. Green of the Institute for Defense Analyses (IDA) and entitled Complex Adaptive Systems in Military Analysis, that we find a clear and comprehensive definition.  Borrowing from A. S. Elgazzar, of the mathematics departments of El-Arish, Egypt, and Al-Jouf, King Saud University, in the Kingdom of Saudi Arabia, and A. S. Hegazi of the Mathematics Department, Faculty of Science at Mansoura, Egypt–both of whom have contributed a great deal of work on the study of biological immune systems as complex adaptive systems–Mr. Green states:

A complex adaptive system consists of inhomogeneous, interacting adaptive agents.  Adaptive means capable of learning.  In this instance, the ability to learn does not necessarily imply awareness on the part of the learner; only that the system has memory that affects its behavior in the environment.  In addition to this abstract definition, complex adaptive systems are recognized by their unusual properties, and these properties are part of their signature.  Complex adaptive systems all exhibit non-linear, unpredictable, emergent behavior.  They are self-organizing in that their global structures arise from interactions among their constituent elements, often referred to as agents.  An agent is a discrete entity that behaves in a given manner within its environment.  In most models or analytical treatments, agents are limited to a simple set of rules that guide their responses to the environment.  Agents may also have memory or be capable of transitioning among many possible internal states as a consequence of their previous interactions with other agents and their environment.  The agents of the human brain, or of any brain in fact, are called neurons, for example.  Rather than being centrally controlled, control over the coherent structure is distributed as an emergent property of the interacting agents.  Collectively, the relationships among agents and their current states represent the state of the entire complex adaptive system.

No doubt, this definition can be viewed as having a specific biological bias.  But when applied to the artifacts and structures of more complex biological agents–in our case, people–we can clearly see that the tools we use must be broader than those focused on a specific subsystem that possesses the attributes of CAS.  It calls for an interdisciplinary approach that utilizes not only mathematics, statistics, and networks, but also insights from the physical and computational sciences, economics, evolutionary biology, neuroscience, and psychology.  In understanding the artifacts of human endeavor we must be able to overcome recursion in our observations.  It is relatively easy for an entomologist to understand the structures of ant and termite colonies–and the insights they provide into social insects.  It has been harder, particularly in economics and sociology, for the scientific method to be applied in a similarly detached and rigorous manner.  One need only look to Spencer’s Social Statics and Murray and Herrnstein’s The Bell Curve as two examples where selection bias, ideology, class bias, and racism have colored such attempts regarding more significant issues.

It is my intent to avoid bias by focusing on the specific workings of what we call project systems.  My next posts on the topic will focus on each of the signatures of CAS and the elements of project systems that fall within them.

Talking (Project Systems) Blues: A Foundation for a General Theory

Like those of you who observe the upcoming Thanksgiving holiday, I find myself suddenly in a state of non-motion and, as a result, with feet firmly on the ground and able to write a post.  This is a preface to pointing out that the last couple of weeks have been both busy and productive in a positive way.

Among the events of the last two weeks was the meeting of project management professionals focused on the discipline of aerospace and defense at the Integrated Program Management Workshop.  This vertical, unlike other areas of project management, is characterized by a highly structured approach that involves a great deal of standardization.  Most often, people involved in this vertical operate in an environment where the public sector plays a strong role in defining how the market operates.  Furthermore, the major suppliers tend to be limited in number, and so both oligopolistic and monopolistic competition define the market space.

Within this larger framework, however, is a set of mid-level and small firms engaged in intense competition to provide both supplies and services to the limited set of large suppliers.  As such, they operate within the general framework of the larger environment defined by public sector procedures, laws, and systems, but within those constraints act with a great deal of freedom, especially in acting as a conduit to commercial and innovative developments from the private sector.

Furthermore, since many technologies originate within the public sector (the internet and microchips, among other examples since the middle of the 20th century), the layer of major suppliers and mid-level to small businesses also acts as a conduit for introducing such technologies to the larger private sector.  Thus, the relationship is a mutually reinforcing one.

Given the nature of this vertical and its various actors, I’ve come upon the common refrain that it is unique in its characteristics and, as such, acts as a poor analogue for other project management systems.  Dave Gordon, for example, a well-respected expert in IT projects, has in commenting on previous posts expressed some skepticism about my suggestion that there may be commonalities across the project management discipline regardless of vertical or end-item development.  I have promised a response and a dialogue and, given recent discussions, I think I have a path forward.

I would argue, instead, that the nature of the aerospace and defense (A&D) vertical provides a perfect control for determining the strength of commonalities.  My contention is that, because larger and less structured economic verticals do not have the same ability to control the market environment and its mechanisms, their largely chaotic condition creates barriers to identifying possible commonalities.  Thus, unlike in other social sciences, we are not left with real-time experimentation absent a control group.  The non-A&D and A&D verticals each provide a control for the other, given enough precision in identifying the characteristics being measured.

But we need a basis, a framework, for identifying commonalities.  As such, our answers will be found in systems theory.  This is not a unique or new observation, but for the purpose of outlining our structure it is useful to state the basis of the approach.  For those of you playing along at home, the seminal works in this area are Norbert Wiener’s Cybernetics: or, Control and Communication in the Animal and the Machine (1948) and Ludwig von Bertalanffy’s General System Theory (1968).

But we must go beyond basic systems theory in its formative stage.  Projects are a particular type of system: a complex system.  Even beyond that we must go one more step, because they are human systems that display learning, both individually in their parts and in aggregate.  As such, they are complex adaptive systems, or CAS.  They exist in a deterministic universe, as all CAS do, but are non-deterministic within the general boundaries of that larger physical world.

The main thought leaders on CAS are John H. Holland, as in this 1992 paper in Daedalus, and Murray Gell-Mann, with his work at the Santa Fe Institute.  The literature is extensive and this is just a start, including the work of Kristo Ivanov and the concepts coming out of his Hypersystems: A Base for Specification of Computer-Supported Self-Learning Social Systems.

It is upon this basis, especially the manner in which the behavior of CAS can be traced and predicted, that we will be able to establish the foundation of a general theory of project management systems.  I’ll be vetting ideas over the coming weeks regarding this approach, with some suggestions on real-world applicability and methodologies across project domains.

Over at AITS.org — Red Queen Race: Why Fast Tracking a Project is Not in Your Control

“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else—if you ran very fast for a long time, as we’ve been doing.”

“A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

(Through the Looking-Glass and What Alice Found There, Chapter 2, Lewis Carroll)

There have been a number of high profile examples in the news over the last two years concerning project management.  For example, the initial rollout of the Affordable Care Act marketplace web portal was one of these, and the causes for its faults are still being discussed. As I write this, an article in the New York Times indicates that the fast track efforts to create an Ebola vaccine are faltering…

To read the remainder of this post please go to this link.

I Get By With A Little Help… — Avoiding NIH in Project Management

…from my colleagues, friends, family, associates, advisors, mentors, subcontractors, consultants, employees.  And not necessarily in that order.

The term NIH in this context does not refer to the federal agency.  It is shorthand, instead, for “Not Invented Here”.  I was reminded of this particular mindset when driving through an old neighborhood where I once served as a community organizer.  At one of the meetings of a local board, which was particularly dysfunctional (and whose dysfunction I was attempting to reform), a member remarked:  “I am tired of hearing about how this or that particular issue was handled somewhere else.”  Yes, I thought, why would we possibly want to know how Portland, or D.C., or Boston, or Denver, or Phoenix–or any number of other places faced with the same issue–effectively or ineffectively dealt with it before us?  What could they possibly teach us?

When we deal with a project management organization, we are dealing with a learning system.  Hopefully, an effectively adaptive one.  The qualifier here is important.  The danger with any tightknit group is falling into the dual traps of Groupthink and NIH.  The first deals with behavior relating to conformity within small groups, based on the observations and studies of William H. Whyte and his successors.  The second is the mindset that the issues faced by the group are unique to it, and so models, tools, experience, and proven statistical and leading indicators do not apply.

A project management organization (or any complex human organization) is one that adapts to pressures from its environment.  It is one with the ability to learn, since it is made up of entities that can create and utilize intelligence and information, and so it is distinct from biological systems that adapt over time through sexual and natural selection.  Here is also an important point:  while biological evolution occurs over long spans of time, we don’t see the dead ends and failures of adaptation until the story is written–at least, not outside the microbiological field, where the evolution of viruses and bacteria occurs rapidly.  So for large animals and major species it appears to be a Panglossian world, which it definitely is not.

When we carry Panglossian thinking into the analogies that we find in social and other complex adaptive systems, the fallacies in our thinking can be disastrous and cause great unnecessary suffering.  I am reminded here of the misuse of the concept of self-organization in complex systems and of the term “market” in economics.  Organizations and social structures can “self-organize” not only into equilibrium but also into spirals of failure and death.  Extremely large and complex organizations like nation-states and societies are replete with such examples: from Revolutionary France to Czarist Russia, to recent examples in Africa and the Near East.  In economics, “the market” determines price.  The inability of the market to self-regulate–and the nature of self-organization–resulted in the bursting of the housing bubble in the first decade of this century, precipitating a financial crisis.  This is the most immediate example of a systemic death spiral of global proportions, one that was resolved (finally) only with a great deal of intervention by rational actors.

So when I say “hopefully an effectively adaptive system,” I mean one that does not adapt itself by small steps into unnecessary failure or wasted effort (as our business, financial, economic, and political leaders did in the early 2000s).  Along these same lines, when we speak of fitness in surviving challenges (or fitness in biological evolution), we do not imply the “best” of something.  Fitness is simply a convenient term to describe the survivors after all of the bodies have been counted.  In nature one can survive due to plain luck, through inherited capabilities or characteristics fit to the environmental space, through favorable chance isolation or local conditions–the list is extensive.  Many of these same factors also apply to social and complex adaptive systems, but on a shorter timescale and with a higher degree of traceable proximate cause-and-effect, depending on the size and scale of the organization.

In project management systems, while it is important to establish the closed-loop systems necessary to gain feedback from the environment to determine whether the organization is effectively navigating itself to achieve its goals against a plan, it is also necessary to have those systems in place that allow for leveraging both organizational and competency knowledge, as well as third-party solutions.  That is, broadening the base in applying intelligence.

This includes not only education, training, mentoring, and the retention and use of a diversified mix of experienced knowledge workers, but also borrowing solutions outside of the organization.  It means being open to all of the tools available in avoiding NIH.  Chances are, though the time, place, and local circumstances may be different, someone has faced something very similar before somewhere else.  With the availability of information becoming so ubiquitous, there is very little excuse for restricting our sources.

Given this new situation, our systems must now possess the ability to apply qualitative selection criteria in identifying analogous information, tempered with judgment in identifying the differences in the situations where they exist.  But given that most systems–including systems of failure–organize themselves into types and circumstances that can be generalized into assumptions, we should be able to leverage both the differences and similarities in developing a shortcut that doesn’t require all of the previous steps to be repeated (along with a high probability of repeated failure).

In closing, I think it important to note that failure here is defined as the inability of the organization to come up with an effective solution to the problem at hand, where one is possible.  I am not referring to failure as the inability to achieve an intermediate goal.  In engineering and other fields of learning, including business, failure is oftentimes a necessary part of the process, especially when pushing technologies in which previous examples and experience cannot apply to the result.  The lessons learned from a failed test in this situation, for example, can be extremely valuable.  But a failed test that resulted from the unwillingness of the individuals in the group to consider similar experience or results due to NIH is inexcusable.