Over the last couple of weeks of reading I picked up on an interesting article via Brad DeLong’s blog, who in turn picked it up from Larry Hardesty at MIT News. First, a little background devoted to defining terms. The Nash equilibrium is a concept from game theory used to measure how and why people make choices in social networks. As defined in this Columbia University paper:

A game (in strategic or normal form) consists of the following three elements: a set of players, a set of actions (or pure-strategies) available to each player, and a payoff (or utility) function for each player. The payoff functions represent each player’s preferences over action profiles, where an action profile is simply a list of actions, one for each player. A *pure-strategy Nash equilibrium* is an action profile with the property that no single player can obtain a higher payoff by deviating unilaterally from this profile.

John von Neumann developed game theory to measure, in a mathematical model, the dynamics of conflict and cooperation between intelligent, rational decision-makers in a system. All social systems can be measured by the application of game theory models. But as with all mathematical modeling, there are limitations to what can be determined: mathematics can only measure and model what we observe, though in doing so it can provide insights that would otherwise go unnoticed. As such, von Neumann’s work in this area (along with that of Oskar Morgenstern and Leonid Kantorovich) has become the cornerstone of mathematical economics.
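To make the definition quoted above concrete, here is a minimal sketch in Python (with illustrative Prisoner’s-Dilemma-style payoffs of my own choosing, not taken from the paper) that checks every action profile of a small two-player game against the definition: a profile is a pure-strategy Nash equilibrium if no single player can do better by deviating unilaterally.

```python
from itertools import product

# Payoffs for a two-player game in strategic (normal) form.
# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2).
# The numbers are an illustrative Prisoner's-Dilemma-style example.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_pure_nash(profile):
    """True if no player can gain by unilaterally deviating from `profile`."""
    a1, a2 = profile
    u1, u2 = payoffs[profile]
    # Player 1 deviations (player 2's action held fixed).
    if any(payoffs[(d, a2)][0] > u1 for d in actions if d != a1):
        return False
    # Player 2 deviations (player 1's action held fixed).
    if any(payoffs[(a1, d)][1] > u2 for d in actions if d != a2):
        return False
    return True

for profile in product(actions, actions):
    if is_pure_nash(profile):
        print("Pure-strategy Nash equilibrium:", profile)
# Prints ('defect', 'defect'): mutual defection is the only profile
# from which neither player benefits by deviating alone.
```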

For games between two players, a number of models have been developed to explain the behavior that is observed; the most familiar to us are zero-sum games and tit-for-tat strategies. Many of us in business, diplomacy, the military profession, or old-fashioned office politics have come upon such strategies in day-to-day life. From the article at MIT News describing the latest work of Constantinos Daskalakis, an assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory:

In the real world, competitors in a market or drivers on a highway don’t (usually) calculate the Nash equilibria for their particular games and then adopt the resulting strategies. Rather, they tend to calculate the strategies that will maximize their own outcomes given the current state of play. But if one player shifts strategies, the other players will shift strategies in response, which will drive the first player to shift strategies again, and so on. This kind of feedback will eventually converge toward equilibrium. … The argument has some empirical support. Approximations of the Nash equilibrium for two-player poker have been calculated, and professional poker players tend to adhere to them — particularly if they’ve read any of the many books or articles on game theory’s implications for poker.

Anyone who has engaged in two-player games, from cards to chess, can intuitively understand this insight. But in modeling behavior, once a third player is added to the mix, the mathematics of describing market or system behavior becomes “intractable.” That is, all of the computing power in the world could not calculate the Nash equilibrium in any practical amount of time.
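As a hedged illustration of the feedback loop described in the quote (a toy coordination game with payoffs I made up, not anything from the article), here is a short Python sketch in which each player in turn switches to the best response against the other’s current action. In this game the back-and-forth settles into an equilibrium after a couple of rounds; in other games, such as matching pennies, the same loop cycles forever.

```python
# Best-response dynamics in a simple two-player coordination game
# (illustrative payoffs of my own choosing, not from the article).
payoffs = {
    ("left",  "left"):  (2, 2),
    ("left",  "right"): (0, 0),
    ("right", "left"):  (0, 0),
    ("right", "right"): (1, 1),
}
actions = ["left", "right"]

def best_response(player, other_action):
    """Action maximizing this player's payoff given the other's current action."""
    if player == 1:
        return max(actions, key=lambda a: payoffs[(a, other_action)][0])
    return max(actions, key=lambda a: payoffs[(other_action, a)][1])

state = ("right", "left")              # arbitrary, miscoordinated starting point
for round_no in range(10):
    a1 = best_response(1, state[1])    # player 1 adjusts to the current state of play
    a2 = best_response(2, a1)          # then player 2 adjusts in turn
    print(f"round {round_no}: {state} -> {(a1, a2)}")
    if (a1, a2) == state:
        break                          # nobody wants to move: a Nash equilibrium
    state = (a1, a2)
```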

Part of this issue is the age-old paradox, put in plain language, that what was hard to do for the first time in the past is easy to do, and easier still to verify, today. This applies to everything from flying aircraft to dealing with quantum physics. In computing and modeling, the issue is that many problems that are hard to solve require far fewer resources to verify once a candidate solution is in hand. Whether every problem whose solution is easy to verify is also easy to solve is the famous P = NP question.
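A small illustration of that asymmetry, using the classic subset-sum problem as a stand-in (my choice of example, not the article’s): finding a subset of numbers that adds up to a target may require examining an exponential number of candidates, while checking a proposed answer takes a single pass.

```python
from itertools import chain, combinations

def solve_subset_sum(numbers, target):
    """Brute-force search: examines up to 2**len(numbers) candidate subsets."""
    all_subsets = chain.from_iterable(
        combinations(numbers, k) for k in range(len(numbers) + 1)
    )
    for subset in all_subsets:
        if sum(subset) == target:
            return subset
    return None

def verify_subset_sum(numbers, target, candidate):
    """Verification is cheap: confirm membership and check one sum."""
    remaining = list(numbers)
    for x in candidate:
        if x in remaining:
            remaining.remove(x)
        else:
            return False
    return sum(candidate) == target

numbers = [3, 34, 4, 12, 5, 2]
target = 9
answer = solve_subset_sum(numbers, target)                  # hard direction: search
print(answer, verify_subset_sum(numbers, target, answer))   # easy direction: check
```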

We run into this distinction all the time when developing software applications and dealing with ever larger sets of data. For example, I attended a meeting recently where a major concern among the audience was the question of scalability, especially in dealing with large sets of data. In the past, “scalability” to the software publisher simply meant the ability of the application to be used by a large set of users via some form of distributed processing (client-server, shared services, desktop virtualization, or a browser-based deployment). But with the introduction of KDD (knowledge discovery in databases), scalability now also addresses the ability of technologies to derive significance from the data itself, outside the confines of a hard-coded application.

The search for optimal polynomial-time algorithms to reduce the running time of time-intensive problems forces the developer to find the solution (the proof of NP-completeness) in advance and then work toward the middle in developing the appropriate algorithm. This should not be a surprise. In breaking Enigma during World War II, Bletchley Park first identified regularities in the messages that the German high command was sending out. This then allowed them to work backwards and forwards in calculating how the encryption could be broken. The same applies to any set of mundane data, regardless of size, which, unlike Enigma traffic, is not trying to avoid being deciphered. While we may be faced with a Repository of Babel, it is one that badly wants to be understood.
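As a hedged illustration of working from regularities (a generic frequency tally, not Bletchley Park’s actual methods and not anything from the article), even the simplest counting surfaces structure that an analyst can then work backwards from.

```python
from collections import Counter

# Any mundane text will do; the point is that ordinary data is full of
# regularities that reveal themselves to even the simplest tallying.
message = (
    "the weather report for the eastern sector repeats the same header "
    "every morning and the same sign-off every evening"
)

words = message.split()
word_counts = Counter(words)
bigram_counts = Counter(zip(words, words[1:]))

print("most common words:  ", word_counts.most_common(3))
print("most common bigrams:", bigram_counts.most_common(3))
# Recurring headers, stock phrases, and routine sign-offs are exactly the
# kind of regularity that gives an analyst a foothold to work backwards from.
```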

While intuitively the Nash equilibrium does exist, its mathematically intractable character has demanded that new languages and approaches to solving it be developed. Daskalakis has proposed three routes:

- “One is to say, we know that there exist games that are hard, but maybe most of them are not hard. In that case you can seek to identify classes of games that are easy, that are tractable.”
- Find mathematical models other than Nash equilibria to characterize markets — “models that describe transition states on the way to equilibrium, for example, or other types of equilibria that aren’t so hard to calculate.”
- Approximation of the Nash equilibrium, “where the players’ strategies are almost the best responses to their opponents’ strategies. In those cases, the approximate equilibrium could turn out to describe the behavior of real-world systems.”

This is the basic engineering approach to any complex problem (and a familiar approach to anyone schooled in project management): break the system down into smaller pieces to solve.
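To make the third route concrete, here is a hedged sketch of what “approximate” means in this context (my own toy example, not Daskalakis’s work): for a pair of mixed strategies in a two-player game, the “epsilon” is the most either player could gain by switching to a best response. An exact Nash equilibrium has an epsilon of zero; an approximate equilibrium merely keeps it small.

```python
# Two-player game in matrix form: A[i][j] is player 1's payoff and B[i][j] is
# player 2's when player 1 plays row i and player 2 plays column j.
# (Illustrative numbers only.)
A = [[3, 0],
     [5, 1]]
B = [[3, 5],
     [0, 1]]

def expected_payoffs(x, y):
    """Expected payoffs under mixed strategies x (over rows) and y (over columns)."""
    u1 = sum(x[i] * y[j] * A[i][j] for i in range(2) for j in range(2))
    u2 = sum(x[i] * y[j] * B[i][j] for i in range(2) for j in range(2))
    return u1, u2

def epsilon(x, y):
    """Largest gain either player could get by deviating to a best response."""
    u1, u2 = expected_payoffs(x, y)
    best1 = max(sum(y[j] * A[i][j] for j in range(2)) for i in range(2))
    best2 = max(sum(x[i] * B[i][j] for i in range(2)) for j in range(2))
    return max(best1 - u1, best2 - u2)

print(epsilon([0.0, 1.0], [0.0, 1.0]))   # 0.0  -> exact equilibrium (row 2, column 2)
print(epsilon([0.1, 0.9], [0.1, 0.9]))   # small -> an approximate equilibrium
print(epsilon([1.0, 0.0], [1.0, 0.0]))   # 2.0  -> far from equilibrium
```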

So what does all of this mean for the discipline of project management? In modeling complex systems behavior for predictive purposes, our approach must correspondingly break down the elements of systems behavior into their constituent parts, but then integrate them in such a way as to derive significance. The key to this lies in the availability of data and our ability to process it using methods that go beyond trending data for individual variables.
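As one hedged example of what “beyond trending individual variables” might look like in practice (invented numbers, not a prescribed method), correlating project variables against one another surfaces relationships that separate single-variable trend lines would miss:

```python
from itertools import combinations
from math import sqrt

# Toy monthly project data (invented for illustration): each series alone
# shows a mild trend, but the relationships between them carry the signal.
series = {
    "staffing_level":   [10, 12, 12, 14, 15, 15],
    "tasks_completed":  [20, 26, 25, 30, 33, 32],
    "defects_reported": [ 5,  4,  6,  7,  9, 10],
}

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Looking at each variable in isolation only gives a trend line; looking at
# the variables together shows which ones actually move with one another.
for (name_a, a), (name_b, b) in combinations(series.items(), 2):
    print(f"{name_a} vs {name_b}: {pearson(a, b):+.2f}")
```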