Figure 2: Three other two-player games.
[Payoff matrix over strategies "chicken" and "dare"; entries not recovered.]
In the chicken game above, there are two Nash equilibria, in which one
player chooses "chicken" and the other player "dare." There is also a mixed
equilibrium, in which each player randomizes so that the opponent's two
actions yield the same expected payoff.
[Payoff matrix over strategies "cooperate" and "defect"; entries not recovered.]
In the prisoner's dilemma game above, there is just one Nash equilibrium,
in which both players defect. This is despite the fact that each player does
better when they both cooperate.
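The equilibrium reasoning above can be checked by brute force: a profile is a pure Nash equilibrium exactly when neither player gains by deviating unilaterally. A minimal sketch, using illustrative prisoner's dilemma payoffs (not necessarily the figure's exact numbers) and a hypothetical helper `pure_nash_equilibria`:

```python
from itertools import product

def pure_nash_equilibria(payoffs, strategies):
    """Enumerate all strategy profiles and keep those from which
    neither player can profitably deviate on her own."""
    equilibria = []
    for r, c in product(strategies, strategies):
        u_row, u_col = payoffs[(r, c)]
        row_ok = all(payoffs[(r2, c)][0] <= u_row for r2 in strategies)
        col_ok = all(payoffs[(r, c2)][1] <= u_col for c2 in strategies)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

# Illustrative payoffs: defect strictly dominates cooperate for both
# players, yet mutual cooperation pays each player more than (1, 1).
pd = {("cooperate", "cooperate"): (3, 3), ("cooperate", "defect"): (0, 4),
      ("defect", "cooperate"): (4, 0), ("defect", "defect"): (1, 1)}
print(pure_nash_equilibria(pd, ["cooperate", "defect"]))
# [('defect', 'defect')]
```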
             kick left   kick right
dive left    (1, -1)     (-1, 1)
dive right   (-1, 1)     (1, -1)
In the penalty shot game above, there is just one Nash equilibrium,
which is mixed: both the goalkeeper and the penalty kicker choose
left or right at random.
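The indifference condition defining these mixed equilibria can be computed directly for a 2x2 game: pick the row player's mixing probability so that the column player's two actions give equal expected payoff. A minimal sketch for the penalty shot game; the function name `indifference_mix` is an illustrative choice, not from the article:

```python
def indifference_mix(col_payoffs):
    """Probability p with which the row player plays her first action so
    the column player is indifferent: p*a + (1-p)*c == p*b + (1-p)*d."""
    (a, b), (c, d) = col_payoffs  # column player's payoffs, row by row
    return (d - c) / (a - c - b + d)

# Kicker's (column player's) payoffs from the penalty shot matrix:
# rows are dive left / dive right, columns kick left / kick right.
kicker = [(-1, 1), (1, -1)]
print(indifference_mix(kicker))  # 0.5: the goalkeeper dives left half the time
```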
subsequent refinements and competing equilibrium concepts were judged.
Universality is a desirable attribute for an equilibrium
concept. Of course, such a concept must also be natural and
credible as a prediction of behavior by a group of agents—
for example, pure Nash seems preferable to mixed Nash, in
games that do have a pure Nash equilibrium. But there is a
third important desideratum on equilibrium concepts, of a
computational nature: An equilibrium concept should be efficiently computable if it is to be taken seriously as a prediction
of what a group of agents will do, because if computing a
particular kind of equilibrium is an intractable problem, of
the kind that takes lifetimes of the universe to solve on the
world's fastest computers, it is ludicrous to expect that it can
be arrived at in real life. This consideration suggests the following important question: Is there an efficient algorithm for
computing a mixed Nash equilibrium? In this article, we report
on results that indicate that the answer is negative: our own
papers5, 7, 8, 13 obtained this for games with three or more players, and shortly afterwards the papers2, 3 extended this,
unexpectedly, to the important case of two-player games.
Ever since Nash’s paper was published in 1950, many
researchers have sought algorithms for finding mixed Nash
equilibria—that is, for solving the computational problem
which we will call Nash in this article. If a game is zero-sum,
like the rock–paper–scissors game, then it follows from the
work of John von Neumann in the 1920s that Nash can be
formulated in terms of linear programming (a subject identified by George Dantzig in the 1940s); linear programs can
be solved efficiently (even though we only realized this in the
1970s). But what about games that are not zero-sum? Several
algorithms have been proposed over the past half century,
but all of them are either of unknown complexity, or known
to require, in the worst case, exponential time.

Communications of the ACM, February 2009, Vol. 52, No. 2, p. 90
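The linear-programming formulation for the zero-sum case can be sketched concretely: the row player maximizes a value v subject to every column response yielding her at least v. A minimal sketch using `scipy.optimize.linprog` (the function name `zero_sum_equilibrium` is an illustrative choice):

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_equilibrium(A):
    """Max-min mixed strategy for the row player of the zero-sum game
    with payoff matrix A: maximize v s.t. x^T A >= v in every column."""
    m, n = A.shape
    # Variables are x_1..x_m and v; linprog minimizes, so minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # One constraint per column j:  v - sum_i x_i * A[i, j] <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))        # probabilities sum to 1
    A_eq[0, -1] = 0.0                 # v not part of the distribution
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # v may be negative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:m], res.x[-1]

# Rock-paper-scissors: the optimal mix is uniform and the value is 0.
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
x, v = zero_sum_equilibrium(rps)
```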
During the same decades that these concepts were being
explored by game theorists, Computer Science theorists
were busy developing, independently, a theory of algorithms and complexity addressing precisely the kind of
problem raised in the last two paragraphs: Given a computational problem, can it be solved by an efficient algorithm?
For many common computational tasks (such as finding a
solution of a set of linear equations) there is a polynomial-time algorithm that solves them—this class of problems is
called P. For other such problems, such as finding a truth
assignment that satisfies a set of Boolean clauses (a problem known as SAT), or the traveling salesman problem, no
such algorithm could be found despite many attempts. Many
of these problems can be proved NP-complete, meaning
they cannot be solved efficiently unless P = NP, an event
considered very unlikely.
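The asymmetry that defines NP can be seen concretely in SAT: a candidate solution is checked in time linear in the size of the formula, even though finding one may take exponential time. A minimal sketch, representing a clause as a list of signed variable indices (an illustrative encoding, not from the article):

```python
def satisfies(clauses, assignment):
    """Check a truth assignment against a CNF formula in linear time.
    A literal k > 0 means variable k; k < 0 means its negation."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

# (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
print(satisfies(cnf, {1: True, 2: False, 3: True}))    # True
print(satisfies(cnf, {1: False, 2: False, 3: False}))  # False
```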
From the previous discussion of failed attempts to
develop an efficient algorithm for Nash, one might be
tempted to suppose that this problem too is NP-complete.
But the situation is not that simple. Nash is unlike any
NP-complete problem because, by Nash's theorem, it is guaranteed to always have a solution. In contrast, NP-complete
problems like SAT draw their intractability from the possibility that a solution might not exist; this possibility is used
heavily in the NP-completeness proof.b See Figure 3 for an
argument (due to Nimrod Megiddo) for why it is very unlikely
that NP-completeness can characterize the complexity of
Nash. (Note however that if one seeks a Nash equilibrium
with additional properties, such as the one that maximizes
the sum of player utilities, or one that uses a given strategy
with positive probability, then the problem does become
NP-complete.)
Since NP-completeness is not an option, to understand
the complexity of Nash one must essentially start all over in
Figure 3: Megiddo's proof that NASH is unlikely to be NP-complete.
Suppose we have a reduction from SAT to NASH, that is, an efficient algorithm that takes as input an instance of SAT and outputs an instance of NASH,
so that any solution to the instance of NASH tells us whether or not the SAT
instance has a solution. Then we could turn this into a nondeterministic
algorithm for verifying that an instance of SAT has no solution: Just guess a
solution of the NASH instance, and check that it indeed implies that the SAT
instance has no solution.
The existence of such a nondeterministic algorithm for SAT (one that can
verify that an unsatisfiable formula is indeed unsatisfiable) is an eventuality
that is considered by complexity theorists almost as unlikely as P = NP.
We conclude that NASH is very unlikely to be NP-complete.
“But what about the traveling salesman problem?” one might ask.
“Doesn't it always have a solution?” To compare the traveling salesman problem fairly with SAT and NASH, one has to first transform it into a search
problem of the form “Given a distance matrix and a budget B, find a tour that
is cheaper than B, or report that none exists.” Notice that an instance of this
problem may or may not have a solution. But an efficient algorithm for this
problem could be used to find an optimal tour.
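The last claim, that a solver for the budget version yields an optimal tour, follows by binary search over budgets. A minimal sketch, with a brute-force `tour_cheaper_than` oracle standing in for the assumed efficient algorithm (integer distances assumed; all names here are illustrative):

```python
from itertools import permutations

def tour_cheaper_than(D, B):
    """Stand-in oracle: return a tour of cost < B, or None.
    (Brute force here; the argument assumes an efficient version.)"""
    n = len(D)
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        cost = sum(D[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < B:
            return tour
    return None

def optimal_tour(D, oracle):
    """Binary-search the optimal cost using the budget oracle.
    Invariant: some tour costs < hi, and no tour costs < lo."""
    lo, hi = 0, sum(sum(row) for row in D) + 1
    best = oracle(D, hi)          # every tour costs less than hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        tour = oracle(D, mid)
        if tour is None:
            lo = mid              # no tour is cheaper than mid
        else:
            best, hi = tour, mid  # found a cheaper tour; tighten hi
    return best                   # cost hi - 1, which is optimal

D = [[0, 2, 9], [2, 0, 4], [9, 4, 0]]
tour = optimal_tour(D, tour_cheaper_than)
```

The number of oracle calls is logarithmic in the total distance, so the whole reduction stays efficient whenever the oracle is.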