The Complexity of Computing Nash Equilibrium
By Ehud Kalai
Computer science and game theory
go back to the same individual, John
von Neumann, and both subjects deal
with the mathematization of rational
decision making. Yet for many years the two fields developed apart: computer science concentrated mostly on issues of complexity, game theory mostly on issues of incentives.
But in the last dozen years we have
seen a fusion of the two fields. Methodologies of each are used to solve
problems in the other, and the concepts of each are incorporated to create better and more robust models in both.
Nash equilibrium, introduced in
the 1950s, is the main concept used
in the analysis of strategic games. The
equilibrium is simple to describe, logically appealing, powerful in making
predictions, and its existence was brilliantly demonstrated by John Nash.
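To make the best-response condition concrete, here is a minimal sketch (in Python; the example is mine, not the column's) that finds the pure-strategy Nash equilibria of a small two-player game by direct checking. The coordination-game payoffs are illustrative, and note that Nash's existence theorem requires mixed strategies in general, which this brute-force check does not cover.

```python
def pure_nash_equilibria(A, B):
    """Return all cells (i, j) at which neither player gains by deviating alone."""
    m, n = len(A), len(A[0])
    equilibria = []
    for i in range(m):
        for j in range(n):
            row_best = all(A[i][j] >= A[k][j] for k in range(m))  # row cannot improve
            col_best = all(B[i][j] >= B[i][l] for l in range(n))  # column cannot improve
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# A hypothetical coordination game: both players want to match,
# and both matched profiles are equilibria.
A = [[2, 0], [0, 1]]  # row player's payoffs
B = [[2, 0], [0, 1]]  # column player's payoffs
print(pure_nash_equilibria(A, B))  # -> [(0, 0), (1, 1)]
```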
Despite its importance, research into the feasibility of computing Nash equilibrium for games with many strategies began only in the late 1980s
with studies by Gilboa and Zemel.
This question was then taken up in full force
by Papadimitriou and coauthors in a
series of breakthrough papers. The
research presented here is central to
this literature, and (together with a
follow-up paper by Chen and Deng
for the case n = 2) it offers a complete negative result: computing a Nash equilibrium of an arbitrary n-person non-cooperative game with many individual strategies is PPAD-complete, that is, at least as difficult as any problem in the class PPAD, and such problems are widely believed to be too difficult for practical computation.
The message to users of Nash equilibrium may be devastating, as they
ask: “If my computer cannot compute
it, how can players in the market do
it?” The negative results the authors report here apply even to algorithms in which the players can be centrally coordinated; in the situations actually being modeled, the problem is harder still, since a solution must be found by a dispersed, uncoordinated set of players.
But as is often the case with impossibility results, this one leads to a large
variety of follow-up questions dealing
with the assumptions, the methodology, and the relevance.
The standard worst-case approach employed by the authors requires that, for every proposed algorithm, one consider the most difficult n-person game to compute and study the computation time as the number of individual strategies becomes arbitrarily large.
Critics of this approach argue that a game designed to defeat an algorithm is not likely to be natural in applications. As a result, we have seen alternative approaches to, and conclusions about, the question of computing a Nash equilibrium in games with many strategies.
First, motivated by economic, political, and other applications, there
have been studies of the computation of Nash equilibria for restricted classes of games: games with anonymous
opponents, games with graphical
structures, games with continuous
and/or convex payoff functions, and
more. As it turns out, in many of these
games researchers are able to establish positive results.
Second, there are positive results on computing, with high probability, an equilibrium of a randomly generated game with many strategies. This approach, however, must assume a
probability distribution by which
games are generated, and the results
may depend on the distribution.
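As a toy illustration of this random-games approach, the following sketch (mine, with i.i.d. uniform payoffs as the assumed generating distribution) estimates how often a randomly drawn two-player game has a pure-strategy equilibrium. For i.i.d. payoffs this fraction is known to approach 1 - 1/e, roughly 0.63, as the game grows, which shows concretely how the conclusion hinges on the distribution.

```python
import random

def has_pure_nash(A, B):
    """True if some cell is a mutual best response."""
    m, n = len(A), len(A[0])
    return any(
        all(A[i][j] >= A[k][j] for k in range(m)) and
        all(B[i][j] >= B[i][l] for l in range(n))
        for i in range(m) for j in range(n)
    )

def estimate(size, trials=2000):
    """Fraction of random size-by-size games with a pure equilibrium."""
    hits = 0
    for _ in range(trials):
        A = [[random.random() for _ in range(size)] for _ in range(size)]
        B = [[random.random() for _ in range(size)] for _ in range(size)]
        hits += has_pure_nash(A, B)
    return hits / trials

# The uniform payoff distribution here is an assumption; for i.i.d.
# payoffs the fraction approaches 1 - 1/e (about 0.63) as size grows.
for size in (2, 5, 10):
    print(size, round(estimate(size), 3))
```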
Finally, a long-established approach is to ignore the asymptotic nature of the question and simply find algorithms that work well in practice (much as the simplex algorithm for linear programming works well despite its worst-case inefficiency).
Recent research leads to algorithms
that are efficient in computing equilibria of rich classes of test games.
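One classical example of such a procedure (an illustrative sketch, not one of the algorithms alluded to above) is fictitious play, in which each player repeatedly best-responds to the opponent's empirical mixture of past actions; it is known to converge to equilibrium in two-player zero-sum games, though not in general games.

```python
def fictitious_play(A, rounds=100000):
    """A is the row player's payoff matrix in a zero-sum game
    (the column player receives -A)."""
    m, n = len(A), len(A[0])
    row_counts = [0] * m  # empirical play counts for the row player
    col_counts = [0] * n
    i, j = 0, 0           # arbitrary opening moves
    for _ in range(rounds):
        row_counts[i] += 1
        col_counts[j] += 1
        # each side best-responds to the opponent's empirical mixture so far
        i = max(range(m), key=lambda r: sum(A[r][c] * col_counts[c] for c in range(n)))
        j = min(range(n), key=lambda c: sum(A[r][c] * row_counts[r] for r in range(m)))
    total = sum(row_counts)
    return ([x / total for x in row_counts],
            [y / total for y in col_counts])

# Matching pennies: the unique equilibrium mixes 50/50, and the empirical
# frequencies of play converge (slowly) to it.
A = [[1, -1], [-1, 1]]
print(fictitious_play(A))
```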
Ehud Kalai is the James J. O’Connor Professor of Decision and Game Sciences and a professor of mathematics in the Kellogg School of Management at Northwestern University, Evanston, IL. He is also the director of the Center for Game Theory and