have “strategies,” which have minimal
structure, and motivations, which are
encapsulated in a simple real-valued
utility function. (This in fact carries
even less information than is suggested
by the use of numbers, as the theory is
unchanged by any positive affine transformation of the numbers.) In real life,
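This invariance is easy to check concretely. The sketch below is illustrative code, not from the article; the game and the numbers are made up. It verifies that a player's best responses are unchanged when every utility u is replaced by a·u + b with a > 0:

```python
# Illustrative sketch: best responses are invariant under positive
# affine transformations of utility (u' = a*u + b, with a > 0).

def best_responses(payoffs):
    """For each opponent action (column), return the set of row
    actions that maximize the row player's payoff."""
    rows, cols = len(payoffs), len(payoffs[0])
    return [
        {r for r in range(rows)
         if payoffs[r][c] == max(payoffs[i][c] for i in range(rows))}
        for c in range(cols)
    ]

# Row player's payoffs in a Prisoner's Dilemma (rows: cooperate, defect).
u = [[3, 0],
     [5, 1]]

a, b = 2.5, -7.0  # any a > 0 and any b will do
u_prime = [[a * x + b for x in row] for row in u]

# The transformed game induces exactly the same choices.
assert best_responses(u) == best_responses(u_prime)
```

The same argument extends to expected utility over mixed strategies, since expectation is linear; this is why the theory "carries even less information than is suggested by the use of numbers."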
and in computer programs attempting
to behave intelligently, we find use for a
much broader vocabulary. For example,
agents are able to take certain actions
and not others; have desires, goals, and
intentions (the belief-desire-intention
combination giving rise to the pun
“beady-eye agent architecture”); and
make plans. Apparently these abstract
notions are useful both in effecting
intelligent behavior and in reasoning
about it. Philosophers have written
about them (for example, Bratman1)
and there have been attempts—albeit
preliminary ones—to formalize these
intuitions (starting with Cohen and
Levesque3). Some in AI have advocated
embracing an even broader vocabulary
of emotions (such as the recent provocative, albeit informal, book by Minsky23). Is game theory missing out by not considering these concepts?
Concluding Remarks
Science operates at many levels. For some, it is sufficient that scientific theories be clever, beautiful, and inspirational. Others require that any science eventually make contact with compelling applications and be subjected to empirical evaluation. Without personally weighing in on this emotional debate, I note that in his 2004 presidential address at the Second World Congress of the Game Theory Society,17 Kalai reprised the three stages of any science as discussed by von Neumann and Morgenstern:

[W]hat is important is the gradual development of a theory, based on a careful analysis of the ordinary everyday interpretation of economic facts. The theory finally developed must be mathematically rigorous and conceptually general. Its first applications are necessarily to elementary problems where the result has never been in doubt and no theory is actually required. At this early stage the application serves to corroborate the theory. The next stage develops when the theory is applied to somewhat more complicated situations in which it may already lead to a certain extent beyond the obvious and the familiar. Here theory and application corroborate each other mutually. Beyond this lies the field of real success: genuine predictions by theory. It is well known that all mathematized sciences have gone through these successive phases of evolution.40

So at least von Neumann, the father of modern-day game theory and computer science, attached importance to spanning the spectrum from the theoretical to the applied. Pragmatics may be critical to achieving von Neumann and Morgenstern’s third stage, and it could propel a joint endeavor between computer science and game theory.

Acknowledgments
I thank Alon Altman, Joe Halpern, Sam Ieong, Daphne Koller, Tim Roughgarden, Moshe Vardi, Mike Wellman, and anonymous referees for their useful help and suggestions. Of course, all errors, opinions, and erroneous opinions are mine alone.
References
1. Bratman, M.E. Intention, Plans, and Practical Reason.
CSLI Publications, Stanford University, 1987.
2. Chen, X. and Deng, X. Settling the complexity of
2-player Nash equilibrium. FOCS, 2006.
3. Cohen, P.R. and Levesque, H.J. Intention is choice with
commitment. Artificial Intelligence 42, 2–3 (1990),
213–261.
4. Conitzer, V. and Sandholm, T. Complexity results about
Nash equilibria. IJCAI, 2003, 761–771.
5. Conitzer, V. and Sandholm, T. Computing Shapley
values, manipulating value division schemes, and
checking core membership in multi-issue domains.
AAAI, 2004.
6. Cramton, P. C., Shoham, Y. and Steinberg, R. (eds).
Combinatorial Auctions. MIT Press, 2006.
7. Deng, X. and Papadimitriou, C. H. On the complexity
of cooperative solution concepts. Mathematics of
Operations Research 19, 257, 1994.
8. Fagin, R., Halpern, J. Y., Moses, Y. and Vardi, M. Y.
Reasoning about Knowledge. MIT Press, Cambridge,
MA, 1995.
9. Niemela, I., Brewka, G., and Truszczynski, M.
Nonmonotonic reasoning. In Handbook of Knowledge
Representation. F. van Harmelen, V. Lifschitz, and B.
Porter (eds.). Elsevier, 2007.
10. Gilboa, I. and Zemel, E. Nash and correlated equilibria:
Some complexity considerations. Games and
Economic Behavior, 80–93, 1989.
11. Govindan, S. and Wilson, R. A global Newton method
to compute Nash equilibria. Journal of Economic
Theory, 2003.
12. Greenwald, A. and Littman, M. L. (eds.). Special issue
on learning and computational game theory. Machine
Learning 67, 1–2, 2007.
13. Halpern, J. Y. A computer scientist looks at game
theory. Games and Economic Behavior 45, 1 (2003),
114–132.
14. Halpern, J. Y. Computer science and game theory:
A brief survey. In The new Palgrave Dictionary
of Economics. S.N. Durlauf and L.E. Blume (eds.),
Palgrave MacMillan, 2008.
15. Ieong, S. and Shoham, Y. Marginal Contribution Nets:
A compact representation scheme for coalitional
games. In Proceedings of the 6th ACM Conference on
Electronic Commerce, 2005.
16. Kalai, E. Games, computers, and O.R. In ACM/SIAM
Symposium on Discrete Algorithms, 1995.
17. Kalai, E. Presidential address. The Second World
Congress of the Game Theory Society,
Marseille, 2004. Games and Economic Behavior,
P. Reny (ed.), in press.
18. Kearns, M. J., Littman, M. L., and Singh, S.P. Graphical
models for game theory. UAI, 2001.
19. Koller, D. and Milch, B. Multiagent influence diagrams
for representing and solving games. IJCAI, 2001.
20. LaMura, P. Game networks. UAI, 2000.
21. Leyton-Brown, K. and Tennenholtz, M. Local-effect
games. IJCAI, 2003.
22. Linial, N. Game theoretic aspects of computing. In
Handbook of Game Theory, R. J. Aumann and S. Hart
(eds.), 1339–1395, Elsevier Science, 1994.
23. Minsky, M. The Emotion Machine: Commonsense
Thinking, Artificial Intelligence, and the Future of the
Human Mind. Simon and Schuster, New York, 2007.
24. Neyman, A. Bounded complexity justifies cooperation
in finitely repeated prisoner’s dilemma. Economics
Letters, 1985, 227–229.
25. Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V.
(eds). Algorithmic Game Theory. Cambridge University
Press, 2007.
26. Nisan, N. and Ronen, A. Algorithmic mechanism
design. In Proceedings of the 31st ACM Symposium on
Theory of Computing, 1999, 129–140.
27. Papadimitriou, C.H. On the complexity of the parity
argument and other inefficient proofs of existence.
Journal of Computer and System Sciences 48, 3
(1994), 498–532.
28. Papadimitriou, C.H. and Yannakakis, M. On bounded
rationality and computational complexity. In
Proceedings of the Symposium on the Theory of
Computing, 1994, 726–733.
29. Papadimitriou, C.H. Algorithms, games, and the
Internet. Lecture Notes in Computer Science 2076,
2001.
30. Peppas, P. Belief revision. In Handbook of Knowledge
Representation. F. van Harmelen, V. Lifschitz, and B.
Porter (eds.). Elsevier, 2007.
31. Porter, R., Nudelman, E., and Shoham, Y. Simple
search methods for finding a Nash equilibrium. In
Proceedings of the National Conference on Artificial
Intelligence, 2004, 664–669.
32. Roughgarden, T. Computing equilibria: A
computational complexity perspective. Economic
Theory, 2008.
33. Rubinstein, A. Modeling Bounded Rationality. MIT
Press, Cambridge, MA, 1998.
34. Savage, L. J. The Foundations of Statistics. John Wiley
and Sons, NY, 1954. (Second Edition: Dover Press,
1972).
35. Shoham, Y. and Leyton-Brown, K. Multiagent Systems:
Algorithmic, Game Theoretic, and Logical Foundations.
Cambridge University Press, 2008.
36. Tennenholtz, M. Program equilibrium. Games and
Economic Behavior 49, 2004, 363–373.
37. van Benthem, J. Exploring Logical Dynamics. Center
for the Study of Language and Information, Stanford,
CA, 1997.
38. van Benthem, J. When are two games the same? In
LOFT-III, 1998 (ILLC preprint, 1999).
39. Vohra, R. and Wellman, M. P. (eds.). Special issue
on foundations of multiagent learning. Artificial
Intelligence 171, 1 (2007).
40. von Neumann, J. and Morgenstern, O. Theory of
Games and Economic Behavior, Second Edition.
Princeton University Press, 1947.
41. von Stengel, B. Computing equilibria for two-person
games. In Handbook of Game Theory, vol. III,
chap. 45. R. Aumann and S. Hart (eds.). Elsevier,
Amsterdam, 2002, 1723–1759.
42. Wellman, M.P., Greenwald, A., and Stone, P.
Autonomous Bidding Agents: Strategies and Lessons
from the Trading Agent Competition. MIT Press, 2007.
43. Zinkevich, M., Bowling, M., and Burch, N. A new
algorithm for generating equilibria in massive zero-sum
games. In Proceedings of the 22nd Conference on
AI, 2007, 788–793.
This work has been supported by NSF grant TR–0205633.
Yoav Shoham ( http://cs.stanford.edu/~shoham) is a
professor of computer science at Stanford University,
Stanford, CA.