also cooperates (top-left cell, second
number) versus $35 if he defects (
top-right cell, second number).
Similarly if Cain believes Abel will
defect (i.e., choose the bottom row),
then Cain gets $15 by cooperating
versus $20 by defecting. Thus no
matter what Abel does, Cain is better
off defecting! Similarly, Abel prefers
to defect no matter what Cain does,
and so rational players both defect.
This is what’s known as a Nash
equilibrium: it’s a pair of strategies,
one for each player, such that neither
player can improve his or her payoff
by doing something else. So for our
game, it is an equilibrium for Cain
and Abel to both defect, and hence
that’s the behavior we expect to see.
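The best-response reasoning above can be checked mechanically. Here is a minimal sketch in Python, assuming the dollar payoffs described in the text ($30 each for mutual cooperation, $35 versus $15 when exactly one player defects, $20 each for mutual defection); the function and variable names are illustrative, not from the original:

```python
# Hypothetical payoff table from the text: entries are (Cain, Abel) in dollars.
# "C" = cooperate, "D" = defect.
payoffs = {
    ("C", "C"): (30, 30),
    ("C", "D"): (15, 35),
    ("D", "C"): (35, 15),
    ("D", "D"): (20, 20),
}

def best_response_for_cain(abel_action):
    """Return the action maximizing Cain's payoff, given Abel's action."""
    return max("CD", key=lambda a: payoffs[(a, abel_action)][0])

# Defecting is Cain's best response to *every* action of Abel,
# which is exactly what makes it a dominant strategy.
assert all(best_response_for_cain(b) == "D" for b in "CD")
```

By symmetry of the table, the same check holds for Abel, so (defect, defect) is both a Nash equilibrium and a dominant-strategy equilibrium.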
(In fact, it’s also a dominant-strategy
equilibrium, i.e., each player’s
strategy is a best response to all
strategies of the opponent, not just to
the opponent’s Nash equilibrium strategy.)
If you doubt this outcome, try to get
two friends to agree to the following
experiment: have the first friend (the
one with deeper pockets; we’ll call
him/her the banker) give you and the
second friend two envelopes and two
$20 bills. You and the second
friend will each take an envelope and
a $20 bill and go to separate
rooms. There you will each either put
the $20 bill in the envelope or keep it for
yourself. At night, the banker should
collect the envelopes, take out the
money, increase the total by 50 percent,
and then redistribute the money
evenly between the two envelopes. Thus
if both you and your friend put $20
in the envelope (this corresponds to
“cooperating” in the game discussed
above), the banker will collect $40 at
night, increase this amount to $60,
and then you’ll each have $30 in your
envelopes in the morning.
On the other hand, if neither of you
puts money in the envelope (which
corresponds to “defecting”), you’ll each
end up with just $20 in the morning.
If only one of you puts money in
the envelope, say your friend, then
the banker will collect $20 at night,
increase it to $30, and put $15 in each
envelope. Thus your friend wakes up
with only $15 whereas you get $35!
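The banker’s bookkeeping for all four outcomes can be written out in a few lines. This is a sketch assuming the rules just described (each player starts with a $20 bill; the banker grows the pot by 50 percent and splits it evenly); the function name is illustrative:

```python
def morning_payoffs(you_put_in, friend_puts_in):
    """Each player either envelopes their $20 bill or keeps it.
    The banker adds 50% to the collected pot and splits it evenly."""
    pot = 20 * you_put_in + 20 * friend_puts_in
    share = pot * 1.5 / 2          # half of the grown pot goes to each envelope
    you = share + (0 if you_put_in else 20)        # kept bills stay with you
    friend = share + (0 if friend_puts_in else 20)
    return you, friend

# The four outcomes described in the text:
assert morning_payoffs(True, True) == (30.0, 30.0)    # both cooperate
assert morning_payoffs(False, False) == (20.0, 20.0)  # both defect
assert morning_payoffs(False, True) == (35.0, 15.0)   # only your friend pays in
```

Reading off the rows: whatever your friend does, keeping your bill earns you $5 more than enveloping it, which is the dominant-strategy logic from the Cain and Abel game.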
Similar to the game here, no matter
what your friend does, you get the most
money by keeping it all to begin with.
If you’re rational, you will defect. (This
game works well when you play it with
someone you don’t like all that much,
or better, if you don’t know him/her
at all and expect never to see him/her
again; otherwise you may rationally
cooperate expecting him/her to spend
his/her money buying you coffee.)
We have admittedly abstracted
our setting quite far from the original
story of bank robbery, but, and this is
the art of modeling, the essence of the
issue is still there: it is very tempting to
rob a bank or, more generally, cheat in
life. And this issue is very important.
It appears throughout our lives,
from personal dilemmas to political
quandaries. In fact, a general form
of this game, called the Prisoner’s
Dilemma, was introduced during the
Cold War to study arms races between
the United States and the Soviet Union,
and has been extensively studied
ever since by economists, social
scientists, and politicians. And the
results are by and large upheld: small
perturbations of the model do not
significantly affect our dire prediction.
Rational people should rob banks.
CONFLICTS BETWEEN THE MODEL
Rational people don’t rob banks. This
conflict between theory and practice
has puzzled researchers for decades.
If people don’t rob banks, there must
be some essential element of interaction that our simple model fails to capture. What is it? Is there some model in
which rational people don’t rob banks,
or must we abandon our assumption of
rationality? If you want to get a feel for
what it’s like to be a researcher in this
area, stop for a moment and compile a
list of things that bother you about our
simple model. Contemplate whether
addressing these concerns would intuitively avert our dire prediction,
and how these concerns might be incorporated mathematically. Question
whether the components you’ve now
incorporated are themselves realistic,
or whether the assumptions you’ve
made are so strong that they essentially assume what you want to derive.
Now play with your new mathematical
model and try to see what it predicts.
This is computer science and economics research, and this is what my colleagues and I do on a daily basis.
I now want to tell you about some answers to this paradox from the literature.
One clear flaw in our story with Cain
and Abel is that these games aren’t
played once. They are played many
times throughout our lives, and often
multiple times with the same partner.
In other words, the game is repeated
and this repetition causes people to
cooperate today so that their partners
will continue to cooperate in the future. Furthermore, these games aren’t
played with just one other partner, but
with an evolving set of partners, and
again this causes people to cooperate
so that others will want to play with
them. I will explain mathematically
how each of these intuitions solves the
paradox in our game.
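The repetition intuition can be made quantitative. As a sketch, suppose each round’s payoffs are the $30/$35/$20/$15 values from the envelope game above, that after each round the game continues with probability delta, and that your partner plays a hypothetical “grim trigger” strategy: cooperate until you defect once, then defect forever. None of these specific choices appears in the text; they are one standard way to formalize the idea.

```python
def expected_payoff_cooperate(delta):
    """Always cooperate against a grim-trigger partner: $30 per round,
    where each further round is reached with probability delta."""
    return 30 / (1 - delta)

def expected_payoff_deviate(delta):
    """Defect immediately ($35 once), then face mutual defection
    ($20 per round) in every round that follows."""
    return 35 + delta * 20 / (1 - delta)

# With a fair coin deciding whether play continues (delta = 0.5),
# cooperating beats deviating: $60 expected versus $55.
assert expected_payoff_cooperate(0.5) > expected_payoff_deviate(0.5)

# The two are exactly equal at delta = 1/3, so cooperation is
# sustainable whenever the game is likely enough to continue.
assert abs(expected_payoff_cooperate(1/3) - expected_payoff_deviate(1/3)) < 1e-9
```

In other words, once the future matters enough, the one-time $5 gain from defecting no longer compensates for losing $10 in every subsequent round.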
Let’s consider the idea that games are
repeated. To model this, suppose that
every day Cain and Abel toss a fair coin.
If the coin comes up heads, they play
their bank-robber game. If it comes
up tails, they stop playing. This complicates matters significantly. In particular, Cain and Abel can tailor their
actions on the history of the game. If
Cain’s been nice in the past and cooperated, perhaps Abel is more likely
to cooperate. Consider a strategy in
which a player cooperates on day zero