DOI: 10.1145/2347736.2347753
Human subjects perform a computationally
wide range of tasks from only local, networked
interactions.
By Michael Kearns
Experiments in Social Computation
Since 2005, we have conducted an extensive series of
behavioral experiments at the University of Pennsylvania
on the ability of human subjects to solve challenging
global tasks in social networks from only local,
distributed interactions. In these experiments, dozens
of subjects simultaneously gather in a laboratory
of networked workstations, and are given financial
incentives to resolve “their” local piece of some
collective problem, which is specified via individual
incentives and may involve aspects of coordination,
competition, and strategy. The underlying network
structures mediating the interaction are unknown to
the subjects, and are often chosen from well-studied
stochastic models for social network formation.
The tasks examined have been drawn from a wide
variety of sources, including computer science and
complexity theory, game theory and economics, and
sociology. They include problems as diverse as graph
coloring, networked trading, and biased voting. This
article surveys these experiments and their findings.
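To make the setup concrete, the following is a minimal simulation sketch in Python of one representative task, graph coloring. It is not the experimental platform or the subjects' actual strategies; the cycle-with-random-chords generator, the myopic conflict-resolution rule, and all names are illustrative assumptions of mine. Each simulated "subject" sees only its neighbors' current colors and changes its own color only to fix a local conflict, so any proper coloring of the whole network emerges from purely local decisions.

```python
import random


def cycle_with_chords(n, extra_edges, rng):
    """Build a cycle on n nodes plus a few random chords.

    An illustrative stand-in for the stochastic network-formation
    models used to choose experiment topologies.
    """
    adj = {v: set() for v in range(n)}
    for v in range(n):
        u = (v + 1) % n
        adj[v].add(u)
        adj[u].add(v)
    added = 0
    while added < extra_edges:
        u, v = rng.sample(range(n), 2)
        if v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            added += 1
    return adj


def local_round(adj, colors, palette, rng):
    """One round: each node, in random order, sees only its neighbors'
    colors and greedily resolves any local conflict. Returns True if
    any node changed color."""
    changed = False
    for v in rng.sample(list(adj), len(adj)):
        taken = {colors[u] for u in adj[v]}
        if colors[v] in taken:                      # local conflict
            free = [c for c in palette if c not in taken]
            colors[v] = rng.choice(free) if free else rng.choice(palette)
            changed = True
    return changed


def simulate(n=36, extra_edges=10, num_colors=3, max_rounds=200, seed=0):
    rng = random.Random(seed)
    adj = cycle_with_chords(n, extra_edges, rng)
    palette = list(range(num_colors))
    colors = {v: rng.choice(palette) for v in adj}  # arbitrary initial colors
    for t in range(1, max_rounds + 1):
        if not local_round(adj, colors, palette, rng):
            return t, colors                        # proper coloring reached
    return None, colors                             # round limit hit


if __name__ == "__main__":
    rounds, _ = simulate()
    if rounds is None:
        print("no proper coloring within the round limit")
    else:
        print(f"proper coloring reached after {rounds} rounds")
```

Whether such myopic local dynamics settle quickly, slowly, or not at all depends on the network topology and the number of colors allowed, which is precisely the kind of variation in difficulty the human-subject experiments probe.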
Our experiments are inherently interdisciplinary, and draw their formulations and motivations from a number
of distinct fields. Here, I mention some
of these related areas and the questions
they have led us to focus upon.
˲ Computer science. Within computer science there is current interest
in the field’s intersection with economics (in the form of algorithmic
game theory and mechanism design22),
including on the topic of strategic interaction in networks, of which our
experiments are a behavioral instance.
Within the broader technology community, there is also rising interest in
the phenomenon of crowdsourcing,26 citizen science,18 and related areas,
which have yielded impressive “point
solutions,” but which remain poorly
understood in general. What kinds of
computational problems can populations of human subjects (perhaps aided by traditional machine resources)
solve in a distributed manner from relatively local information and interaction? Does complexity theory or some
variant of it provide any guidance? Our
experiments have deliberately examined a wide range of problems with
varying computational difficulty and
strategic properties. In particular, almost all the tasks we have examined
entail much more interdependence
between user actions than most crowdsourcing efforts to date.
key insights

˲ Groups of human subjects are able to solve challenging collective tasks that require considerably more interdependence than most fielded crowdsourcing systems exhibit.

˲ In its current form, computational complexity is a poor predictor of the outcome of our experiments. Equilibrium concepts from economics are more appropriate in some instances.

˲ The possibility of Web-scale versions of our experiments is intriguing, but they will present their own special challenges of subject recruitment, retention, and management.

˲ Behavioral economics and game theory. Many of our experiments have