an underlying game-theoretic or economic model, and all are conducted
via monetary incentives at the level of
individual subjects. They can thus be
viewed as experiments in behavioral
economics,1 but taking place in (artificial) social networks, an area of growing interest but with little prior experimental literature. In some cases we can
make detailed comparisons between
behavior and equilibrium predictions,
and find systematic (and therefore potentially rectifiable) differences, such
as networked instances of phenomena
like inequality aversion.
˲ Network science. Network science is itself an interdisciplinary and emerging area9,25 that seeks to document
“universal” structural properties of
social and other large-scale networks,
and to ask how such structures might arise and how they influence network formation and dynamics. Our experiments can be viewed as
extending this line of questioning into
a laboratory setting with human subjects, and examining the ways in which
network structure influences human
behavior, strategies, and performance.
˲ Computational social science.
While our experimental designs have
often emphasized collective problem
solving, it is an inescapable fact that
individual human subjects make up
the collective, and individual decision-making, strategies, and personalities
influence the outcomes. What are
these influences, and in what ways do
they matter? In many of our experiments there are natural and quantifiable notions of traits like stubbornness, stability, and cooperation whose
variation across subjects can be measured and correlated with collective
behavior and performance, and in turn
used to develop simple computational
models of individual behavior for predictive and explanatory purposes.
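To make this kind of analysis concrete, the following is a minimal sketch (in Python) of how a per-subject trait score might be computed and correlated with collective performance. The particular "stubbornness" metric, the data layout, and all names here are illustrative assumptions for exposition, not the project's actual analysis code.

    import numpy as np

    def stubbornness(rounds):
        # `rounds` is a list of (own_choice, neighbor_choices, changed) tuples,
        # one per decision point; `changed` is True if the subject switched.
        conflicted = [changed
                      for own, neighbors, changed in rounds
                      if neighbors and sum(n != own for n in neighbors) > len(neighbors) / 2]
        if not conflicted:
            return 0.0
        # Fraction of "conflict" rounds in which the subject did NOT switch.
        return 1.0 - sum(conflicted) / len(conflicted)

    def trait_performance_correlation(subject_rounds, session_payoff):
        # subject_rounds: subject id -> list of per-round records (as above)
        # session_payoff: subject id -> collective payoff of that subject's session
        ids = sorted(subject_rounds)
        traits = np.array([stubbornness(subject_rounds[s]) for s in ids])
        payoffs = np.array([session_payoff[s] for s in ids], dtype=float)
        return np.corrcoef(traits, payoffs)[0, 1]  # Pearson correlation

Analogous scores for stability or cooperativeness could be substituted for the stubbornness function without changing the correlation step.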
This article surveys our experiments
and results to date, emphasizing overall collective performance, behavioral
phenomena arising repeatedly across
different tasks, task- and network-specific findings that are particularly striking, and the overall methodology and
analyses employed. It is worth noting at
the outset that one of the greatest challenges posed by this line of work has
been the enormous size of the design
space: each experimental session involves the selection of a collective problem, a set of network structures, their
decomposition into local interactions
and subject incentives, and values for
many other design variables. Early on
we were faced with a choice between
breadth and depth—that is, designing
experiments to try to populate many
points in this space, or picking very specific types of problems and networks,
and examining these more deeply over
the years. Since the overarching goal
of the project has been to explore the
broad themes and questions here, and
to develop early pieces of a behavioral
science of human computation in networked settings, we have opted for
breadth, making direct comparisons
between some of our experiments difficult. Clearly much more work is needed
for a comprehensive picture to emerge.
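To give a sense of how quickly this design space grows, the sketch below enumerates the kinds of choices a single session involves as a simple Python configuration record. The field names and option lists are hypothetical illustrations, not the project's actual menu of tasks, networks, or incentive schemes.

    from dataclasses import dataclass, field
    from itertools import product

    # Hypothetical sketch of one session's design choices; all names and
    # option lists below are illustrative assumptions.
    @dataclass
    class SessionConfig:
        task: str                      # the collective problem to be solved
        network: str                   # the structure connecting subjects
        incentives: str                # how payoffs decompose into local interactions
        num_subjects: int = 36         # simultaneous subjects, as in the sessions described here
        other: dict = field(default_factory=dict)  # many further design variables

    # Even a few coarse options per dimension multiply quickly.
    tasks = ["coloring", "consensus", "trading"]
    networks = ["cycle", "small world", "preferential attachment"]
    incentive_schemes = ["individual", "shared"]
    designs = [SessionConfig(t, n, i)
               for t, n, i in product(tasks, networks, incentive_schemes)]
    print(len(designs))  # 18 combinations from just three dimensions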
In the remainder of this article, I
describe the methodology of our experiments, including the system and
its GUIs, human subject methodology,
and session design. I then summarize
our experiments to date and remark
on findings that are common to all or
most of the different tasks and highlight more specific experimental results on a task-by-task basis.
Experimental Methodology
All of the experiments discussed here
were held over a roughly six-year period, in a series of approximately two-hour sessions in the same laboratory of
workstations at the University of Pennsylvania. The experiments used an
extensive software, network, and visualization platform that we developed for this line of research and that has also been used by colleagues at other institutions. In all experiments the
number of simultaneous subjects was
approximately 36, and almost all of the
subjects were drawn from Penn undergraduates taking a survey course on
the science of social networks.12 Each
experimental session was preceded by
a training and demonstration period
in which the task, financial incentives,
and GUI were explained, and a practice
game was held. Sessions were closely
proctored to make sure subjects were
attending to their workstation and
understood the rules and GUI; under
no circumstances was advice on strategy provided. Physical partitions were
erected around workstations to ensure
subjects could only see their own GUI.