What if this time and energy were also
channeled toward solving computational problems and training AI algorithms?
People playing GWAPs [22–25] perform
basic tasks that cannot be automated. The ESP Game [22], a.k.a. the Google
Image Labeler (images.google.com/
imagelabeler/), is a GWAP in which
people provide meaningful, accurate
labels for images on the Web as a side
effect of playing the game; for example,
an image of a man and a dog is labeled
“dog,” “man,” and “pet.” The game is
fast-paced, enjoyable, and competitive; as of July 2008, 200,000 players
had contributed more than 50 million
labels; try it yourself at www.gwap.com.
These labels can be used to improve
Web-based image search, which typically involves noisy information (such
as filenames and adjacent text). Rather
than using computer-vision techniques
that do not work well enough, the ESP Game constructively channels its players to do the work of labeling images as a form of entertainment.
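The labeling mechanic can be sketched as follows. This is an illustrative reconstruction, not code from the actual system: in the ESP Game, two players independently type guesses for the same image, and a label is recorded only when both players produce it; real rounds also exclude "taboo" words already known for the image. The function name and sample guesses below are invented for the example.

```python
# Illustrative sketch of an output-agreement labeling round
# (names and data are hypothetical, not from the real ESP Game).
# Two players see the same image and type labels independently;
# a label counts only when both players produce it.

def agreed_labels(player_a_guesses, player_b_guesses, taboo=()):
    """Return labels both players typed, normalized to lowercase,
    excluding 'taboo' words already known for the image."""
    a = {g.strip().lower() for g in player_a_guesses}
    b = {g.strip().lower() for g in player_b_guesses}
    return sorted((a & b) - set(taboo))

labels = agreed_labels(
    ["Dog", "man", "grass", "pet"],
    ["dog", "PET", "leash", "man"],
    taboo=["grass"],
)
# -> ["dog", "man", "pet"]
```

Because a label must be produced independently by two strangers, agreement itself acts as a check that the label accurately describes the image.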
Other GWAPs include Peekaboom [25], which locates objects within images (and has been played for more than 500,000 human-hours); Phetch [23], which annotates images with descriptive paragraphs; and Verbosity [24], which
collects commonsense facts in order to train reasoning algorithms. In
each, people play not because they are
personally interested in solving an instance of a computational problem but
because they wish to be entertained.
The ESP Game, introduced in 2003,
and its successors represent the first
seamless integration of game play and
computation. How can this approach
be generalized? Our experience building and testing GWAPs with hundreds
of thousands of players has helped us
spell out general guidelines for GWAP
development. Here, we articulate
three GWAP game “templates” representing three general classes of games
containing all the GWAPs we’ve created to date. They can be applied to any
computational problem to construct a
game that encourages players to solve
problem instances. Each template defines the basic rules and winning conditions of a game such that it is in the players' best interest to perform the intended computation. We also describe
a set of design principles that complement the basic game templates. While
each template specifies the fundamental structure for a class of games,
the general design principles make
the games more enjoyable while improving the quality of the output produced by players. Finally, we propose a
set of metrics defining GWAP success
in terms of maximizing the utility obtained per human-hour spent playing
the game.
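The "utility per human-hour" idea can be made concrete as a simple throughput ratio. In the sketch below, the label count echoes the ESP Game statistic quoted above, but the hour total is an assumed figure for illustration only, not a measurement:

```python
# Illustrative sketch of utility obtained per human-hour of play.
# The hour total below is an assumed number, not a reported statistic.

def throughput(outputs_produced, human_hours):
    """Average number of problem instances solved per human-hour."""
    return outputs_produced / human_hours

# e.g., 50 million labels over a hypothetical 200,000 hours of play:
rate = throughput(50_000_000, 200_000)  # -> 250.0 labels per human-hour
```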
Related Work
Though previous research recognized the utility of human cycles and the motivational power of gamelike interfaces, no prior work successfully combined these
concepts into a general method for
harnessing human processing skills
through computer games.
Networked individuals accomplishing work. Some of the earliest examples
of networked individuals accomplishing work online, dating to the 1960s,
were open-source software-development projects. These efforts typically
involved contributions from hundreds,
if not thousands, of programmers
worldwide. More recent examples of
networked distributed collaboration
include Wikipedia, by some measures equal in quality to the Encyclopaedia Britannica [6].
The collaborative effort by large
numbers of networked individuals
makes it possible to accomplish tasks
that would be much more difficult,
time consuming, and in some cases
nearly impossible for a lone person or
for a small group of individuals to do
alone. An example is the recent Amazon Mechanical Turk system (developed in 2005; www.mturk.com/mturk/welcome), in which large computational tasks are split into smaller chunks
and divvied up among people willing
to complete small amounts of work for
some minimal amount of money.
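The splitting-and-divvying step can be sketched as below. This is a hypothetical illustration of the general pattern, not the real Mechanical Turk API; the function name, task format, and reward amount are all invented for the example.

```python
# Illustrative sketch of dividing a large job into small paid
# microtasks (the structure and pricing here are hypothetical,
# not the actual Mechanical Turk interface).

def make_microtasks(items, chunk_size, reward_per_chunk):
    """Divide a list of work items into fixed-size chunks, each
    posted as one small task with a small monetary reward."""
    tasks = []
    for i in range(0, len(items), chunk_size):
        tasks.append({
            "items": items[i:i + chunk_size],
            "reward_usd": reward_per_chunk,
        })
    return tasks

# 10 images split into chunks of 4 yields 3 tasks (sizes 4, 4, 2):
tasks = make_microtasks(
    [f"image_{n}.jpg" for n in range(10)],
    chunk_size=4,
    reward_per_chunk=0.05,
)
```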
Open Mind Initiative. The Open Mind Initiative [18, 19] is a worldwide research endeavor developing “intelligent” software by leveraging human
skills to train computers. It collects information from regular Internet users,
or Netizens, and feeds it to machine-learning algorithms. Volunteers participate by providing answers to questions computers cannot answer (such
as “What is in this image?”), aiming to
teach computer programs common-