The projects listed above are tentative
steps toward addressing the problems
facing Turkers and developing a richer
understanding of the structure and
dynamics of human computation markets. Many questions remain, including: How does database, interface, and
interaction design influence individual
outcomes and market equilibria?
For example, how would the worker
experience on Mechanical Turk be different if workers knew requesters’ rejection rates, or the effective wages of
HITs? This has been explored in online
auctions, especially eBay, but only tentatively in human computation (e.g.,
[6], which examines task search).
Another question is: What are the
economics of fraudulent tasks
(scamming and spamming)?
That is, how do scammers and
spammers make money on Mechanical Turk, and how much money do they
make? Work in this thread might draw
on existing research on the economics
of internet fraud (e.g., [7]) and could
yield insights to help make human
computation markets less hospitable to such actors.
A third question is: What decision
logics are used by buyers and sellers in
human computation markets?
We might expect workers to minimize time spent securing payment on
each task, even if this means providing work they know is of low quality.
Some workers do behave this way. We
have found, however, that workers seem
more concerned with what is “fair” and
“reasonable” than with maximizing
personal earnings at requester expense.
The selfish optimizers that populate
the models of economic decision-making may not well describe these
“honest” workers, although as noted in [8]
they can perhaps be extended to do so.
So how do differently motivated actors
in human computation markets shape
market outcomes, and how can this
knowledge shape design?
Finally, we can ask: What’s fair in human computation?
Economists Akerlof and Shiller, in
their 2009 book Animal Spirits: How Human Psychology Drives the Economy, and
Why It Matters for Global Capitalism, argue that “considerations of fairness are
a major motivator in many economic
decisions” that has been overlooked in
neoclassical explanations that assume
people act rationally: “while...there
is a considerable literature on what is
fair or unfair, there is also a tradition
that such considerations should take
second place in the explanation of economic events” (pp. 20, 25).
At public events we have heard Mechanical Turk requesters and administrators say tasks should be priced “
fairly,” but fairness is difficult to define and
thus to operationalize. The concept of
a reservation wage—the lowest wage
a worker will take for a given task—as
discussed in [9] is useful but not definitive: the global reach of human computation platforms complicates the social and cultural interpretation of the reservation wage.
The question of fairness links interface design to market outcomes. If
considerations of fairness are key to
explaining economic decision making,
but fairness is constructed and interpreted through social interaction, then
to understand economic outcomes in
human computation systems we need
an understanding of these systems
as social environments. Can systems
with sparse social cues motivate fair
interactions? Human computation
and Computer Supported Cooperative
Work may have much to learn from one
another on these topics.
This review of workers’ problems
should not be mistaken as an argument
that workers would be better off with-
out Mechanical Turk. An exchange in
late 2009 on the Turker Nation forum
makes the point concisely:
With Mechanical Turk, Amazon has
created work in a time of economic un-
certainty for many. Our aim here is not
to criticize the endeavor as a whole but
to foreground complexities and articu-
late desiderata that have thus far been
overlooked. Basic economic analysis
tells us that if two parties transact they
do so because it makes them both bet-
ter off. But it tells us nothing about the
conditions of the transaction. How
did the parties come to a situation in
which such a transaction was an im-
provement? When transactions are
conditioned by the intentional design
of systems, we have the opportunity to
examine those conditions.
M. Six Silberman is a field interpreter at the Bureau of
Economic Interpretation. He studies the relation between
environmental sustainability and human-computer
interaction. His website is wtf.tw.
Lilly Irani is a PhD candidate in the Informatics department
at University of California-Irvine. She works at the
intersection of anthropology, science and technology
studies, and computer supported cooperative work.
Joel Ross is a PhD candidate in the Informatics department
at University of California-Irvine. He is currently designing
games to encourage environmentally sustainable behavior.
1. Silberman, M. S., et al. Sellers’ problems in human
computation markets. In Proceedings of HCOMP
2. Ross, J., et al. Who are the crowdworkers? Shifting
demographics in Mechanical Turk. In Proceedings of
3. Ipeirotis, P. Mechanical Turk: the demographics.
4. Kochhar, S., et al. The anatomy of a large-scale human
computation engine. In Proceedings of HCOMP 2010.
5. Felstiner, A. Working the crowd: employment and labor
law in the crowdsourcing industry. http://papers.ssrn.
6. Chilton, L., et al. Task search in a human computation
market. In Proceedings of HCOMP 2010.
7. Franklin, J., et al. An inquiry into the nature and
causes of the wealth of internet miscreants. In
Proceedings of CCS ’07, 375–388.
8. Jain, S. and D. Parkes. The role of game theory in human
computation systems. In Proceedings of HCOMP 2009.
9. Horton, J. J. and L. Chilton. The labor economics of paid
crowdsourcing. arXiv:1001.0627v1 [cs.HC], 2010.