characteristic desirable, as it will lower the barrier to successfully completing tasks as a new requester on the AMT.
We should note, of course, that
these results do not take into consideration the effect of various factors.
For example, an established requester
is expected to have its tasks completed faster than a new requester that
has not established connections with
the worker community. A task with a higher price will be picked up faster than an identical task with a lower price.
An image recognition task is typically
easier than a content generation task,
hence more workers will be available
to work on it and finish it faster. These
are interesting directions for future
research, as they can show the effect
of various factors when designing and
posting tasks. This can lead to a better understanding of the crowdsourcing process and a better prediction of
completion times when crowdsourcing various tasks.
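The factors listed above (requester track record, price, task type) could feed a simple predictive model of completion time. A minimal sketch, using a log-log least-squares fit; all data points below are invented for illustration, not measurements from this analysis:

```python
import math

# Hypothetical sketch: relate HIT price to completion time with a
# simple log-log least-squares fit. The data points are made up for
# illustration; they are not measurements from this analysis.

def fit_loglog(prices, hours):
    """Fit log(hours) = intercept + slope * log(price) by ordinary
    least squares and return (slope, intercept)."""
    xs = [math.log(p) for p in prices]
    ys = [math.log(h) for h in hours]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict_hours(price, slope, intercept):
    """Predicted completion time (hours) for a given reward (dollars)."""
    return math.exp(intercept + slope * math.log(price))

# Synthetic observations: higher-priced HITs tend to complete faster.
prices = [0.01, 0.05, 0.10, 0.25, 0.50]
hours = [120, 48, 24, 10, 6]
slope, intercept = fit_loglog(prices, hours)
```

A richer model would add requester reputation and task type as further covariates; the fit above only illustrates the price effect.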
Higher predictability means lower
risk for new participants. Lower risk
means higher participation and higher
satisfaction both for requesters and for workers.
Panagiotis G. Ipeirotis is an associate professor at the
Department of Information, Operations, and Management
Sciences at Leonard N. Stern School of Business of New
York University. His recent research interests focus on
crowdsourcing. He received his PhD in computer science
from Columbia University in 2004, with distinction, and
has received two Microsoft Live Labs Awards, two best-paper awards (IEEE ICDE 2005, ACM SIGMOD 2006), two runner-up awards for best paper (JCDL 2002, ACM KDD 2008), and a CAREER award from the National Science
Foundation. This work was supported by the National
Science Foundation under Grant No. IIS-0643846.
We also observe that the activity is
still concentrated around small tasks,
with 90 percent of the posted HITs giving a reward of $0.10 or less. A next step
in this analysis is to separate the price
distributions by type of task and identify the “usual” pricing points for different types of tasks. This can provide
guidance to new requesters who do not know whether they are pricing their tasks appropriately.
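As a sketch of what such a per-type price analysis might look like, the snippet below summarizes the reward distribution by task type; the task types and rewards are invented for illustration:

```python
from statistics import median

# Illustrative sketch (made-up data): summarize the reward distribution
# per task type to find the "usual" pricing points for each type.
hits = [
    ("image_recognition", 0.02), ("image_recognition", 0.05),
    ("image_recognition", 0.05), ("image_recognition", 0.03),
    ("content_generation", 0.50), ("content_generation", 1.00),
    ("transcription", 0.10), ("transcription", 0.08),
]

def price_summary(hits):
    """Group rewards by task type; report the median reward and the
    share of HITs paying $0.10 or less for each type."""
    by_type = {}
    for task_type, reward in hits:
        by_type.setdefault(task_type, []).append(reward)
    return {
        t: {"median": median(rs),
            "share_at_most_10c": sum(r <= 0.10 for r in rs) / len(rs)}
        for t, rs in by_type.items()
    }

summary = price_summary(hits)
```

Run over real marketplace data, a summary like this would give a new requester a concrete anchor price for each task category.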
Finally, we presented a first analysis
of the dynamics of the AMT marketplace. By analyzing the speed of posting and completion of the posted HITs,
we can see that Mechanical Turk is a cost-effective task-completion marketplace, as the estimated hourly wage is approximately $5.
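The hourly-wage estimate is simple arithmetic: reward per task divided by the working time per task. A back-of-the-envelope sketch (the $0.10 reward and 72-second working time are assumptions chosen to reproduce the $5/hour figure, not measured values):

```python
# Back-of-the-envelope sketch of the effective hourly wage estimate.
# The reward and working time below are illustrative assumptions.

def hourly_wage(reward_dollars, seconds_per_task):
    """Effective hourly wage for a worker doing identical tasks
    back to back."""
    tasks_per_hour = 3600 / seconds_per_task
    return reward_dollars * tasks_per_hour

# e.g. a $0.10 HIT that takes about 72 seconds of work:
wage = hourly_wage(0.10, 72)  # -> 5.0 dollars/hour
```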
Further analysis will allow us to gain better insight into “how things get done” on the AMT market, identifying
elements that can be improved and
lead to a better design for the marketplace. For example, by analyzing the
waiting time for the posted tasks, we
get significant evidence that workers
are limited by the current user interface and complete tasks by picking
the HITs available through one of the
existing sorting criteria. This limitation leads to a high degree of unpredictability in completion times, a significant shortcoming for requesters that want a high degree of reliability. A better search and discovery interface (or perhaps a better task-recommendation service, a specialty of Amazon.com) can lead to improvements in the efficiency and predictability of the marketplace.
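One way to reason about such waiting times is the classic M/M/1 queueing model cited in the references: with Poisson arrivals at rate lambda and exponential service at rate mu (lambda < mu), the expected time a task spends in the system is 1/(mu - lambda). A minimal sketch, with illustrative rates:

```python
# Sketch of the M/M/1 queueing model (see reference 7): Poisson
# arrivals at rate lam, exponential service at rate mu, lam < mu.

def mm1_time_in_system(lam, mu):
    """Expected time a task spends in the system (waiting + service)
    under the M/M/1 model: 1 / (mu - lam)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# e.g. 8 HIT groups posted per hour, 10 completed per hour on average:
wait = mm1_time_in_system(8, 10)  # -> 0.5 hours in the system
```

The formula also shows why waiting times blow up as the arrival rate approaches the completion rate, one plausible source of the unpredictability noted above.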
Further research is also needed to better predict how changes in the design and parameters of a task affect quality and completion speed. Ideally, we should have a framework that
automatically optimizes all the aspects
of task design. Database systems hide
all the underlying complexity of data
management, using query optimizers to pick the appropriate execution
plans. Google Predict hides the complexity of predictive modeling by offering an auto-optimizing framework for
classification. Crowdsourcing can benefit significantly from the development of similar frameworks that provide analogous abstractions and automatic task optimizations.
1. Mechanical Turk Monitor, http://www.mturk-tracker.
2. Barabási, A.-L. 2005. The origin of bursts and heavy
tails in human dynamics. Nature, 435:207-211.
3. Cobham, A. 1954. Priority assignment in waiting line problems. J. Oper. Res. Soc. Am. 2, 70-76.
4. Chilton, L. B., Horton, J. J., Miller, R. C., and Azenkot, S.
2010. Task search in a human computation market. In
Proceedings of the ACM SIGKDD Workshop on Human
Computation (Washington, DC, July 25, 2010). HCOMP '10. ACM, New York, NY, 1-9.
5. Ipeirotis, P. 2010. Demographics of Mechanical Turk.
working paper CeDER-10-01, New York University,
Stern School of Business. Available at
6. Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and
Tomlinson, B. 2010. Who are the crowd workers?:
shifting demographics in Mechanical Turk. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, Extended Abstracts (Atlanta, Georgia, USA, April 10-15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872.
7. M/M/1 model, http://en.wikipedia.org/wiki/M/M/1_
© 2010 ACM 1528-4972/10/1200 $10.00
Our analysis indicates that the AMT
is a heavy-tailed market, in terms of
requester activity, with the activity of
the requesters following a log-normal
distribution. The top 0.1 percent of requesters account for 30 percent of the dollar activity, and 1 percent of the requesters post more than 50 percent of the dollar-weighted tasks.
A similar activity pattern also appears on the side of the workers [6]. This
can be interpreted both positively and
negatively. The negative aspect is that
the adoption of crowdsourcing solutions is still minimal, as only a small
number of participants actively use
crowdsourcing for large-scale tasks.
On the other hand, the long tail of requesters indicates significant interest in such solutions. By observing the practices of the successful requesters, we can learn more about what makes crowdsourcing successful, and increase the demand from the smaller requesters.