modeling choices, as well as mathematical results suggesting the most important agent characteristics to understand via experimental research. It will be important to understand and incorporate relevant research from psychology, economics, sociology, and other fields. For example, behavioral economics and psychology provide insight into how humans respond to incentives.
Generalization. Most of the existing mathematical work on social computing focuses on a single application. What does the research on prediction market design tell us about recommendation systems or citizen science? Models will have the most potential for impact if they incorporate reusable components, allowing results to generalize to many systems. (This is one motivation for the Crowdsourcing Compiler discussed earlier.)
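To make the idea of a reusable component concrete, here is a minimal sketch of our own (not from the article; all names such as Mechanism, BrierScoringRule, and BonusOnAgreement are hypothetical): a single payment-rule interface that both a prediction-market scoring rule and a toy crowdsourcing bonus scheme can implement, so analysis or simulation code written once against the interface applies to both systems.

```python
# A sketch of a "reusable component": one Mechanism interface shared by a
# prediction-market scoring rule and a crowdsourcing payment scheme, so that
# code written against the interface generalizes across both applications.

from abc import ABC, abstractmethod


class Mechanism(ABC):
    """Maps an agent's report and a realized binary outcome to a payment."""

    @abstractmethod
    def payment(self, report: float, outcome: int) -> float:
        ...


class BrierScoringRule(Mechanism):
    """Strictly proper scoring rule: reporting one's true belief about the
    event maximizes expected payment."""

    def payment(self, report: float, outcome: int) -> float:
        # Quadratic (Brier) score, shifted so payments lie in [0, 1].
        return 1.0 - (outcome - report) ** 2


class BonusOnAgreement(Mechanism):
    """Toy crowdsourcing rule: a fixed bonus if the worker's thresholded
    report matches a gold-standard label."""

    def __init__(self, bonus: float = 0.5):
        self.bonus = bonus

    def payment(self, report: float, outcome: int) -> float:
        return self.bonus if round(report) == outcome else 0.0


def expected_payment(m: Mechanism, belief: float, report: float) -> float:
    """Written once against the interface, this works for any mechanism:
    expected payment when the event occurs with probability `belief`."""
    return belief * m.payment(report, 1) + (1 - belief) * m.payment(report, 0)


if __name__ == "__main__":
    for m in (BrierScoringRule(), BonusOnAgreement()):
        # Compare a truthful report (0.7) with an exaggerated one (0.9).
        print(type(m).__name__,
              expected_payment(m, 0.7, 0.7),
              expected_payment(m, 0.7, 0.9))
```

Running the sketch shows the Brier rule rewards the truthful report (expected payment 0.79 versus 0.75 for the exaggerated one) while the agreement bonus is indifferent between the two; the point is only that the comparison code itself never changes across mechanisms.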
A related issue is the lack of consensus on what the “core social computing problems” are, or even whether such a set of core problems exists. Mathematical theories are typically developed with one or more such core problems in mind.
Such problems should capture challenges that span a wide range of applications and be robust to small changes in those applications, to ensure they capture something “real.” Identifying such problems requires a dialog between practitioners building real systems and theoreticians, to pinpoint the most pressing questions for mathematical study.
Transparency, interpretability, and ethical implications. One final challenge is the potential need to make social computing algorithms and models transparent and interpretable to the users of social computing systems. Users are becoming increasingly sophisticated and are aware that the algorithms employed online affect both their day-to-day experience and their privacy. When faced with the output of an algorithm, many will ask where that output came from and why. It is already difficult to explain to users why complex probabilistic algorithms and models produce the results they do, and this will only become harder as algorithms incorporate human behavior to a greater extent.
The issue of algorithmic transparency is often tied to ethical concerns such as discrimination and fairness. Examining and avoiding the unintended consequences of opaque decisions made by algorithms is a topic that has been gaining interest in the machine learning and big data communities.j Such concerns will undoubtedly need to be addressed in the context of social computing as well.
Acknowledgments. We thank the participants of the Visioning Workshop on Theoretical Foundations for Social Computing for their contributions. We also thank Ashish Goel, Vince Conitzer, David McDonald, David Parkes, and Ariel Procaccia for their feedback.
j For example, see the series of recent workshops on Fairness, Accountability, and Transparency in Machine Learning (http://www.fatml.org/).

Yiling Chen (yiling@seas.harvard.edu) is Gordon McKay Professor of Computer Science at Harvard University, Cambridge, MA.

Arpita Ghosh (arpitaghosh@cornell.edu) is an associate professor of information science at Cornell University, Ithaca, NY.

Michael Kearns (mkearns@cis.upenn.edu) is a professor and National Center Chair of Computer and Information Science at the University of Pennsylvania, Philadelphia, PA.

Tim Roughgarden (tim@cs.stanford.edu) is an associate professor of computer science at Stanford University, Stanford, CA.

Jennifer Wortman Vaughan (jenn@microsoft.com) is a senior researcher at Microsoft Research, New York, NY.

Copyright held by owners/authors. Publication rights licensed to ACM.