respond to the behavior of people, and
how this human activity is captured,
processed, and managed raises significant ethical and privacy concerns. Often
at the core of these concerns is the manner in which people are separated from
data collected about them. Specifically,
in current infrastructures people are often unaware of the digital information
they bleed, how this information is processed, and the consequential effects
of the analytical inferences drawn from
this data. Consequently, people are at an
ethical disadvantage in managing their
relationship with the infrastructure as
they are largely unaware of the digital
consequences of their actions and have
no effective means of control or withdrawal. A HAC infrastructure will need
to be accountable to people, allowing
them to develop a richer and more bidirectional relationship with their data.
Developing an accountable infrastructure also responds to the call from
privacy researchers such as Nissenbaum24 to understand and support the
relationship between users and their
data. Indeed, her Contextual Integrity
theory frames privacy as a dialectic process between different social agents.
Others have built upon this point, suggesting that a bidirectional relationship
needs to be embedded into the design
of services.35 This suggests users
should have a significant element of
awareness and control in the disclosure
of their data to others25 and in the use of
this data by software agents. Establishing such bidirectional relationships
also requires us to reframe our existing
approaches to the governance and management of human data.
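To make the idea of an accountable, bidirectional infrastructure concrete, the sketch below keeps a PROV-style provenance trail for each use of a person's data, so the person can later audit which agents drew which inferences from it. This is an illustrative sketch only; the class and identifier names (`ProvenanceLog`, `UsageRecord`, `meter/alice`) are hypothetical and not part of any existing HAC system or the W3C PROV vocabulary.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical PROV-inspired record: which agent used which data item,
# in which activity, producing which derived entity.
@dataclass
class UsageRecord:
    agent: str     # who processed the data
    activity: str  # what was done (e.g., "infer-occupancy")
    source: str    # identifier of the person's data item
    derived: str   # identifier of the output entity
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ProvenanceLog:
    """Append-only log a person can query to audit uses of their data."""
    def __init__(self):
        self._records = []

    def record(self, agent, activity, source, derived):
        self._records.append(UsageRecord(agent, activity, source, derived))

    def uses_of(self, source):
        # The bidirectional view: trace every inference drawn from one item.
        return [r for r in self._records if r.source == source]

log = ProvenanceLog()
log.record("energy-agent", "infer-occupancy", "meter/alice", "occupancy/alice")
log.record("pricing-agent", "set-tariff", "occupancy/alice", "tariff/alice")
print([r.activity for r in log.uses_of("meter/alice")])  # → ['infer-occupancy']
```

Even this toy version shows the design point: accountability requires recording uses of data as first-class objects that the data subject, not only the system operator, can query.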
Perhaps the most critical issues in
this regard relate to seeking permission
for the use of personal data within information systems. Current approaches adopt a transactional model where
users are asked at a single moment to
agree to an often quite complex set of
terms and conditions. This transactional
model is already being questioned in
the world of bio-ethics, with Manson
and O’Neill18 arguing for the need to
consider consent as much broader than
its current contractual conception. We
suggest that HACs will similarly need to
revisit the design principles of consent
and redress the balance of agency toward the users.
1. Abowd, G.D., Ebling, M., Hung, G., Lei, H. and Gellersen, H.W. Context-aware computing. IEEE Pervasive Computing 1, 3 (2002), 22–23.
2. Ariely, D., Bracha, A. and Meier, S. Doing good or doing well? Image motivation and monetary incentives in behaving prosocially. American Economic Review 99, 1 (2009).
3. Buneman, P., Cheney, J. and Vansummeren, S. On the expressiveness of implicit provenance in query and update languages. ACM Trans. Database Systems 33, 4 (2008), 1–47.
4. Dash, R.K., Parkes, D.C. and Jennings, N.R. Computational mechanism design: A call to arms. IEEE Intelligent Systems 18, 6 (2003), 40–47.
5. Ebden, M., Huynh, T.D., Moreau, L., Ramchurn, S.D. and Roberts, S.J. Network analysis on provenance graphs from a crowdsourcing application. In Proc. 4th Int. Conf. on Provenance and Annotation of Data and Processes (Santa Barbara, CA, 2012), 168–182.
6. Fong, T., Nourbakhsh, I. and Dautenhahn, K. A survey of socially interactive robots. Robotics and Autonomous Systems 42 (2003), 143–166.
7. Gil, Y., Deelman, E., Ellisman, M., Fahringer, T., Fox, G., Gannon, D., Goble, C., Livny, M., Moreau, L. and Myers, J. Examining the challenges of scientific workflows. IEEE Computer 40, 12 (2007), 26–34.
8. Horvitz, E. Principles of mixed-initiative user interfaces. In Proc. SIGCHI Conf. on Human Factors in Computing Systems (New York, NY, 1999), 159–166.
9. Jennings, N.R. An agent-based approach for building complex software systems. Commun. ACM 44, 4 (Apr. 2001), 35–41.
10. Kahneman, D. Maps of bounded rationality: Psychology for behavioral economics. American Economic Review 93, 5 (2003), 1449–1475.
11. Kamar, E., Gal, Y. and Grosz, B. Modeling information exchange opportunities for effective human-computer teamwork. Artificial Intelligence Journal 195, 1 (2013).
14. Kifor, T. et al. Provenance in agent-mediated healthcare systems. IEEE Intelligent Systems 21, 6.
15. Krause, A., Horvitz, E., Kansal, A. and Zhao, F. Toward community sensing. In Proc. Int. Conf. on Information Processing in Sensor Networks (St. Louis, MO, 2008).
16. Luger, E. and Rodden, T. An informed view on consent for UbiComp. In Proc. Int. Joint Conf. on Pervasive and Ubiquitous Computing (2013), 529–538.
17. Maes, P. Agents that reduce work and information overload. Commun. ACM 37, 7 (1994), 31–40.
18. Manson, N.C. and O'Neill, O. Rethinking Informed Consent in Bioethics. Cambridge University Press, 2007.
19. Michalak, T., Aadithya, K.V., Szczepanski, P., Ravindran, B. and Jennings, N.R. Efficient computation of the Shapley value for game-theoretic network centrality. J. AI Research 46 (2013), 607–650.
20. Moran, S., Pantidi, N., Bachour, K., Fischer, J.E., Flintham, M. and Rodden, T. Team reactions to voiced agent instructions in a pervasive game. In Proc. Int. Conf. on Intelligent User Interfaces (Santa Monica, CA, 2013), 371–382.
21. Moreau, L. The foundations for provenance on the Web. Foundations and Trends in Web Science 2, 2–3 (2010).
22. Moreau, L. et al. PROV-DM: The PROV data model. W3C Recommendation REC-prov-dm-20130430, World Wide Web Consortium, 2013.
23. Naroditskiy, V., Rahwan, I., Cebrian, M. and Jennings, N.R. Verification in referral-based crowdsourcing. PLoS ONE 7, 10 (2012), e45924.
24. Nissenbaum, H. Privacy as contextual integrity. Washington Law Review 79, 1 (2004), 119–158.
25. Palen, L. and Dourish, P. Unpacking privacy for a networked world. In Proc. SIGCHI Conf. on Human Factors in Computing Systems (2003).
26. Paxton, M. and Benford, S. Experiences of participatory sensing in the wild. In Proc. 11th Int. Conf. on Ubiquitous Computing (Orlando, FL, 2009), 265–274.
27. Rahwan, T., Ramchurn, S.D., Jennings, N.R. and Giovannucci, A. An anytime algorithm for optimal coalition structure generation. J. Artificial Intelligence Research 34 (2009), 521–567.
28. Rahwan, I. et al. Global manhunt pushes the limits of social mobilization. IEEE Computer 46, 4 (2013).
29. Rahwan, T., Nguyen, T.-D., Michalak, T., Polukarov, M., Croitoru, M. and Jennings, N.R. Coalitional games via network flows. In Proc. 23rd Int. Joint Conf. on Artificial Intelligence (Beijing, China, 2013).
30. Reddy, S., Parker, A., Hyman, J., Burke, J., Estrin, D. and Hansen, M. Image browsing, processing, and clustering for participatory sensing. In Proc. 4th Workshop on Embedded Networked Sensors (Cork, Ireland, 2007), 13–17.
31. Robu, V., Gerding, E.H., Stein, S., Parkes, D.C., Rogers, A. and Jennings, N.R. An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging. J. Artificial Intelligence Research.
32. Rogers, A., Farinelli, A., Stranders, R. and Jennings, N.R. Bounded approximate decentralised coordination via the max-sum algorithm. Artificial Intelligence 175, 2 (2011), 730–759.
33. Scekic, O., Truong, H.-L. and Dustdar, S. Incentives and rewarding in social computing. Commun. ACM 56, 6 (2013).
34. Simpson, E., Roberts, S.J., Smith, A. and Lintott, C. Bayesian combination of multiple, imperfect classifiers. In Proc. 25th Conf. on Neural Information Processing Systems (Granada, Spain, 2011).
35. Steeves, V. Reclaiming the social value of privacy. In Lessons from the Identity Trail. I. Kerr, V. Steeves and C. Lucock, eds. Oxford University Press, 2009.
36. Stone, P., Kaminka, G.A., Kraus, S. and Rosenschein, J.S. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In Proc. 24th Conf. on Artificial Intelligence (2010).
37. Tambe, M. et al. Conflicts in teamwork: Hybrids to the rescue. In Proc. 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems (Utrecht, The Netherlands, 2005), 3–10.
38. Thaler, R.H. and Sunstein, C.R. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press, 2008.
39. Tran-Thanh, L., Venanzi, M., Rogers, A. and Jennings, N.R. Efficient budget allocation with accuracy guarantees for crowdsourcing classification tasks. In Proc. 12th Int. Conf. on Autonomous Agents and Multi-Agent Systems (St. Paul, MN, 2013), 901–908.
40. von Ahn, L. et al. reCAPTCHA: Human-based character recognition via Web security measures. Science 321, 5895 (2008), 1465–1468.
41. von Ahn, L. and Dabbish, L. Labeling images with a computer game. In Proc. SIGCHI Conf. on Human Factors in Computing Systems (Vienna, Austria, 2004).
42. Wooldridge, M.J. and Jennings, N.R. Intelligent agents: Theory and practice. Knowledge Engineering Review 10, 2 (1995), 115–152.

N.R. Jennings (firstname.lastname@example.org) is the Regius Professor of Computer Science in Electronics and Computer Science at the University of Southampton, U.K., and a chief scientific adviser to the U.K. government.

L. Moreau (email@example.com) is a professor of computer science, head of the Web and Internet Science group (WAIS), and deputy head (Research and Enterprise) of Electronics and Computer Science at the University of Southampton, U.K.

D. Nicholson (firstname.lastname@example.org) is a senior industrial scientist with BAE Systems and a knowledge transfer officer for the EPSRC-funded Human Agent Collectives project.

S. Ramchurn (email@example.com) is a lecturer in the Agents, Interaction, and Complexity Group (AIC), Electronics and Computer Science at the University of Southampton, U.K.

S. Roberts (firstname.lastname@example.org) leads the Machine Learning Research Group at Oxford, U.K. He is also a Professorial Fellow of Somerville College and a faculty member of the Oxford-Man Institute.

T. Rodden (email@example.com) is a professor of computing at the University of Nottingham and co-director of the Mixed Reality Laboratory.

A. Rogers (firstname.lastname@example.org) is a professor of computer science in the Agents, Interaction and Complexity Research Group in the School of Electronics and Computer Science at the University of Southampton, U.K.

Copyright held by owners/authors. Publication rights licensed to ACM. $15.00