uations, it demonstrates its ability to
use its situational awareness and fast
reaction time to find “third ways” out
of Near Miss scenarios. Based on
post-hoc crisis analyses, whether the
outcome was success or failure, it may
be able to learn to identify upstream
decision points that will allow it to
avoid such crises in the first place.
Technological advances, particularly in the car’s ability to predict the
intentions and behavior of other
agents, and in the ability to anticipate
potential decision points and places
that could conceal a pedestrian, will
certainly be important to reaching this
level of behavior. We can be reasonably
optimistic about this kind of cognitive
and perceptual progress in machine
learning and artificial intelligence.
Since 94% of auto crashes are associated with driver error,33 there will be
plentiful opportunities to demonstrate
trustworthiness in ordinary driving
and solvable Near Miss crises. Both society and the purchasers of self-driving
cars will gain substantially greater personal and collective safety in return for
slightly more conservative driving.
For self-driving cars sharing the
same ethical knowledge base, the behavior of one car provides evidence
about the trustworthiness of all others,
leading to rapid convergence.
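One way to see why a shared knowledge base speeds convergence is a pooled-evidence model. The sketch below is a hypothetical Beta-Bernoulli trust estimate, my own illustration rather than anything proposed here: when every car's observed behavior updates a single shared estimate, uncertainty shrinks with fleet-wide experience instead of per-car experience.

```python
# Hypothetical illustration (not from the article): cars sharing one
# ethical knowledge base pool their evidence into a single trust estimate.

def beta_trust(successes, failures):
    """Mean and variance of a Beta(1 + s, 1 + f) trust estimate."""
    a, b = 1 + successes, 1 + failures
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Observing a single car: 100 trustworthy maneuvers.
solo_mean, solo_var = beta_trust(100, 0)

# Pooled evidence: 1,000 cars with the same knowledge base, 100 maneuvers each.
fleet_mean, fleet_var = beta_trust(100 * 1000, 0)

# Same per-car experience, but the pooled estimate is far more certain.
assert fleet_var < solo_var
```

The point of the toy model is only that evidence about any one car counts as evidence about all of them, so the variance of the shared estimate falls with the fleet's total experience.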
Trust is essential for the successful
functioning of society. Trust is necessary for cooperation, which produces
the resources society needs. Morality,
ethics, and other social norms encourage individuals to act in trustworthy
ways, avoiding selfish decisions that
exploit vulnerability, violate trust, and
discourage cooperation. As we contemplate the design of robots (and other
AIs) that perceive the world and select
actions to pursue their goals in that
world, we must design them to follow
the social norms of our society. Doing
this does not require them to be true
moral agents, capable of genuinely taking responsibility for their actions.
Social norms vary by society, so robot behavior will vary by society as
well, but this is outside the scope of this article.
The major theories of philosophical ethics provide clues toward the design of such AI agents, but a successful design must combine aspects of all theories. The physical and social environment is immensely complex. Even so, some moral decisions must be made quickly. But there must also be a slower deliberative evaluation process, to confirm or revise the rapidly responding rules and constraints. At longer time scales, there must be mechanisms for learning new concepts for virtues and vices, mediating between perceptions, goals, plans, and actions. The technical research challenges are how to accomplish all of this.
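As a rough illustration of these time scales (fast rules, slower deliberative revision of those rules), here is a minimal toy sketch; the class, rule names, and review logic are my own assumptions, not a design from the article:

```python
# Toy sketch (illustrative only): a fast rule-based layer that decides
# immediately, plus a slow deliberative layer that revises the rules.

class NormativeAgent:
    def __init__(self):
        # Fast layer: hard constraints checked on every action.
        self.constraints = {"exceed_speed_limit": False}
        # Episodes logged for later deliberative review.
        self.episodes = []

    def act(self, proposed_action):
        """Fast path: permit the action unless a constraint forbids it."""
        allowed = self.constraints.get(proposed_action, True)
        self.episodes.append((proposed_action, allowed))
        return allowed

    def deliberate(self):
        """Slow path: review past episodes and revise the rule set.
        Any permitted action judged harmful becomes a new constraint."""
        for action, allowed in self.episodes:
            if allowed and self.judged_harmful(action):
                self.constraints[action] = False
        self.episodes.clear()

    def judged_harmful(self, action):
        # Placeholder for the expensive deliberative evaluation.
        return action == "block_crosswalk"

agent = NormativeAgent()
assert agent.act("block_crosswalk")      # fast rules initially permit it
agent.deliberate()                       # slow review adds a constraint
assert not agent.act("block_crosswalk")  # revised rules now forbid it
```

The third, longest time scale, learning genuinely new concepts for virtues and vices, is the open research problem and is deliberately absent from this sketch.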
Self-driving cars may well be the first widespread examples of trustworthy robots, designed to earn trust by demonstrating how well they follow social norms. The design focus for self-driving cars should not be on the Deadly Dilemma, but on how a robot's everyday behavior can demonstrate its trustworthiness.
Acknowledgment. This work took place in the Intelligent Robotics Lab in the Computer Science and Engineering Division of the University of Michigan. Research in the Intelligent Robotics Lab is supported in part by grants from the National Science Foundation (IIS-1111494 and IIS-1421168). Many thanks to the anonymous reviewers.
References
1. Abney, K. Robotics, ethical theory, and metaethics: A guide for the perplexed. Robot Ethics: The Ethical and Social Implications of Robotics. P. Lin, K. Abney, and G.A. Bekey, Eds. MIT Press, Cambridge, MA, 2012.
2. Anderson, M., Anderson, S.L. and Armen, C. An
approach to computing ethics. IEEE Intelligent
Systems 21, 4 (2006), 56–63.
3. Arkin, R.C. Governing Lethal Behavior in Autonomous
Robots. CRC Press, 2009.
4. Asimov, I. I, Robot. Grosset & Dunlap, 1952.
5. Axelrod, R. The Evolution of Cooperation.
Basic Books, 1984.
6. Bacharach, M., Guerra, G. and Zizzo, D.J. The self-fulfilling property of trust: An experimental study.
Theory and Decision 63, 4 (2007), 349–388.
7. Bostrom, N. Superintelligence: Paths, Dangers,
Strategies. Oxford University Press, 2014.
8. Bringsjord, S., Arkoudas, K. and Bello, P. Toward a
general logicist methodology for engineering ethically
correct robots. IEEE Intelligent Systems 21, 4 (2006),
9. Brynjolfsson, E. and McAfee, A. The Second Machine
Age. W. W. Norton & Co., 2014.
10. Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei,
N. and Walsh, T. Ethical considerations in artificial
intelligence courses. AI Magazine, Summer 2017;
11. Castelfranchi, C. and Falcone, R. Principles of trust
for MAS: Cognitive anatomy, social importance, and
quantification. In Proceedings of the Int. Conf. Multi
Agent Systems, 1998, 72–79.
12. Dandekar, P., Goel, A., Wellman, M. P. and Wiedenbeck,
B. Strategic formation of credit networks. ACM Trans.
Internet Technology 15, 1 (2015).
13. Driver, J. The history of utilitarianism. The Stanford
Encyclopedia of Philosophy. E. N. Zalta, Ed., 2014.
14. Eno, W. P. The story of highway traffic control,
1899–1939. The Eno Foundation for Highway Traffic
Control, Inc. (1939); http://hdl.handle.net/2027/
15. Hardin, G. The tragedy of the commons. Science 162 (1968), 1243–1248.
16. Hursthouse, R. Virtue ethics. The Stanford
Encyclopedia of Philosophy. E.N. Zalta, Ed., 2013.
17. Johnson, N.D. and Mislin, A. A. Trust games: A meta-analysis. J. Economic Psychology 32 (2011), 865–889.
18. Kolodner, J. Case-Based Reasoning. Morgan Kaufmann, 1993.
19. Le Guin, U. The ones who walk away from Omelas.
New Dimensions 3. R. Silverberg, Ed. Nelson Doubleday, 1973.
20. Leyton-Brown, K. and Shoham, Y. Essentials of Game
Theory. Morgan & Claypool, 2008.
21. Lin, P. The ethics of autonomous cars. The Atlantic
Monthly, (Oct. 8, 2013).
22. López, B. Case-Based Reasoning: A Concise
Introduction. Morgan & Claypool, 2013.
23. Malle, B.F., Scheutz, M., Arnold, T., Voiklis, J., and
Cusimano, C. Sacrifice one for the good of many?
People apply different moral norms to human and
robot agents. In Proceedings of ACM/IEEE Int. Conf.
Human Robot Interaction (HRI), 2015.
24. Pauker, S.G. and Kassirer, J.P. Decision analysis. New
England J. Medicine 316 (1987), 250–258.
25. Pinker, S. The Better Angels of Our Nature: Why
Violence Has Declined. Viking Adult, 2011.
26. Rand, D.G. and Nowak, M.A. Human cooperation.
Trends in Cognitive Science 17 (2013), 413–425.
27. Robinette, P., Allen, R., Li, W., Howard, A.M., and
Wagner, A.R. Overtrust of robots in emergency
evacuation scenarios. In Proceedings of ACM/IEEE
Int. Conf. Human Robot Interaction (2016), 101–108.
28. Rousseau, D.M., Sitkin, S.B., Burt, R.S., and Camerer,
C. Not so different after all: A cross-discipline view of
trust. Academy of Management Review 23, 3 (1998), 393–404.
29. Russell, S. and Norvig, P. Artificial Intelligence: A
Modern Approach. Prentice Hall, 3rd edition, 2010.
30. Sandel, M.J. Justice: What’s the Right Thing To Do?
Farrar, Strauss and Giroux, 2009.
31. Scheutz, M. and Arnold, T. Feats without heroes:
Norms, means, and ideal robot action. Frontiers in
Robotics and AI 3, 32 (June 16, 2016), DOI: 10.3389/
32. Singer, P. The Expanding Circle: Ethics, Evolution, and
Moral Progress. Princeton University Press, 1981.
33. Singh, S. Critical reasons for crashes investigated in the
National Motor Vehicle Crash Causation Survey. Technical
Report DOT HS 812 115, National Highway Traffic Safety
Administration, Washington D.C., Feb. 2015.
34. Sinnott-Armstrong, W. Consequentialism.
The Stanford Encyclopedia of Philosophy. E.N. Zalta, Ed.
35. Solaiman, S.M. Legal personality of robots,
corporations, idols and chimpanzees: A quest for
legitimacy. Artificial Intelligence and Law 25, 2 (2017),
155–179; doi: 10.1007/s10506-016-9192-3.
36. Toulmin, S. The Uses of Argument. Cambridge
University Press, 1958.
37. Vallor, S. Technology and the Virtues: A Philosophical
Guide to a Future Worth Wanting. Oxford University Press, 2016.
38. Wallach, W. and Allen, C. Moral Machines: Teaching
Robots Right from Wrong. Oxford University Press, 2009.
39. Wright, J. R. and Leyton-Brown, K. Level-0 meta-models
for predicting human behavior in games. In ACM
Conference on Economics and Computation, 2014.
40. Yildiz, M. Repeated games. 14.12 Economic Applications of Game Theory, Fall 2012. MIT OpenCourseWare.
Benjamin Kuipers ( email@example.com) is a professor of
computer science and engineering at the University of
Michigan, Ann Arbor, USA.
Copyright held by author.