could confront the problem of overtrust, resulting in robots that are more transparent—allowing people to more fully understand and learn how the technology will behave. Mental modeling research may also provide insight into techniques that facilitate better communication between robots and humans, and thereby allow each party to more accurately calibrate the risks associated with the interaction. For example, an alert could inform human drivers of autonomous vehicles that there is increased uncertainty emerging from an upcoming traffic condition, such as a left-hand turn, and suggest they deactivate the autopilot mode. This is a design pathway that some car companies are already exploring.5
1. Booth, S. et al. Piggybacking robots: Human-robot overtrust in university dormitory security. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17), ACM, New York, NY, USA, 2017, 426–434.
2. Borenstein, J., Howard, A., and Wagner, A.R. Pediatric robotics and ethics: The robot is ready to see you now, but should it be trusted? In Robot Ethics 2.0, P. Lin, K. Abney, and G. Bekey, Eds., Oxford University Press, 2017.
3. Boudette, N.E. Tesla’s self-driving system cleared in
deadly crash. The New York Times (Jan. 2017).
4. Gunning, D. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency; https://
5. Lee, T.B. Car companies’ vision of a gradual transition
to self-driving cars has a big problem. Vox (July 5,
6. Mosier, K.L., Palmer, E.A., and Degani, A. Electronic
checklists: Implications for decision making. In
Proceedings of the Human Factors Society 36th Annual
Meeting, Human Factors Society, Santa Monica, CA,
7. Parasuraman, R. and Riley, V. Humans and automation:
Use, misuse, disuse, abuse. Human Factors 39, 2 (Feb.
8. Plummer, L. Tesla could stop you using autopilot in its
cars—But only if you take your hands off the wheel.
Mirror (Aug. 30, 2016); https://bit.ly/2uJb3D6
9. Robinette, P., Howard, A., and Wagner, A.R. Conceptualizing overtrust in robots: Why do people trust a robot that previously failed? In Autonomy and Artificial Intelligence, W. Lawless, R. Mittu, D. Sofge, and S. Russell, Eds., Springer, 2017.
10. Schneider, D. Robin Murphy: Roboticist to the rescue.
IEEE Spectrum (Feb. 1, 2009); https://bit.ly/2L74Z2a
11. SoftBank Robotics. Who is Pepper?; https://bit.
Alan R. Wagner (firstname.lastname@example.org) is an assistant professor in the Department of Aerospace Engineering and a research associate in the Rock Ethics Institute at The Pennsylvania State University, University Park, PA, USA.

Jason Borenstein (email@example.com) is the Director of Graduate Research Ethics Programs and Associate Director of the Center for Ethics and Technology within the School of Public Policy and Office of Graduate Studies at the Georgia Institute of Technology, Atlanta, GA, USA.

Ayanna Howard (firstname.lastname@example.org) is Professor and Linda J. and Mark C. Smith Endowed Chair in Bioengineering in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, Atlanta, GA, USA.
Copyright held by authors.
qualitative nature of those accidents.

Imagine a scenario in which an autonomous car fails to perceive obstacles in its path due to a sensor failure. Such a failure might cause the system to run into, over, and through items until the accumulated damage to the system is so great that the car can no longer move. Consider the magnitude of harm if the case involved an autonomous commercial truck driving into and through a shopping mall.

Overtrust influences people to tolerate risks they would not normally accept and may exacerbate problematic behavior such as inattentiveness while driving. The availability of an autopilot may incline people to eat, drink, or watch a movie while sitting behind the wheel, even if the system is incapable of dealing with an emergency should one arise. Parents may send their kids without supervision for a ride to a grandparent's house. These may be reasonable actions if the chances of a driving accident are extremely low. But that is unlikely to be a safe assumption at the present time.
As the adoption of robotic technologies increases, methods for mitigating overtrust will require a multifaceted approach beginning with the design process. Since users might not use the technology in the ways designers intend, one recommendation to consider, at least in some cases, is to avoid features that may nudge users toward anthropomorphizing robots. Anthropomorphization can induce a false sense of familiarity in users, resulting in the expectation of human-like responses when in fact the associated risk may be much higher.
Mitigating overtrust may require the robot to have the ability to model the behavioral, emotive, and/or attentional state of the person with whom it interacts. For certain types of robots, potentially including some brands of self-driving cars, the system may need the ability to recognize if the user is paying attention or is distracted. Robots entrusted with the safety of human lives may also need to be able to detect certain characteristics about those lives. This can include whether the user is a child, or whether the user has any physical or mental impairment that may increase the risk in the current situation. For example, if a young child is left alone in a self-driving car, the system might need to be diligent and proactive about preventing certain kinds of harms, such as by monitoring the temperature of the interior cabin or warning an adult if the child is left alone for too long.
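The kind of proactive child-safety monitoring just described could, in principle, be sketched as follows. Everything here—the sensor readings, the thresholds, the alert strings—is a hypothetical illustration, not an actual vehicle API.

```python
# Hypothetical sketch of a cabin-safety monitor for a self-driving car.
# The sensor fields, thresholds, and alerts are illustrative assumptions,
# not a real vehicle interface.

from dataclasses import dataclass

MAX_SAFE_CABIN_TEMP_C = 30.0   # assumed threshold for heat risk
MAX_UNATTENDED_MINUTES = 10.0  # assumed limit before warning an adult

@dataclass
class CabinState:
    temp_c: float              # interior temperature reading
    child_present: bool        # e.g., from a seat or vision sensor
    adult_present: bool
    minutes_unattended: float  # time since an adult was last detected

def safety_alerts(state: CabinState) -> list[str]:
    """Return warnings the vehicle should escalate to a caregiver."""
    alerts = []
    if state.child_present and not state.adult_present:
        if state.temp_c > MAX_SAFE_CABIN_TEMP_C:
            alerts.append("cabin temperature unsafe for unattended child")
        if state.minutes_unattended > MAX_UNATTENDED_MINUTES:
            alerts.append("child left alone too long; notifying adult")
    return alerts
```

A production system would of course need far richer sensing and escalation logic; the point of the sketch is only that the diligence described above reduces to continuously checking cabin state and notifying an adult when assumed safety limits are exceeded.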
Contemporary research and emerging systems have begun to focus on robots that recognize and react to human behavioral, emotive, and attentional states. SoftBank Robotics, for example, claims that its Pepper robot can recognize emotions and facial expressions and use this information to determine the mood of the person with whom it is interacting.11 Presumably the same or a similar kind of approach could be applied to high-risk situations. Future robots might, and perhaps should, be able to generate information about the person's attentive state and make behavioral predictions. While such predictions can of course be mistaken, this kind of information could be used to detect and, ideally, help prevent overtrust.
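As a toy illustration of how attentional-state estimates might feed overtrust detection, consider the sketch below. The attention score, reliance measure, and threshold are invented for this example; a real system would derive them from perception and interaction logs rather than take them as given.

```python
# Hypothetical sketch: flag possible overtrust when a user relies heavily
# on automation while paying little attention. The scores and threshold
# are illustrative assumptions, not a deployed algorithm.

def overtrust_risk(attention: float, reliance: float) -> float:
    """Combine attention (0 = distracted, 1 = attentive) and reliance
    (0 = hands-on, 1 = fully delegating) into a risk score in [0, 1]."""
    if not (0.0 <= attention <= 1.0 and 0.0 <= reliance <= 1.0):
        raise ValueError("scores must be in [0, 1]")
    return (1.0 - attention) * reliance

def should_intervene(attention: float, reliance: float,
                     threshold: float = 0.6) -> bool:
    """True when the robot should prompt the user to re-engage."""
    return overtrust_risk(attention, reliance) >= threshold

# A distracted driver fully delegating to autopilot triggers a prompt:
# should_intervene(attention=0.2, reliance=1.0) -> True
```

The simple product captures the intuition in the text: risk is highest exactly when delegation is high and attention is low, and either attentiveness or hands-on engagement drives it toward zero.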
Transparency about how robots function is also critical for preventing overtrust. For people to be informed users, they need the opportunity to become familiar with the ways in which a robot may fail. DARPA and other entities have made significant investments in research projects (such as Explainable AI) that focus on creating systems that can explain their behavior to people in an understandable way.4 Applied to autonomous vehicles, for example, the system would be able to warn users of driving situations that it may not be able to handle or has little experience with.
Overall, we believe that significant research in many areas, including mental modeling and theory of mind, could help address the problem of overtrust.