Making Robots Trustworthy
Performance demands of social norms.
Morality and ethics (and certain conventions) make up the social norms
that encourage members of society to
act in trustworthy ways. Applying these
norms to the situations that arise in
our complex physical and social environment imposes demanding performance requirements.
Some moral and ethical decisions must be made quickly, for example while driving, leaving little time for deliberation.
At the same time, the physical and
social environment for these decisions is extremely complex, as is the
agent’s current perception and past
history of experience with that environment. Careful deliberation and
discernment are required to identify
the critical factors that determine the
outcome of a particular decision.
Metaphorically (Figure 2), we can
think of moral and ethical decisions
as defining sets in the extremely high-dimensional space of situations the
agent might confront. Simple abstractions only weakly approximate
the complexity of these sets.
Across moral and non-moral domains, humans improve their expertise by learning from personal experience, by learning from being told, and
by observing the outcomes when others face similar decisions. Children
start with little experience and a small
number of simple rules they have
been taught by parents and teachers.
Over time, they accumulate a richer
and more nuanced understanding of
when particular actions are right or
wrong. The complexity of the world suggests that the only way to acquire adequately complex decision criteria is through these kinds of learning.
Robots, however, are manufactured
artifacts, whose computational state can
be stored, copied, and retrieved. Even if
mature moral and ethical expertise can
only be created through experience and
observation, it is conceivable this expertise can then be copied from one robot
to another sufficiently similar one, unlike what is possible for humans.
games.40 These approaches may be useful steps, but they are inadequate for real-world decision-making because they assume simplified interactions, such as infinite repetitions of a single economic game, and because they are expensive in knowledge and computation.
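To make concrete the kind of simplified interaction these game-theoretic approaches assume, here is a minimal sketch of an iterated Prisoner's Dilemma between two fixed strategies. The payoff values and strategies are the textbook ones; the code is illustrative only and not from any system described in this article.

```python
# Minimal sketch of the simplified setting such approaches assume:
# an iterated Prisoner's Dilemma between two fixed strategies.
# Standard payoffs: mutual cooperation 3, mutual defection 1,
# defecting against a cooperator 5, being defected against 0.

PAYOFF = {  # (my_move, their_move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Return both players' total payoffs over a fixed number of rounds."""
    hist_a, hist_b = [], []   # each player's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # tit-for-tat is exploited only once
print(play(tit_for_tat, tit_for_tat))    # mutual cooperation every round
```

The point of the example is how little of the real world it captures: two agents, one repeated game, perfect observation of moves. Real social environments offer none of these simplifications.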
Social norms, including morality,
ethics, and conventions like driving on
the right side of the street, encourage
trust and cooperation among members of society, without individually negotiated agreements. We trust others
to obey traffic laws, keep their promises, avoid stealing and killing, and follow the many other norms of society.
There is vigorous discussion about the
mechanisms by which societies encourage cooperation and discourage
free riding and other norm violations.26
Intelligent robots may soon participate in our society, as self-driving cars,
as caregivers for elderly people or children, and in many other ways. Therefore, we must design them to understand and follow social norms, and to
earn the trust of others in the society. If
a robot cannot behave according to the
responsibilities of being a member of
society, then it will be denied access to the opportunities that society provides.
At this point in history, only the humans involved—designer, manufacturer, or owner—actually care about
this loss of opportunity. Nonetheless,
this should be enough to hold robots
to this level of responsibility. It remains unclear whether robots will
ever be able to take moral or legal responsibility for their actions, in the
sense of caring about suffering the
consequences (loss of life, freedom,
resources, or opportunities) of failing
to meet these responsibilities.35
Since society depends on cooperation, which depends on trust, if robots
are to participate in society, they must
be designed to be trustworthy. The
next section discusses how we might accomplish this.
Open research problem. Can computational models of human moral and
ethical decision-making be created, including moral developmental learning? Moral psychology may benefit
from such models, much as computational models have revolutionized cognitive and perceptual psychology.
Open research problem. Are there ways to formulate utility measures that are both sensitive to the impact of actions on trust and long-term cooperation, and efficient enough to allow robots to make decisions in real time?
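One naive way to make this question about utility measures concrete is a utility that weighs immediate task value against the estimated long-term cost of damaged trust. Everything here is a hypothetical illustration: the function names, the toy values, and the single `trust_weight` parameter are assumptions, not an established formulation.

```python
# Hypothetical sketch: a utility trading off immediate task value against
# the estimated long-term effect of an action on others' trust.
# The component functions and `trust_weight` are illustrative assumptions.

def task_value(action):
    # Toy stand-in: immediate benefit of each action.
    return {'keep_promise': 1.0, 'break_promise': 2.0}[action]

def trust_impact(action):
    # Toy stand-in: expected effect on future cooperation.
    return {'keep_promise': 0.0, 'break_promise': -3.0}[action]

def utility(action, trust_weight=1.0):
    return task_value(action) + trust_weight * trust_impact(action)

best = max(['keep_promise', 'break_promise'], key=utility)
print(best)  # with this weighting, keeping the promise wins
```

The open problem is precisely what this sketch glosses over: estimating `trust_impact` in a complex social environment, and doing so fast enough for real-time decisions.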