Representing ethical knowledge as
cases. Consider a high-level sketch of a
knowledge representation capable of
expressing rich cases for case-based
reasoning, but also highly abstracted
“cases” that are essentially rules or constraints for deontological reasoning.
Let a situation S(t) be a rich description of the current context. “Rich”
means the information content of
S(t) is very high, and also that it is available in several hierarchical levels, not
just the lowest “pixel level” description
that specifies values for a large number
of low-level elements (like pixels in an
image). For example, a situation description could include symbolic descriptions of the animate participants
in a scenario, along with their individual characteristics and categories they
might belong to, the relations holding
among them, and the actions and
events that have taken place. These symbolic descriptions might be derived
from sub-symbolic input (for example, a
visual image or video) by methods such
as a deep neural network classifier.
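As a concrete illustration (not from the original article; all names here are hypothetical), such a two-level situation description might be sketched in Python as follows:

    from dataclasses import dataclass, field

    @dataclass
    class Situation:
        """A sketch of a rich situation description S(t)."""
        time: float
        pixels: list                    # low-level "pixel level" sensor values
        assertions: set = field(default_factory=set)
        # Higher-level symbolic assertions, for example:
        #   ("is-a", "person1", "pedestrian")
        #   ("crossing", "person1", "street1")

    def perceive(pixels: list, time: float) -> Situation:
        """Stub for a learned classifier (e.g., a deep network) that
        derives symbolic assertions from sub-symbolic input."""
        return Situation(time, pixels,
                         {("is-a", "person1", "pedestrian")})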
A case 〈S, A, S′, v〉 is a description of a
situation S, the action A taken in that
situation, the resulting situation S′,
and a moral evaluation v (or valence) of
this scenario. A case representing ongoing experience will be rich, reflecting
the information-rich sensory input the
agent receives, and the sophisticated
processing that produces the hierarchical description. A case representing
the stored memory of events the agent
has experienced will be significantly
less rich. A “story” describing events
can also be represented as a case, but it
is less rich yet, consisting of a collection of symbolic assertions. An even
sparser and more schematic case is effectively the same as a rule, matching
certain assertions about a situation S,
and proposing an action A, the resulting situation S′, and perhaps the evaluation v of that scenario.
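Continuing the sketch above (again with hypothetical names), a case is simply such a four-tuple, and a maximally sparse, rule-like "case" keeps only a pattern of assertions:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Case:
        """A case <S, A, S', v>."""
        S: Situation               # antecedent situation
        A: str                     # action taken
        S_prime: Situation         # resulting situation
        v: Optional[float] = None  # moral evaluation (valence)

    @dataclass
    class RuleCase:
        """A schematic 'case' that is effectively a rule."""
        pattern: set               # assertions S must satisfy
        A: str                     # proposed action
        result: set = field(default_factory=set)  # expected in S'
        v: Optional[float] = None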
The antecedent situation S in a case 〈S, A, S′, v〉 need not describe a momentary situation. It can instead describe a class of possible scenarios (Figure 2), collecting individual cases from the agent's experience to characterize that class.
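Under those assumptions, "matching" a sparse case and retrieving rich ones might look like the following sketch, where a crude overlap count stands in for a real similarity measure over hierarchical descriptions:

    def rule_applies(rule: RuleCase, s: Situation) -> bool:
        # A rule-like case matches when all of its pattern
        # assertions hold in the current situation.
        return rule.pattern <= s.assertions

    def retrieve_similar(cases: list, s: Situation, k: int = 3) -> list:
        # Rich cases are retrieved by similarity to S(t); here,
        # shared symbolic assertions stand in for a real measure.
        return sorted(cases,
                      key=lambda c: len(c.S.assertions & s.assertions),
                      reverse=True)[:k]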
Understanding the whole elephant.
Utilitarianism, deontology, and virtue
ethics are often seen as competing,
mutually exclusive theories of the nature of morality and ethics. I treat them
here as three aspects of a more complex system for making ethical decisions (inspired by the children’s poem,
The Blind Men and the Elephant).
Rule-based and case-based reasoning (AI methods expressing key aspects of deontology and virtue ethics,
respectively) can, in principle, respond
in real time to the current situation.
Those representations also hold promise of supporting practical approaches
to explanation of ethical decisions.36
After a decision is made, when time for
reflection is available, utilitarian reasoning can be applied to analyze
whether the decision was good or bad.
This can then be used to augment the
knowledge base with a new rule, constraint, or case, adding to the agent’s
ethical expertise (Figure 3).
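A minimal sketch of this loop, under the same assumptions as the fragments above (execute and evaluate_outcome are hypothetical stubs for acting in the world and for slower utilitarian analysis):

    def execute(action: str, s: Situation) -> Situation:
        """Stub: perform the action and sense the resulting situation."""
        return s  # placeholder

    def evaluate_outcome(s: Situation, action: str,
                         s_prime: Situation) -> float:
        """Stub: slower, reflective (utilitarian) evaluation."""
        return 0.0  # placeholder

    def act_and_reflect(case_base: list, s: Situation) -> Situation:
        # Fast time scale: retrieve similar cases, choose an action.
        similar = retrieve_similar(case_base, s)
        action = similar[0].A if similar else "default-safe-action"
        s_prime = execute(action, s)
        # Slower time scale: evaluate the result and grow the case base.
        v = evaluate_outcome(s, action, s_prime)
        case_base.append(Case(s, action, s_prime, v))
        return s_prime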
Previous work on robot ethics. Formal and informal logic-based approaches to robot ethics2,3,8 express a "top-down" deontological approach to specifying moral and ethical knowledge. While modal operators like obligatory or forbidden are useful for ethical reasoning, their problem is the difficulty of specifying or learning critical perceptual concepts (see Figure 2), for example, non-combatant in Arkin's approach to the Laws of War.3
Wallach and Allen38 survey issues and
previous work related to robot ethics,
concluding that top-down approaches
such as deontology and utilitarianism
are either too simplistic to be adequate
for human moral intuitions, or too computationally complex to be feasibly implemented in robots (or humans, for
that matter). They describe virtue ethics
as a hybrid of top-down and bottom-up
methods, capable of naming and asserting the value of important virtues, while
allowing the details of those virtues to
be learned from relevant individual experience. They hold that emotions,
case-based reasoning, and connectionist learning play important roles in ethical judgment. Abney1 also reviews ethical theories in philosophy, concluding
that virtue ethics is a promising model
for robot ethics.
Scheutz and Arnold31 disagree, holding that the need for a "computationally explicit trackable means of decision making" requires that ethics be grounded in deontology and utilitarianism. However, they do not adequately consider the overwhelming complexity of the experienced world, and the need for learning and selecting concise abstractions of it.
Recently, attention has been turned
to human evaluation of robot behavior.
Malle et al.23 asked human subjects to evaluate reported decisions by humans or robots facing trolley-type problems ("Deadly Dilemmas"). The evaluators blamed robots when they did not make the utilitarian choice, and blamed humans when they did. Robinette et al.27 found that human subjects will "overtrust" a robot in an emergency situation, even in the face of evidence that the robot is malfunctioning and that its advice is bad.
Figure 3. Feedback and time scales in a hybrid ethical reasoning architecture.
Given a situation S(t), a fast case-based reasoning process retrieves similar cases, defines the
action A to take, and results in a new situation S′. At a slower time scale, the result is evaluated
and the new case is added to the case base. Feedback through explanation, justification, and
communication with others takes place at approximately this slower time scale. Abstraction
of similar cases to rules and learning of new concepts and relations are at a much slower time
scale, and social evolution is far slower still.