Open research problem. What are the constraints on when expertise learned by one robot can simply be copied to become part of the expertise of another robot?
Hybrid decision architectures. Over the centuries, morality and ethics have been developed as ways to guide people to act in trustworthy ways. The three major philosophical theories of ethics (consequentialism, deontology, and virtue ethics) provide insights into the design of a moral and ethical decision architecture for intelligent robots. However, none of these theories is, by itself, able to meet all of the demanding performance requirements of ethical decision making in the real world. A hybrid architecture is needed, operating at multiple time-scales and drawing on aspects of all three ethical theories: fast but fallible pattern-directed responses; slower deliberative analysis of the results of fast decisions; and yet slower individual and collective learning processes.
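One way to picture such an architecture is the following Python sketch, in which the three time-scales appear as three methods of one agent. The class, its collaborators, and all method names are hypothetical illustrations, not an established design.

```python
# Hypothetical sketch of a multi-timescale hybrid decision architecture.
# The layer interfaces and policies are illustrative assumptions only.

class HybridEthicalAgent:
    def __init__(self, rules, cases, utility_model):
        self.rules = rules                    # deontological constraints (fast filter)
        self.cases = cases                    # virtue-ethics case library (pattern match)
        self.utility_model = utility_model    # consequentialist evaluator (slow)
        self.history = []

    def act(self, situation):
        """Fast path: pattern-directed response, filtered by hard constraints."""
        candidates = [a for a in situation.actions
                      if self.rules.permits(a, situation)]
        action = self.cases.most_similar_action(situation, candidates)
        self.history.append((situation, action))
        return action

    def deliberate(self):
        """Slower path: re-evaluate recent fast decisions by expected utility."""
        for situation, action in self.history[-10:]:
            if self.utility_model.expected_utility(situation, action) < 0:
                self.cases.mark_negative(situation, action)  # learn from mistakes

    def learn(self, shared_experiences):
        """Slowest path: individual and collective learning over many episodes."""
        self.cases.update(self.history + shared_experiences)
```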
How can theories of philosophical
ethics help us understand how to design robots and other AIs to behave
well in our society?
Three major ethical theories. Consequentialism is the philosophical position that the rightness or wrongness of an action is defined in terms of its consequences.34 Utilitarianism is a type of consequentialism that, like decision theory and game theory, holds that the right action in a situation is the one that maximizes a quantitative measure of utility. Modern theories of decisions and games20 contribute the rigorous use of probabilities, discounting, and expected utilities for dealing with uncertainty in perception, belief, and action.
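As a minimal illustration of the expected-utility calculation at the heart of this machinery, consider the following sketch; the actions, probabilities, and utilities are invented for the example.

```python
# Minimal expected-utility decision sketch (illustrative numbers only).
# Each action maps to a list of (probability, utility) pairs over its outcomes.
actions = {
    "treat": [(0.85, +10.0), (0.15, -40.0)],  # likely benefit, rare severe harm
    "wait":  [(0.60,  +2.0), (0.40,  -5.0)],
    "refer": [(1.00,  +4.0)],                 # certain modest benefit
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outcomes in actions.items():
    print(f"{a:>6}: EU = {expected_utility(outcomes):+.2f}")
print("choose:", best)
```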
Where decision theory tends to define utility in terms of individual reward, utilitarianism aims to maximize the overall welfare of everyone in society.13,32 While this avoids some of the problems of selfish utility functions, it raises new problems. For example, caring for one's family can have lower utility than spending the same resources to reduce the misery of distant strangers, and morally repellent actions can be justified by the greater good.19
A concise expected-utility model supports efficient calculation. However, it can be quite difficult to formulate a concise model by determining the best small set of relevant factors. In the field of medical decision making,24 decision analysis models are known to be useful, but are difficult and time-consuming to formulate. Setting up an individual decision model requires expertise to enumerate the possible outcomes, extensive literature search to estimate probabilities, and extensive patient interviews to identify the appropriate utility measure and elicit the values of outcomes, all before an expected utility calculation can be performed. Even then, a meaningful decision requires extensive sensitivity analysis to determine how the decision could be affected by uncertainty in the estimates. While this process is not feasible for making urgent decisions in real time, it may still be useful for post-hoc analysis of whether a quick decision was the right one.
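A toy one-way sensitivity analysis might look like the following sketch, which sweeps one uncertain probability estimate to find where the recommended action flips; the utility values are assumed for illustration.

```python
# Toy one-way sensitivity analysis (illustrative values only): sweep an
# uncertain probability of treatment success and report where the
# expected-utility-maximizing choice flips from "wait" to "treat".
U_SUCCESS, U_HARM, U_WAIT = 10.0, -40.0, 1.0  # assumed utilities

def eu_treat(p_success):
    return p_success * U_SUCCESS + (1 - p_success) * U_HARM

threshold = None
for i in range(101):
    p = i / 100
    if eu_treat(p) >= U_WAIT and threshold is None:
        threshold = p
print(f"'treat' beats 'wait' once P(success) >= {threshold:.2f}")
```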
Deontology is the study of duty (deon in Greek), which expresses morality and ethics in terms of obligations and prohibitions, often specified as rules and constraints such as the Ten Commandments or Isaac Asimov's Three Laws of Robotics.4 Deontological rules and constraints offer the benefits of simplicity, clarity, and ease of explanation, but they raise questions of how they are justified and where they come from.30 Rules and constraints are standard tools for knowledge representation and inference in AI,29 and can be implemented and used quite efficiently.
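A minimal sketch of how such rules and constraints might act as a filter on candidate actions follows; the predicates are illustrative stand-ins, loosely echoing Asimov's first two laws, not an established formalization.

```python
# Hypothetical deontological filter: prohibitions are predicates over
# (action, situation); an action is permitted only if no prohibition fires.
# The rules and action encodings below are invented for this sketch.

PROHIBITIONS = [
    lambda act, sit: act.get("harms_human", False),       # cf. Asimov's First Law
    lambda act, sit: (act.get("disobeys_order", False)    # cf. the Second Law
                      and not sit.get("order_conflicts_first_law", False)),
]

def permitted(action, situation):
    return not any(rule(action, situation) for rule in PROHIBITIONS)

candidates = [
    {"name": "push_cart",    "harms_human": False, "disobeys_order": False},
    {"name": "shove_person", "harms_human": True,  "disobeys_order": False},
]
situation = {"order_conflicts_first_law": False}
print([a["name"] for a in candidates if permitted(a, situation)])
```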
However, in practice, rules and constraints always have exceptions and unintended consequences. Indeed, most of Isaac Asimov's I, Robot stories4 focus on unintended consequences and necessary extensions to his Three Laws.
Virtue ethics holds that the individual learns through experience and practice to acquire virtues, much as an expert craftsman learns skills, and that virtues and skills are similarly grounded in appropriate knowledge about the world.16,37 Much of this knowledge consists of concrete examples (cases) that illustrate positive and negative instances of virtuous behavior. An agent who is motivated to be more virtuous tries to act more like the cases of virtuous behavior (and less like the non-virtuous cases) that it has learned. Phronesis (or "practical wisdom") describes an exemplary state of knowledge and skill that supports appropriate responses to moral and ethical problems.
A computational method suitable for virtue ethics is case-based reasoning,18,22 which represents knowledge as a collection of cases describing concrete situations, the actions taken in those situations, and the results of those actions. The current situation is matched against the stored cases to identify the most similar ones; the actions they record are adapted according to the differences, and the adapted actions and their outcomes are evaluated. Both rule-based and case-based reasoning match the current situation (which may be very complex) against stored patterns (rules or cases).
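A minimal sketch of this retrieve-and-adapt cycle, assuming a toy feature encoding and Euclidean similarity, follows; the case library and the crude adaptation rule are invented for illustration.

```python
# Illustrative case-based reasoning skeleton: cases are (situation features,
# action, outcome) triples; retrieval is nearest-neighbor in feature space.
# The features, cases, and adaptation rule are assumptions for this sketch.
import math

CASE_LIBRARY = [
    # (situation features,     action,          outcome score)
    ((0.9, 0.1, 0.0), "intervene",     +1.0),  # virtuous: helped someone in danger
    ((0.1, 0.8, 0.2), "wait",          +0.5),
    ((0.8, 0.2, 0.9), "call_for_help", -0.5),  # poor outcome: acted too slowly
]

def retrieve(situation, k=1):
    """Return the k stored cases most similar to the current situation."""
    return sorted(CASE_LIBRARY,
                  key=lambda case: math.dist(case[0], situation))[:k]

def decide(situation):
    features, action, outcome = retrieve(situation, k=1)[0]
    # Crude adaptation/evaluation step: imitate the nearest case only if
    # its recorded outcome was positive; otherwise avoid that action.
    return action if outcome > 0 else "avoid:" + action

print(decide((0.85, 0.15, 0.05)))  # nearest case suggests "intervene"
```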
Virtue ethics and deontology differ in their approach to the complexity of ethical knowledge. Deontology assumes that a relatively simple abstraction (defined by the terms appearing in the rules) applies to many specific cases, distinguishing between right and wrong. Virtue ethics recognizes the complexity of the boundaries between ethical judgments in the space of possible situations (Figure 2).

Figure 2. Fractal boundaries. Geometric fractal boundaries provide a metaphor for the complexity of the boundaries between different ethical evaluations in the high-dimensional space of possible situations. Simple boundaries can approximate the fractal set, but can never capture its shape exactly.