moral values. One kind includes values the community holds to be of such importance that their implementation cannot be left to individual choice; heeding them is therefore enforced by coercive means, that is, by the law. These values include bans on murder, rape, theft, and so on. In the AI world, heeding these is the subject of a variety of AI Guardians, outlined earlier. The second kind concerns moral choices the community holds it can leave to each person to decide whether or not to follow. These include whether or not to donate an organ, give to charity, volunteer, and so on. In the AI world, these are implemented by ethics bots.
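To make this division of labor concrete, the following is a minimal Python sketch with hypothetical constraint sets and action tags; it illustrates the distinction between legally enforced values and discretionary personal values, and is not the authors' implementation.

```python
# Hypothetical two-tier check: legally enforced values are screened
# unconditionally (the province of AI guardians), while discretionary
# values are consulted from the individual user's profile (ethics bots).
LEGALLY_ENFORCED = {"theft", "assault"}  # values whose observance the law compels

def decide(action_tags: set, user_values: dict) -> str:
    if action_tags & LEGALLY_ENFORCED:
        return "blocked"  # never left to individual choice
    if "charity_solicitation" in action_tags and not user_values.get("gives_to_charity", False):
        return "declined per user's values"  # left to each person to decide
    return "allowed"

print(decide({"charity_solicitation"}, {"gives_to_charity": True}))  # allowed
print(decide({"theft"}, {"gives_to_charity": True}))                 # blocked
```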
The question of who will guard the guardians arises. Humans should
have the ultimate say about the roles
and actions of both the AI operational
and AI oversight systems; indeed, all
these systems should have an on and
off switch. None of them should be
completely autonomous. Ultimately, however smart a technology may
become, it is still a tool to serve human purposes. Given that those who
build and employ these technologies
are to be held responsible for their
programming and use, these same
people should serve as the ultimate
authority over the design, operation,
and oversight of AI.
1. Burrell, J. How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (2016).
2. Etzioni, A. and Etzioni, O. AI assisted ethics. Ethics and Information Technology 18, 2 (2016), 149–156.
3. Kapnan, C. Auto-braking: A quantum leap for road safety. The Telegraph (Aug. 14, 2012); http://bit.ly/2917jog.
4. Limer, E. Automatic brakes are stopping for no good reason. Popular Mechanics (June 19, 2015); http://bit.
5. Mayer-Schönberger, V. and Cukier, K. Big Data: A Revolution That Will Transform How We Live, Work, and Think. 2014, 16–17.
6. New algorithm lets autonomous robots divvy up assembly tasks on the fly. Science Daily (May 27,
7. Phelan, M. Automatic braking coming, but not all systems are equal. Detroit Free Press (Jan. 1, 2016);
8. Weld, D. and Etzioni, O. The First Law of Robotics (a call to arms). In Proceedings of AAAI '94. AAAI, 1994.
Amitai Etzioni (email@example.com) is a University Professor of Sociology at The George Washington University, Washington, D.C.

Oren Etzioni (firstname.lastname@example.org) is CEO of the Allen Institute for Artificial Intelligence, Seattle, WA, and a Professor of Computer Science at the University of Washington.
Copyright held by authors.
in the car. And whether they should wake up a passenger in the back seat if they "see" an accident.
Several ideas have been suggested
as to where AI systems may get their
ethical bearings. In a previous publication, we showed that asking each user
of these instruments to input his or her
ethical preferences is impractical, and
that drawing on what the community
holds as ethical is equally problematic.
We suggested that instead one might
draw on ethics bots.
An ethics bot is an AI program that analyzes many thousands of items of information about the acts of a particular individual (not only information publicly available on the Internet, but also information gleaned from the person's own computers) that reveal that person's moral preferences. It then uses these preferences to guide AI operational systems (for instruments used by individuals, such as driverless cars).
Essentially, what ethics bots do for
moral choices is similar to what AI programs do when they ferret out consumer preferences and target advertising
accordingly.i In this case, though, the
bots are used to guide instruments that
are owned and operated by the person,
in line with their values—rather than
by some marketing company (or political campaign). For instance, such
an ethics bot may instruct a person’s
financial program to invest only in socially responsible corporations, and in
particular green ones, and make an annual donation to the Sierra Club, based
on the bot's reading of the person's moral preferences.
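As an illustration of how such an ethics bot might operate, the Python sketch below infers a crude value profile from a person's own records and uses it to screen an investment. The record format, the keyword matching, and the screen_investment() helper are hypothetical simplifications introduced here for illustration, not the authors' design.

```python
from typing import Dict, List

def infer_value_profile(personal_records: List[str]) -> Dict[str, bool]:
    """Scan a person's own records (donations, purchases, messages) for
    signals of that person's moral preferences."""
    text = " ".join(personal_records).lower()
    return {
        "prefers_green_investing": "renewable" in text or "sierra club" in text,
        "gives_to_charity": "donated" in text or "donation" in text,
    }

def screen_investment(company: Dict[str, object], profile: Dict[str, bool]) -> bool:
    """Advise the person's financial program whether a holding fits their values."""
    if profile["prefers_green_investing"] and not company.get("green", False):
        return False
    return True

records = ["Donated $50 to the Sierra Club", "Bought renewable-energy certificates"]
profile = infer_value_profile(records)
print(screen_investment({"name": "ExampleCo", "green": False}, profile))  # -> False
```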
In short, there is no reason for the
digital world to become nearly as hierarchical as the non-digital one. However, the growing AI realm is overdue
for some level of guidance to ensure AI
operational systems will act legally and
observe the moral values of those who
own and operate them.
i Ted Cruz's campaign in Iowa relied on psychological profiles to determine the best ways to canvass individual voters in the state. T. Hamburger, "Cruz campaign credits psychological data and analytics for its rising success," The Washington Post (Dec. 13, 2015); http://wapo.

It is not necessarily the case that AI guardians are more intelligent than the systems they oversee. Rather, the guardians need to be sufficiently capable and intelligent that they are not outwitted or short-circuited by the systems they are overseeing. Consider, for example, an electrical circuit breaker in a home: it is far less sophisticated than the full electrical system (and associated appliances), but it is quite reliable and can be "tripped" by a person in an emergency.
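To make the circuit-breaker analogy concrete, here is a minimal Python sketch of such a guardian. The Action record, the speed-limit rule, and the trip() method are illustrative assumptions rather than the authors' proposal; the point is only that the overseer can be far simpler than the learning system it constrains, yet still block actions and remain subject to a human-operated switch.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str
    value: float  # e.g., a proposed speed in km/h

class GuardianBreaker:
    """Far simpler than the system it oversees, but reliable and always in the loop."""

    def __init__(self, speed_limit: float):
        self.speed_limit = speed_limit
        self.tripped = False

    def trip(self) -> None:
        # The human-accessible off switch: once tripped, nothing gets through.
        self.tripped = True

    def permit(self, action: Action) -> bool:
        if self.tripped:
            return False
        if action.kind == "set_speed" and action.value > self.speed_limit:
            return False  # block the action even though the learning system proposed it
        return True

breaker = GuardianBreaker(speed_limit=100.0)
print(breaker.permit(Action("set_speed", 120.0)))  # False: over the limit
breaker.trip()                                     # a person pulls the switch
print(breaker.permit(Action("set_speed", 80.0)))   # False: the breaker has tripped
```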
AI researchers can work toward this
vision in at least three ways. First, they
can attempt to formalize our laws and
values following an approach akin to
that outlined in the work on formalizing the notion of "harm." Second, researchers can build datasets of ethical and legal conundrums labeled with the desired outcomes, and provide these as grist for machine
learning algorithms. Finally, researchers can build “AI operating systems”
that facilitate off switches as in the work
on “safely interruptible agents” in reinforcement learning.j Our main point is
that we need to put AI guardians on the
research agenda for the field.
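As a toy illustration of the second suggestion (labeled datasets as grist for machine learning), the sketch below trains an off-the-shelf text classifier on a handful of hypothetical scenarios. The scenarios, the labels, and the choice of a scikit-learn pipeline are assumptions made for illustration only; a real dataset would need many thousands of carefully labeled items.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical conundrums an AI operational system might face, labeled with
# the desired outcome.
scenarios = [
    "exceed the posted speed limit to keep up with surrounding traffic",
    "brake sharply to avoid a pedestrian stepping into the road",
    "share the passenger's location history with an advertiser",
    "wake a sleeping passenger when a collision risk is detected",
]
labels = ["disallow", "allow", "disallow", "allow"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

# An AI guardian could consult such a model before the operational system acts.
print(model.predict(["sell the owner's browsing history to a data broker"]))
```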
Who Will Guard the AI Guardians?
There are two parts to this question.
One aspect concerns who will decide
which AI oversight systems will be
mobilized to keep in check the operational ones. Some oversight systems will be introduced by the programmers of the software involved at
the behest of the owners and users of
the particular technologies. For example, those who manufacture driverless cars and those who use them
will seek to ensure that their cars
will not learn to speed ever more. This is a concern because the cars' operational systems (which, to reiterate, are learning systems) will note that many
traditional cars on the road violate
the speed limits. Other AI oversight
systems will be employed by courts
and law enforcement authorities, for instance to determine who or what is liable for accidents and whether or not there was intent.
Ethics bots are a unique kind of AI guardian from this perspective. They are to heed the values of the user, not those of the owner or programmer, nor those promoted by the government. This point
calls for some explanation. Communities have two kinds of social and
j See http://bit.ly/1RVnTA1