ger than a purely technical question.
These five areas—no killing, responsibility, transparency, privacy, and bias—outline the general ways in which AI, left unchecked, will cause us no end of harm. So it’s up to us to check it.
The Practical Application
So how would regulations on AI technologies work? Just like all the other
regulations and laws we have in place
today to protect us from exploding air
bags in cars, E. coli in our meat, and
sexual predators in our workplaces.
Instead of creating a new, single AI
regulatory body, which would probably
be unworkable, regulations should
be embedded into existing regulatory
infrastructure. Regulatory bodies will enact ordinances, and legislators will pass laws, to protect us from the negative impacts of AI applications.
Let’s look at this in action. Let’s say
I have a driverless car, which gets in an
accident. If it’s my car, I am considered
immediately responsible. If a technological defect caused or contributed to the accident, the manufacturer shares responsibility in proportion to the defect’s role. So driverless cars will be subject to the same laws as human drivers, overseen by the Federal Motor Vehicle Safety Standards and motor vehicle driving laws.
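The proportional split described above can be sketched in code. This is a toy illustration of the idea, not a legal formula; the function name and the 30% defect figure are invented for the example.

```python
# Hypothetical sketch: apportioning liability between the car's owner and
# its manufacturer in proportion to a defect's contribution to an accident.

def apportion_liability(total_damages, defect_share):
    """Split damages: the manufacturer pays the fraction attributable
    to the technological defect; the owner bears the remainder."""
    if not 0.0 <= defect_share <= 1.0:
        raise ValueError("defect_share must be between 0 and 1")
    manufacturer = total_damages * defect_share
    owner = total_damages - manufacturer
    return owner, manufacturer

# Example: $100,000 in damages, with the defect judged 30% responsible.
owner_pays, maker_pays = apportion_liability(100_000, 0.30)
# owner_pays = 70000.0, maker_pays = 30000.0
```

In practice the defect’s share would be determined by courts or regulators case by case; the point is only that existing comparative-fault machinery extends naturally to AI-driven vehicles.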
Some might ask: But what about the trolley problem? How do we program the car to choose between hitting several people and killing the driver? That’s not an engineering problem but a philosophical thought experiment. In reality, driverless cars will reduce the number of people hurt or killed in accidents; the edge cases in which someone is hurt because of a choice made by an algorithm are a small percentage of cases. Look at Waymo, Google’s
autonomous driving division. It has
logged over two million miles on U.S.
streets and has only been at fault in
one accident, giving its cars by far the lowest at-fault rate of any driver class on the road—approximately 10
times lower than people aged 60–69
and 40 times lower than new drivers.
Now, AI applications that may cause harm will probably be introduced in the future for which no existing regulatory body has oversight. It’s up to us as a culture to identify those applications as early as possible, and to identify the regulatory agency that should take them on. Part of that will require us to shift the frame through which we view regulations: from onerous bureaucracy to protectors of well-being. We
must recognize that regulations have
a purpose: to protect humans and society from harm. One place to start
having these conversations is through
such organizations as the Partnership
on AI, where Microsoft, Apple, and
other leading AI research organizations, such as the Allen Institute for
Artificial Intelligence, are collaborating to formulate best practices on AI
technologies and serve as an open
platform for discussion and engagement about AI and its influences on
people and society. The AI Now Institute at New York University and the
Berkman-Klein Center at Harvard
University are also working on developing ethical guidelines for AI.
The difficulty of regulating AI does
not absolve us from our responsibility
to control AI applications. Not to do so
would be, well, unintelligent.
Oren Etzioni ( email@example.com) is Chief Executive
Officer of the Allen Institute for Artificial Intelligence,
Seattle, WA, USA, and Professor of Computer Science at
the University of Washington.
Copyright held by author.