The fifth and last general rule of AI application regulation is that AI must not increase any bias that already exists in our systems. Today, AI uses data to predict the future. If the data says, in a hypothetical example, that white people default on loans at a rate of 60%, compared with only 20% for people of color, then that race information becomes important to the algorithm. Unfortunately, predictive algorithms generalize in order to make predictions, which strengthens the patterns in the data. The AI is using the data to protect the underwriters, but in effect it is institutionalizing bias into the underwriting process and introducing a morally reprehensible result. There are mathematical methods to ensure algorithms do not introduce extra bias; regulations must ensure those methods are used.
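To make "mathematical methods" concrete, here is a minimal sketch, assuming one particular (and contested) fairness measure, the demographic-parity gap in approval rates between two groups; the threshold, toy data, and function names are illustrative assumptions rather than anything the article prescribes. A regulator could require that a check of this kind be run and logged before an underwriting model is deployed.

```python
# Minimal sketch (illustrative only): gate a lending model on a simple
# fairness metric before it is allowed into production. The metric
# (demographic-parity gap), the 5-point threshold, and the toy data
# below are assumptions for illustration.

def approval_rate(decisions, group, target_group):
    """Fraction of applicants in `target_group` whose loans were approved."""
    in_group = [d for d, g in zip(decisions, group) if g == target_group]
    return sum(in_group) / len(in_group) if in_group else 0.0

def parity_gap(decisions, group, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions, group, group_a) -
               approval_rate(decisions, group, group_b))

# decisions[i] is 1 if the model approved applicant i, 0 otherwise;
# group[i] is the applicant's (hypothetical) demographic group.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
group     = ["a", "a", "a", "a", "b", "b", "b", "b"]

MAX_GAP = 0.05  # assumed policy threshold: at most 5 points of disparity
gap = parity_gap(decisions, group, "a", "b")
if gap > MAX_GAP:
    raise RuntimeError(f"Model rejected: approval-rate gap of {gap:.0%} "
                       "exceeds the allowed disparity")
```

Which fairness criterion is the right one is itself part of the sociological debate discussed below; the sketch only shows that a bias gate of this kind is easy to automate and audit.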
A related issue here is that AI, in all its forms (robotics, autonomous systems, embedded algorithms), must be accountable, interpretable, and transparent so that people can understand the decisions machines make. Predictive algorithms can be used by states to calculate the future risk posed by inmates and have been used in sentencing decisions in court trials. AI and algorithms are used in decisions about who has access to public services and who undergoes extra scrutiny by law enforcement. All of these applications pose thorny questions about human rights, systemic bias, and the perpetuation of inequities.
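As a small illustration of what "interpretable" can mean in practice, the sketch below scores a hypothetical risk instrument with a transparent linear model and prints a per-feature breakdown of every decision; the features, weights, and threshold are invented for illustration and are not drawn from any real instrument.

```python
# Minimal sketch (illustrative only): a transparent risk score whose every
# decision can be explained in plain terms. The features, weights, and
# threshold are invented; real risk instruments are far more complex and
# far more contested.

WEIGHTS = {"prior_offenses": 0.6, "age_under_25": 0.3, "employed": -0.4}
THRESHOLD = 1.0  # assumed cutoff separating "high" from "low" predicted risk

def explain_score(person: dict) -> str:
    """Return the score plus a per-feature breakdown a human can audit."""
    contributions = {f: WEIGHTS[f] * person.get(f, 0) for f in WEIGHTS}
    score = sum(contributions.values())
    lines = [f"score = {score:+.2f} ({'high' if score > THRESHOLD else 'low'} risk)"]
    for feature, value in contributions.items():
        lines.append(f"  {feature}: {value:+.2f}")
    return "\n".join(lines)

print(explain_score({"prior_offenses": 2, "age_under_25": 1, "employed": 1}))
```

Whether such a breakdown is enough to satisfy accountability in a courtroom is exactly the kind of question that cannot be settled by code alone.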
This brings up one of the thorniest
issues in AI regulation: It is not just a
technological issue, with a technological fix, but a sociological issue that requires ethicists and others to bring
their expertise to bear.
AI, particularly deep learning and machine reading, is really about big data. And data will always bear the marks of its history. When Google is training its algorithm to identify something, it looks to human history, held in those data sets. So if we are going to try to use that data to train a system, to make recommendations or to make autonomous decisions, we need to be deeply aware of how that history has worked and if we as a society want that
outcome to continue. That’s much bigger than a purely technological question.
A problem with regulating AI is that it is difficult to define what AI is; it is showing up in more and more devices. The technology is progressing so fast, and gets integrated into our lives so quickly, that the line between dumb and smart machines is inevitably fuzzy.
Even the concept of “harm” is difficult to put into an algorithm. Self-driving cars have the potential to sharply reduce highway accidents, but AI will also cause some accidents, and it’s easier to fear the AI-generated accidents than the human-generated ones. “Don’t stab people” seems pretty clear. But what about giving children vaccinations? That’s stabbing people. Or let’s say I ask my intelligent agent to reduce my hard disk utilization by 20%. Without common sense, the AI might delete my not-yet-backed-up Ph.D. thesis. The Murphy’s Law of AI is that when you give it a goal, it will pursue that goal whether or not you like the implications of achieving it (see the Sorcerer’s Apprentice). AI has little common sense when it comes to defining vague concepts such as “harm,” as co-author Daniel Weld and I first discussed in 1994.a
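To make the hard-disk example concrete, here is a minimal sketch of a literal-minded cleanup agent, assuming an invented file listing and a "protected files" guard; none of this is from the 1994 paper. It simply shows how a goal pursued without a harm constraint queues exactly the wrong file for deletion.

```python
# Minimal sketch (illustrative): a literal-minded "reduce disk usage by 20%"
# agent that frees space by deleting the least-recently-used files first.
# File names, sizes (GB), and ages (days since last access) are invented.

files = {
    "cache/tmp.bin":         {"size": 3,  "age_days": 1},
    "photos/archive.zip":    {"size": 6,  "age_days": 30},
    "phd_thesis.docx":       {"size": 1,  "age_days": 200},  # finished, never backed up
    "movies/old_backup.mkv": {"size": 10, "age_days": 90},
}

GOAL_FRACTION = 0.20              # "reduce my hard disk utilization by 20%"
PROTECTED = {"phd_thesis.docx"}   # the common-sense guard the goal alone omits

def plan_deletions(files, goal_fraction, protected=frozenset()):
    """Queue files for deletion, least recently used first, until the goal is met."""
    target = goal_fraction * sum(f["size"] for f in files.values())
    freed, plan = 0, []
    for name, meta in sorted(files.items(), key=lambda kv: -kv[1]["age_days"]):
        if freed >= target:
            break
        if name in protected:
            continue
        plan.append(name)
        freed += meta["size"]
    return plan

print(plan_deletions(files, GOAL_FRACTION))                   # thesis is spared
print(plan_deletions(files, GOAL_FRACTION, protected=set()))  # thesis goes first
```

The agent satisfies the stated goal either way; only the explicit constraint encodes the owner’s idea of harm.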
But given that regulation is difficult, yet entirely necessary, what are the broad precepts we should use to thread the needle between too much, and not enough, regulation? I suggest five broad guidelines for regulating AI applications.b Existing regulatory bodies, such as the Federal Trade Commission, the SEC, Homeland Security, and others, can use these guidelines to focus their efforts to ensure AI, in application, will not harm humans.
Five Guidelines for Regulating
AI Applications
The first place to start is to set up regulations against AI-enabled weaponry and cyberweapons. Here is where I agree with Musk: In a letter to the United Nations, Musk and other technology leaders said, “Once developed, [autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.” So as a start, we should not create AI-enabled killing machines. The first regulatory principle is: “Don’t weaponize AI.”

a Weld, D. and Etzioni, O. The First Law of Robotics (A Call to Arms). Proceedings of AAAI, 1994.
b I introduced three of these guidelines in a New York Times op-ed in September 2017; https://nyti.ms/2exsUJc
Now that the worst case is handled,
let’s look at how to regulate the more
benign uses of AI.
The next guideline is that an AI is subject to the full gamut of laws that apply to its human operator. You can’t claim, like a kid to his teacher, that the dog ate your homework. Saying “the AI did it” has to mean that you, as the owner, operator, or builder of the AI, did it. You are the responsible party who must ensure your AI does not hurt anyone, and if it does, you bear the fault. Sometimes the owner of the AI will be at fault, and sometimes the manufacturer, but there is a well-developed body of existing law to handle these cases.
The third guideline is that an AI shall clearly disclose that it is not human. This means Twitter chat bots, poker bots, and others must identify themselves as machines, not people. This is particularly important now that we have seen the ability of political bots to comment on news articles and generate propaganda and political discord.c
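One possible (assumed, not standardized) way to operationalize this guideline is to make disclosure structurally unavoidable, as in the sketch below, where every outgoing bot message is stamped as machine-generated regardless of what the underlying model produces.

```python
# Minimal sketch (illustrative): force every outgoing bot message to disclose
# that its author is a machine. The wrapper and tag format are assumptions,
# not part of any platform's API.

DISCLOSURE = "[automated account - this message was generated by software]"

class DisclosingBot:
    def __init__(self, generate_reply):
        # generate_reply is whatever model or rule system produces the text
        self.generate_reply = generate_reply

    def reply(self, incoming: str) -> str:
        text = self.generate_reply(incoming)
        # The disclosure is appended unconditionally, so no code path can
        # send a message that pretends to come from a person.
        return f"{text}\n{DISCLOSURE}"

bot = DisclosingBot(lambda msg: "Thanks for your comment!")
print(bot.reply("Great article."))
```

A platform could enforce the same rule at its API boundary rather than trusting individual bot authors to comply.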
The fourth precept is that an AI shall not retain or disclose confidential information without explicit prior approval from the source. This is a privacy necessity, which will protect us from others misusing the data collected from our smart devices, including Amazon Echo, Google Home, and smart TVs. Even seemingly innocuous house-cleaning robots create maps that could potentially be sold. This suggestion is a fairly radical departure from the current state of U.S. data policy, and would require some kind of new legislation to enact, but the privacy issues will only grow, and a more stringent privacy policy will become necessary to protect people and their information from bad actors.
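As a sketch of what “explicit prior approval from the source” might look like in code, the example below retains smart-device data only when the user has opted in for that specific category; the consent registry, device categories, and defaults are illustrative assumptions.

```python
# Minimal sketch (illustrative): retain device data only when the source has
# given explicit prior approval. Users, categories, and the in-memory
# "registry" are assumptions standing in for real consent records.

consent = {
    ("alice", "voice_recordings"): False,   # never granted
    ("alice", "floor_map"): True,           # explicitly approved
}

stored = []

def handle_event(user: str, category: str, payload: str) -> bool:
    """Store the payload only if the user explicitly approved this category."""
    if not consent.get((user, category), False):   # default is no approval
        return False                                # drop the data entirely
    stored.append((user, category, payload))
    return True

print(handle_event("alice", "voice_recordings", "hey, play some music"))  # False
print(handle_event("alice", "floor_map", "living-room layout v2"))        # True
```

The key design choice is the default: absent an affirmative record of approval, the data is dropped rather than retained.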
c See T. Walsh, Turing’s Red Flag. Commun. ACM
59, 7 (July 2016), 34–37.