Other perils identified by AI re-
searchers in this space include uni-
lateral use of autonomous weapons
to support asymmetric warfare, the
potential unpredictability of weapon
behavior particularly where multiple
systems interact as swarms, and the
unimaginable human and material de-
struction that could result from terror-
ist use of such weapons.
Looking at the ethical issues of
LAWS, Eric Schwitzgebel, professor
of philosophy at the University of California, Riverside, with research interests
in philosophy of mind and moral psychology, discusses AI-based systems
as objects of moral concern and questions whether AI could become sophisticated enough to be conscious.
Schwitzgebel acknowledges that
such a scenario is unlikely in the short
term, but says it could be possible to
create an autonomous system capable
of experiencing joy and suffering at a
similar level to a human. If such a system
were sent to war and “died,” he suggests,
this might be no different morally from
the case of a human who is sent to war
and dies, since the system would be
human-like enough not to want this to
happen. Similarly, Schwitzgebel notes
that if a system were sent to war against
its will, this would be the moral equivalent
of creating slaves and sending them to war.
Says Schwitzgebel, “We haven’t
thought through carefully what sorts of
AI systems we need and don’t need to be
concerned about, and the differences
between them and us that would make
them morally different. Hypothetically,
an artificial being could be created with
moral rights and the capacities of a
person. This sort of AI will not be devel-
oped any time soon, but development
could go in this direction and should be
stopped short of getting there.”
Schwitzgebel cites more immediate
dangers of deploying autonomous in-
telligences in combat as loss of respon-
sibility and lack of predictability. The
loss of responsibility for autonomous
ants, and not shoot first and ask ques-
tions later. In some circumstances,
autonomous weapons could comply
better with international humanitar-
ian law than humans. But if weapons
can’t do as well as human fighters, they
should not be put in place, hence my
view on a moratorium.”
With countries including the U.S.,
U.K., China, Russia, and South Korea
developing autonomous weapons, and
the U.K. Ministry of Defence estimating
in 2011 that AI-based systems, as op-
posed to complex and clever automated
systems, could be achieved in five to 15
years and that fully autonomous swarms
of weapons such as drones could be
available by 2025, the Campaign to Stop
Killer Robots goes a step further than Ar-
kin in its call for a pre-emptive ban on all
autonomous weapons. It is pressing for
the ban to be enacted through the imple-
mentation of international legislation
or a new protocol under the Conven-
tion on Certain Conventional Weapons
(CCW), the key U.N. vehicle promot-
ing disarmament, aiming to protect
military troops from inhumane injuries,
and seeking to prevent non-combatants
from accidentally being wounded or
killed by certain types of arms.
The most recent weapons to be excluded from warfare under the CCW
treaty are blinding lasers, which were
banned in 1995.
The campaign defines three types of
robotic weapons: human-in-the-loop
weapons, robots that can select targets
and deliver force only with a human
command; human-on-the-loop weapons, robots that can select targets and
deliver force under the oversight of a
human operator who can override the
robots’ actions; and human-out-of-the-loop
weapons, robots that are capable
of selecting targets and delivering
force without any human input or interaction. While these definitions are
commonly used among developers of
AI-powered weapons, their definitive
meanings have yet to be agreed upon.
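In practice, the three categories differ only in where, if anywhere, a human decision enters the engagement chain. The sketch below is purely illustrative; the ControlMode enum and the authorize_engagement function are invented names used here to restate the three definitions as branches of a single gate:

    from enum import Enum

    class ControlMode(Enum):
        IN_THE_LOOP = "human-in-the-loop"          # human must command each delivery of force
        ON_THE_LOOP = "human-on-the-loop"          # system acts; a supervising human can override
        OUT_OF_THE_LOOP = "human-out-of-the-loop"  # no human input or interaction

    def authorize_engagement(mode, human_command=False, human_override=False):
        """Illustrative gate showing where human control enters in each category."""
        if mode is ControlMode.IN_THE_LOOP:
            # Targets may be selected automatically, but force is delivered
            # only on an explicit human command.
            return human_command
        if mode is ControlMode.ON_THE_LOOP:
            # The system may proceed on its own unless the supervising
            # human operator overrides its action.
            return not human_override
        # Out of the loop: the system selects and engages with no human input.
        return True

Which of the three branches a given system falls under is exactly what the still-unsettled definitions would have to pin down.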
Reporting on a February 2016 roundtable
discussion among AI and robotics developers
on autonomous weapons, civilian safety,
and regulation versus prohibition,
Heather Roff, a research scientist in the
Global Security Initiative at Arizona State
University whose research interests include
the ethics of emerging military technologies,
international humanitarian law, humanitarian
intervention, and the responsibility to protect,
distinguishes automatic weapons from
autonomous weapons. She describes
sophisticated automatic weapons as
incapable of learning, or of changing
their goals, although their mobility and,
in some cases, autonomous navigation
capacities mean they could wreak havoc
on civilian populations and are most
likely to be used as anti-materiel, rather
than anti-personnel, weapons.
Roff describes initial autonomous
weapons as limited learning weapons
that are capable both of learning and
of changing their sub-goals while de-
ployed, saying, “Where sophisticated
automatic weapons are concerned,
governments must think carefully
about whether these weapons should
be deployed in complex environments.
States should institute regulations on
how they can be used. But truly au-
tonomous systems—limited learning
or even more sophisticated weapons—
ought to be banned. Their use would
carry enormous risk for civilians, might
escalate conflicts, and would likely pro-
voke an arms race in AI.”
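Roff’s distinction can be put in rough software terms: a sophisticated automatic weapon applies target criteria fixed before deployment, while a limited learning weapon can revise its own sub-goals from what it observes in the field. The sketch below is illustrative only; the class names and the update_subgoals method are invented for this example.

    from typing import Callable, Dict, List

    Observation = Dict[str, str]   # stand-in for whatever the sensors report

    class AutomaticWeapon:
        """Sophisticated automatic weapon: mobile, perhaps self-navigating,
        but incapable of learning or of changing the goal set at deployment."""
        def __init__(self, target_criteria: Callable[[Observation], bool]):
            self.target_criteria = target_criteria   # fixed before deployment

        def select_target(self, obs: Observation) -> bool:
            # The same pre-programmed criteria apply for the whole deployment.
            return self.target_criteria(obs)

    class LimitedLearningWeapon:
        """Limited learning weapon: able both to learn and to change its
        sub-goals while deployed."""
        def __init__(self, mission_goal: str):
            self.mission_goal = mission_goal
            self.subgoals: List[str] = []

        def update_subgoals(self, recent_obs: List[Observation]) -> None:
            # Illustrative stand-in for a learning step: the system rewrites
            # its own sub-goals in response to what it has encountered, so
            # its in-field behavior is not fully fixed by its designers.
            self.subgoals = ["investigate " + o.get("kind", "unknown") for o in recent_obs]

It is the second class, in which goals can shift after deployment, that Roff argues ought to be banned outright.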
Toby Walsh, professor of AI at the
University of New South Wales, Aus-
tralia, says, “There are many dangers
here, not only malevolence, but also
incompetence, systems designed by
those with malicious intent, or systems
that are badly made. Today, the mili-
tary could develop, sell, and use stupid
AI that hands responsibility to weap-
ons that can’t distinguish between ci-
vilians and combatants. The technol-
ogy is brittle and we don’t always know
how it will behave, so the last place to
put AI systems that are trained on data
in the environment is the battlefield,
which is already a chaotic place.
“The real challenge is ensuring
good outcomes of AI, but unexpected
outcomes could be good or bad, and
that is for us to decide.”