to kill people independently.
“I think it’s pretty clear that military mastery in the 21st century is going to depend heavily on the skillful blending of humans and intelligent machines,” says John Arquilla, professor and chair of the Department of Defense Analysis at the U.S. Naval Postgraduate School in Monterey, CA. “It is no surprise that many advanced militaries are investing substantially in this field.”
Indeed, in late 2014, U.S. Secretary of Defense Chuck Hagel unveiled the country’s so-called “Third Offset” strategy, essentially an attempt to offset the shrinking U.S. military force by incorporating technologies that improve the efficiency and effectiveness of weapons systems. While many of the specific aspects of the strategy are classified, industry observers agree a key tenet is increasing the level of autonomy in weapons systems, both to improve warfighting capability and to reduce the number of humans required to operate those systems.
“For over a decade, we were the big guy on the block,” explains Major General (Ret.) Robert H. Latiff, an adjunct professor at the Reilly Center for Science, Technology, and Values at the University of Notre Dame and a research professor at George Mason University. “But with the emergence of China, and the reemergence of Russia, both of whom are very, very technically capable, and both of whom have invested fairly significantly in these same technologies, it goes without saying that the DoD (the U.S. Department of Defense) feels like they need to do this just to keep up.”
Military guidelines published by the U.S. Department of Defense in 2012 do not completely prohibit the development and use of autonomous weapons, but require Pentagon officials to oversee their use. That is why human rights groups such as the Campaign to Stop Killer Robots are actively lobbying the international community to impose a ban on the development and use of autonomous weapons systems.
“We see a lot of investment happening in weapons systems with various levels of autonomy in them, and that was the whole reason why we decided at Human Rights Watch back in 2012 to look at this,” explains Mary Wareham, advocacy director of the Arms Division of Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. “We’re seeking a preemptive ban on the development, production, and use of fully autonomous weapons systems in the United States and around the world.”
“We’re focusing quite narrowly on the point at which critical functions of the weapons system become autonomous,” Wareham says. “The critical functions that matter to us are the selection and identification of the target, and the use of force.” Wareham’s central concern is that the fully autonomous weapons systems of the future may rely solely on algorithms to select and engage targets, without a human in the loop to verify the system has made the right decision.
Human Rights Watch is pushing to have a negotiated international treaty limiting, restricting, or prohibiting the use of autonomous weapons systems written and ratified within the next two or three years. “If this becomes [a decade-long process], then we’re in big trouble,” she admits, noting that at present, there are no such treaties in process within the international community. “At the moment, it is just talk,” Wareham acknowledges.
A key argument of groups such as Human Rights Watch is that these systems, driven by algorithms, may make mistakes in target identification, or may be impossible to recall once deployed, even if the scenario changes. Others with military experience counter that focusing on the potential for mistakes by fully autonomous weapons systems ignores the realities of warfighting.
“I think one of the problems in the discourse is the objection that a robot might accidentally kill the wrong person, or strike the wrong target,” Arquilla says. “The way to address this is to point out that in a war, there will always be accidents where the innocents are killed. This has been true for millennia, it is true now, and it is true with all of the humans killed in the Médecins Sans Frontières hospital in Afghanistan.”
Arquilla adds that while the use of artificial intelligence in weapons will not eliminate mistakes, “Autonomous weapons systems will make fewer mistakes. They don’t get tired, they don’t get angry and look for payback, they don’t suffer from the motivated and cognitive psychological biases that often lead to error in complex military environments.”
Indeed, even with today’s military technologies, getting the military or its contractors to discuss the exact algorithms used to acquire and select targets and discharge a weapon is difficult, as disclosing this information would put them at a distinct tactical disadvantage. Therefore, even if a ban were put in place, devising an inspection regime similar to those used for chemical weapons and anti-personnel mines would be extremely complicated.
“A ban is typically only as good as the people who abide by it,” Latiff says, noting that those who will sign and fully abide by a ban make up “a pretty small fraction of the rest of the world.” In practice, he says, “When something becomes illegal, everything just goes underground. It’s almost a counterproductive thing.”
Work on autonomous weapons systems has been going on for years, and experts insist that expecting militaries to stop developing new weapons systems that might provide an advantage is foolhardy and unrealistic. As such, “There is absolutely an arms race in autonomous systems underway,” Arquilla says. “We see this in both countries that are American allies, and also