and believe that intelligent computational systems are becoming more than mere machines. That prospect, however, should not blind us to the opportunity to limit their destructive impact. If and when robots become ethical actors that can be held responsible for their actions, we can then begin debating whether they are no longer machines and are deserving of some form of legal personhood.

The short-term benefits of LAWS could be far outweighed by long-term consequences. For example, a robot arms race would not only lower the barrier to accidentally or intentionally starting new wars, but could also result in a pace of combat that exceeds human response time and the reflective decision-making capabilities of commanders. Small, low-cost drone swarms could turn battlefields into zones unfit for humans. The pace of warfare could escalate beyond meaningful human control. Military leaders and soldiers alike are rightfully concerned that military service will be expunged of any virtue.

In concert with the compelling legal and ethical considerations LAWS pose for IHL, unpredictability and risk concerns suggest the need for a broad prohibition. To be sure, even with a ban, bad actors will find LAWS relatively easy to assemble, camouflage, and deploy. The Great Powers, if they so desire, will find it easy to mask whether a weapon system has the capability of functioning autonomously.

The difficulties in effectively enforcing a ban are perhaps the greatest barrier to be overcome in persuading states that LAWS are unacceptable. People and states under threat perceive advanced weaponry as essential for their immediate survival. The stakes are high. No one wants to be at a disadvantage in combating a foe that violates a ban. And yet, violations of the ban against the use of biological and chemical weapons by regimes in Iraq and in Syria have not caused other states to adopt these weapons.

The power of a ban goes beyond whether it can be absolutely enforced. The development and use of biological and chemical weapons by Saddam Hussein helped justify the condemnation of the regime and the eventual invasion of Iraq. Chemical weapons use by Bashar al-Assad has been widely condemned, even if the geopolitics of the Syrian conflict have undermined effective follow-through in support of the ban.

A ban on LAWS is likely to be violated even more than that on biological and chemical weapons. Nevertheless, a ban makes it clear that such weapons are unacceptable and those using them are deserving of condemnation. Whenever possible, that condemnation should be accompanied by political, economic, and even military measures that punish the offenders. More importantly, a ban will help slow, if not stop, an autonomous weapons arms race. But most importantly, banning LAWS will function as a moral signal that international humanitarian law (IHL) retains its normative force within the international community. Technological possibilities will not and should not succeed in pressuring the international community to sacrifice, or even compromise, the standards set by IHL.

A ban will serve to inhibit the unrestrained commercial development and sale of LAWS technology. But a preemptive ban on LAWS will neither stop nor necessarily slow the roboticization of warfare. Arms manufacturers will still be able to integrate ever-advancing features into the robotic weaponry they develop. At best, it will require that a human in the loop provide real-time authorization before a weapon system kills or destroys a target that may harbor soldiers and noncombatants alike.

Even a modest ban signals a moral victory, and will help ensure that the development of AI is pursued in a truly beneficial, robust, safe, and controllable manner. Requiring meaningful human control in the form of real-time human authorization to kill will help slow the pace of combat, but will not stop the desire for increasingly sophisticated weaponry that could potentially be used autonomously.

In spite of recent analyses suggesting that humanity has become less violent over several millennia,9 warfare itself is an evil humanity has been unsuccessful at quelling. However, if we are to survive and evolve as a species, some limits must be set on the ever more destructive and escalating weaponry technology affords. The nuclear arms race has already made clear the dangers inherent in surrendering to the inevitability of technological possibility.

Arms control will never be a simple matter. Nevertheless, we must slowly, effectively, and deliberately put a cap on inhumane weaponry and methods as we struggle to transcend the scourge of war.

References
1. Arkin, R. The case for banning killer robots: Counterpoint. Commun. ACM 58, 12 (Dec. 2015), 46–47.
2. Arkin, R. Governing Lethal Behavior in Autonomous Systems. CRC Press, Boca Raton, FL, 2009.
3. Asaro, P. On banning autonomous lethal systems: Human rights, automation and the dehumanizing of lethal decision-making. Special Issue on New Technologies and Warfare. International Review of the Red Cross 94, 886 (Summer 2012), 687–709.
4. Carpenter, C. How do Americans feel about fully autonomous weapons? The Duck of Minerva (June 19,
5. Gormley, D.M. Missile Contagion. Praeger Security
6. Gubrud, M. and Altmann, J. Compliance Measures for an Autonomous Weapons Convention. ICRAC Working Paper Series #2, International Committee for Robot Arms Control, 2013; http://bit.ly/2nf0LFu
7. Peck, M. Global Hawk crashes: Who's to blame? National Defense 87, 594 (2003); http://bit.ly/2mQJgeJ
8. Perrow, C. Normal Accidents: Living With High-Risk Technologies. Basic Books, New York, 1984.
9. Pinker, S. The Better Angels of Our Nature: Why Violence Has Declined. Penguin, 2011.
10. Roff, H. and Danks, D. Trust but verify: The difficulty of trusting autonomous weapons systems. Journal of Military Ethics (forthcoming).
11. Sagan, S.D. The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton University Press, Princeton, NJ, 2013.
12. Taleb, N.N. The Black Swan: The Impact of the Highly Improbable. Random House, 2007.
13. Wallach, W. Terminating the Terminator. Science Progress, 2013; http://bit.ly/2mjl2dy
14. Wallach, W. and Allen, C. Framing robot arms control. Ethics and Information Technology 15, 2 (2013), 125–135.

Wendell Wallach (email@example.com) is a Senior Advisor to The Hastings Center and chairs Technology and Ethics Studies at the Yale University Interdisciplinary Center for Bioethics. His latest book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.

The author would like to express appreciation to the five anonymous reviewers, whose comments contributed significantly to clarifying the arguments made in this Viewpoint.

Copyright held by author.