exceptions to a ban. The use of LAWS to protect refugee non-combatants would probably be embraced as an exception. Whether the use of LAWS in a combat zone where there are no non-combatants should be treated as an exception to a ban would need to be debated. Offensive autonomous weapon systems that do not target humans, but only target, for example, unmanned submarines, might be deemed an exception.
9. Use the clearly unacceptable LAWS to campaign for a broad ban, together with a mechanism for adding future exceptions.
10. Demand that the onus of ensuring that LAWS will be controllable, and that those who deploy them will be held accountable, lies with the parties who petition for an exception to the ban and deploy systems under it.
Unpredictable Behavior:
Why Some LAWS Must Be Banned
A ban will not succeed unless there is a
compelling argument for restricting at
least some, if not all, LAWS. In addition
to the ethical arguments for and against
LAWS, concern has been expressed that
autonomous weapons will occasionally
behave unpredictably and therefore
might violate IHL, even when this is not
the intention of those who deploy the
system. The ethical arguments against LAWS have already received serious attention in recent years, including within the ACM. During my testimony at the
CCW in April 2016, I fleshed out why
the prospect of unanticipated behavior should be taken seriously by member states. The points I made are fairly
well understood within the community
of AI and robotics engineers, and go
beyond weaponry to our ability to predict, test, verify, validate, and ensure
the behavior and reliability of software
and indeed any complex system. In addition, debugging software and ensuring it is secure can be a costly and never-ending challenge.
Factors that influence a system’s predictability. Predictability for weaponry means that within the task limits for which the system is designed, the anticipated behavior will be realized, yielding the intended result. However, nothing less than a law of physics is absolutely predictable. There are only degrees of predictability, which in theory can be represented as a probability. Many factors influence the predictability of a system’s behavior and whether operators can properly anticipate it.
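To make that probabilistic framing concrete, a system's degree of predictability can be estimated empirically: simulate the system many times under perturbed conditions and measure how often its behavior stays within the designed task limits. The following minimal Python sketch does exactly that; the system model, tolerance, and noise levels are hypothetical stand-ins, not drawn from any real weapon system.

    import random

    def system_response(command, disturbance):
        # Hypothetical system: the intended behavior tracks the
        # command, but an unanticipated disturbance can alter it.
        return command + disturbance

    def estimate_predictability(command=1.0, tolerance=0.1,
                                noise=0.05, trials=100_000):
        # Monte Carlo estimate of the probability that realized
        # behavior stays within the task limits (command +/- tolerance).
        within_limits = 0
        for _ in range(trials):
            disturbance = random.gauss(0.0, noise)
            behavior = system_response(command, disturbance)
            if abs(behavior - command) <= tolerance:
                within_limits += 1
        return within_limits / trials

    # Predictability degrades as the environment grows noisier.
    for noise in (0.02, 0.05, 0.10):
        p = estimate_predictability(noise=noise)
        print(f"noise={noise:.2f}  P(within task limits) ~ {p:.3f}")

Even in this toy setting the estimated probability falls as environmental noise grows, which is the sense in which predictability comes only in degrees.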
˲ An unanticipated event, force, or
resistance can alter the behavior of
even highly predictable systems.
˲ Many if not most autonomous systems are best understood as complex adaptive systems. Within systems theory, complex adaptive systems act unpredictably on occasion, have tipping points that lead to fundamental reorganization, and can even display emergent properties that are difficult, if not impossible, to explain (a minimal illustration of such a tipping point follows this list).
˲ Complex adaptive systems fail for a variety of reasons, including incompetence or wrongdoing; design flaws and vulnerabilities; underestimated risks and failure to plan for low-probability events; unforeseen high-impact events (“Black Swans”12); and what Charles Perrow characterized as uncontrollable and unavoidable “normal accidents” (discussed more fully below).
˲ Reasonable testing procedures cannot be exhaustive and can fail to reveal whether a complex adaptive system will behave in an uncertain manner. Furthermore, the testing of complex systems is costly and affordable for only a few states, which tend to be under pressure to cut military expenditures. To make matters worse, each software error fixed and each new feature added can alter a system’s behavior in ways that can require additional rounds of extensive testing. No military can support the time and expense entailed in testing systems that are continually being upgraded.
˲ Learning systems can be even more problematic. Each new task or strategy learned can alter a system’s behavior and performance. Furthermore, learning is not just a process of adding and altering information; it can alter the very algorithm that processes the information. Placing on the battlefield a system that can change its own programming significantly raises the risk of uncertain behavior. Retesting dynamic systems that are constantly learning is impossible.
˲ For some complex adaptive systems, various mathematical proofs or formal verification procedures have been used to ensure appropriate behaviors. Existing approaches to formal verification, however, will not be adequate for systems that continue to adapt and learn after deployment.
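As flagged in the list above, even a tiny deterministic system can exhibit a tipping point beyond which its behavior defies prediction and exhaustive testing. The sketch below uses the standard logistic map, a textbook demonstration rather than anything drawn from a fielded system: below a critical parameter the trajectory settles into a fixed point, while past it two runs whose starting states differ by only 10^-9 diverge completely within a few dozen steps.

    def logistic_trajectory(r, x0, steps=50):
        # Iterate the logistic map x <- r * x * (1 - x), a minimal
        # deterministic system with a tipping point into chaos.
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    # Compare the regimes below (r=2.8) and above (r=3.9) the tipping point.
    for r in (2.8, 3.9):
        a = logistic_trajectory(r, 0.500000000)
        b = logistic_trajectory(r, 0.500000001)  # starts differ by 10^-9
        divergence = max(abs(x - y) for x, y in zip(a, b))
        print(f"r={r}: final state {a[-1]:.4f}, "
              f"max divergence from a near-identical start: {divergence:.4f}")

No finite test suite can certify the long-run behavior of the chaotic regime, which is the practical force of the testing and verification concerns raised in the preceding points.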