systems with learning or planning capabilities functioning in complex socio-technical contexts. However, new formal verification procedures may be developed. The success of these will be an empirical question, but ultimately political leaders and military planners must judge whether such approaches are adequate for ensuring that LAWS will act within the constraints of IHL.
˲ While increasing autonomy, improved intelligence, and machine learning can boost a system's accuracy in performing certain tasks, they can also increase the unpredictability of how the system performs overall.
˲ Unpredictable behavior from a weapon system will not necessarily be lethal. But even a low-risk autonomous weapon will occasionally kill non-combatants, start a new conflict, or escalate hostilities.
Coordination, Normal Accidents, and Trust. Military planners often underestimate the risks and costs entailed in implementing weapon systems. Analyses often presume a high degree of reliability in the equipment deployed, and ease in integrating that equipment into a combat unit. Even autonomous weapons will function as components within a team that will include humans fulfilling a variety of roles, other mechanical or computational systems, and an adequate supply chain serving combat and non-combat needs.
Periodic failures or system accidents are inevitable for extremely complex systems. Charles Perrow labeled such failures "normal accidents."8 The near meltdown of a nuclear reactor at Three Mile Island in Pennsylvania on March 28, 1979, is a classic example of a normal accident. Normal accidents will occur even when no one does anything wrong. Or they can occur in a joint cognitive system—where both operators and software are selecting courses of action—when it is impossible for the operators to know the appropriate action to take in response to an unanticipated event or action by a computational system. In the latter case, the operators do the wrong thing because they misunderstand what the semi-intelligent system is trying to do. This was the case on December 6, 1999, when, after a successful landing, confusion reigned, and a Global Hawk unmanned air vehicle veered off the runway and its nose collapsed in the adjacent desert, incurring $5.3 million in damages.7
In a joint cognitive system, when anything goes wrong, the humans are usually judged to be at fault. This is largely because of assumptions that the actions of the system are automated, while humans are presumed to be the adaptive players on the team. A commonly proposed solution to the failure of a joint cognitive system is to build more autonomy into the computational system. This strategy, however, does not solve the problem. It becomes ever more challenging for a human operator to anticipate the actions of a smart system as the system and the environments in which it operates become more complex. Expecting operators to understand how a sophisticated computer thinks, and to anticipate its actions so as to coordinate the activities of the team, increases the responsibility of the operators.
Difficulty anticipating the actions of other team members (human or computational) in turn undermines trust, an essential and often overlooked element of military preparedness. Heather Roff and David Danks