their creations inform users of their
limitations, and specifically warn users when they are asked to operate out
of their scope. AI systems must have
the ability to explain their reasoning
in a way that users can understand and
assent to. Because of their open-ended
behavior, AI systems are also inherently hard to verify. We must develop
software engineering techniques to
address this. Since AI systems are increasingly self-improving, we must
ensure these explanations, warnings,
and verifications keep pace with each
AI system’s evolving capabilities.
The concerns of Hawking and others were addressed in an earlier
Communications Viewpoint by Dietterich
and Horvitz.3 While downplaying these
concerns, Dietterich and Horvitz also
categorize the kinds of threats that AI
technology does pose. This apparent
paradox can be resolved by observing
that the various threats they identify
are caused by AI technology being too
dumb, not too smart.
AI systems are, of course, by no
means unique in having bugs or limited expertise. Any computer system
deployed in a safety- or security-critical situation potentially poses a threat
to health, privacy, finance, and other
realms. That is why our field is so concerned about program correctness and
the adoption of best software engineering practice. What is different about
AI systems is that some people may
have unrealistic expectations about
the scope of their expertise, simply because they exhibit intelligence—albeit
in a narrow domain.
The current focus on the very remote
threat of super-human intelligence is
obscuring this very real threat from subhuman intelligence.
But could such dumb machines
be sufficiently dangerous to pose a
threat to humanity? Yes, if, for instance, we were stupid enough to allow a dumb machine the autonomy
to unleash weapons of mass destruction. We came close to such stupidity
with Ronald Reagan and Edward Teller's 1983 proposal of a Strategic Defense Initiative (SDI, aka 'Star Wars').
Satellite-based sensors would detect
a Soviet ballistic missile launch, and
super-powered x-ray lasers would
zap these missiles from space before
they got into orbit. Since this would
need to be accomplished within seconds, no human could be in the loop.
I was among many computer scientists who successfully argued that the
most likely outcome was a false positive that would trigger the nuclear war
it was designed to prevent. There were
precedents from missile early-warning systems that had been triggered
by, among other things, a moonrise
and a flock of geese. Fortunately, in
these systems a human was in the
loop to abort any unwarranted retaliation to the falsely suspected attack.
A group of us from Edinburgh met
U.K. Ministry of Defence scientists,
engaged with SDI, who admitted they
shared our analysis. The SDI was subsequently and quietly dropped by being morphed into a saner program. This is an
excellent example of non-computer
scientists overestimating the abilities of dumb machines. One can only
hope that, like the U.K.’s MOD scientists, the developers of such weapon
systems have learned the institutional lesson from this fiasco. We all
also need to publicize these lessons
to ensure they are widely understood.
Similar problems arise in other areas
too. For example, the 2010 'flash crash'
demonstrated how vulnerable society was to the collapse of a financial
system run by secret, competing, and
super-fast autonomous agents.
Another potential existential threat
is that AI systems may automate most
forms of human employment.7,9 If my
analysis is correct then, for the foreseeable future, this automation will
develop as a coalition of systems, each
of which will automate only a narrowly
defined task. It will be necessary for
these systems to work collaboratively
with humans: orchestrating the coalition, recognizing when a system is
out of its depth, and dealing with these
'edge cases' interactively. The productivity of human workers will thereby
be dramatically increased, and the cost
of the service provided by this multi-agent approach will be dramatically reduced, perhaps leading to an increase
in the services provided. Whether this
will provide both job satisfaction and
a living income to all humans can currently only be an open question. It is
up to us to invent the future in which
it does, and to ensure this future is
maintained as the capability and scope
of AI systems increase. I do not underestimate the difficulty of achieving
this. The challenges are more political
and social than technical, so this is a
job for the whole of society.
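The coalition of narrow systems described above can be caricatured in code. The following is a minimal sketch, not anything from the Viewpoint itself: all names (`Specialist`, `orchestrate`, the confidence threshold, the toy specialists) are invented for illustration. It shows narrow systems handling only what they are confident about, with a human in the loop for the edge cases.

```python
# Hypothetical sketch of a multi-agent coalition with a human in the loop.
# All names here are invented for illustration; this is not a real framework.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Specialist:
    """A narrow AI system: competent in one domain, dumb everywhere else."""
    name: str
    can_handle: Callable[[str], float]  # self-reported confidence in [0, 1]
    solve: Callable[[str], str]

# Below this confidence, a specialist is 'out of its depth' and must defer.
CONFIDENCE_THRESHOLD = 0.8

def orchestrate(task: str, coalition: list[Specialist],
                ask_human: Callable[[str], str]) -> str:
    """Route a task to the most confident specialist, or escalate to a human."""
    best = max(coalition, key=lambda s: s.can_handle(task))
    if best.can_handle(task) >= CONFIDENCE_THRESHOLD:
        return best.solve(task)
    # Edge case: no system is sufficiently confident, so a human handles it.
    return ask_human(task)

# Example coalition with two toy specialists.
arithmetic = Specialist(
    name="arithmetic",
    can_handle=lambda t: 1.0 if t.replace("+", "").replace(" ", "").isdigit() else 0.0,
    solve=lambda t: str(sum(int(x) for x in t.split("+"))),
)
greeter = Specialist(
    name="greeter",
    can_handle=lambda t: 1.0 if t.lower().startswith("hello") else 0.0,
    solve=lambda t: "Hello to you too!",
)
coalition = [arithmetic, greeter]

print(orchestrate("2 + 3", coalition, ask_human=lambda t: "human answer"))
print(orchestrate("write a sonnet", coalition, ask_human=lambda t: "human answer"))
```

The point of the sketch is the routing decision, not the toy specialists: each system automates one narrowly defined task, and the human orchestrator absorbs everything the coalition cannot confidently handle.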
As AI progresses, we will see even
more applications that are super-intelligent in a narrow area and incredibly dumb everywhere else. The
areas of successful application will
get gradually wider and the areas of
dumbness narrower, but the latter will
not disappear. I believe this will remain true
even when we do have a deep understanding of human cognition. Maggie
Boden has a nice analogy with flight.
We do now understand how birds fly.
In principle, we could build ever more
accurate simulations of a bird, but this
would incur an increasingly exorbitant
cost, and we already achieve satisfactory human flight by alternative means:
airplanes, helicopters, paragliders,
and so forth. Similarly, we will develop
a zoo of highly diverse AI machines,
each with a level of intelligence appropriate to its task, not a new uniform
race of general-purpose, super-intelligent supplanters of humanity.
1. Cellan-Jones, R. Stephen Hawking warns artificial intelligence could end mankind. BBC Interview, (Dec. 2014); http://www.bbc.co.uk/news/
2. Davis, E. and Marcus, G. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM 58, 9 (Sept. 2015), 92–103.
3. Dietterich, T.G. and Horvitz, E.J. Rise of concerns about AI: Reflections and directions. Commun. ACM 58, 10 (Oct. 2015), 38–40.
4. Good, I.J. Speculations concerning the first ultraintelligent machine. Advances in Computers 6 (1965).
5. Kurzweil, R. The Singularity Is Near. Penguin Group, 2005.
6. Sloman, A. Exploring design space and niche space. In Proceedings of the 5th Scandinavian Conference on AI. IOS Press, Amsterdam, 1995.
7. Susskind, R. and Susskind, D. The Future of the Professions: How Technology Will Transform the Work of Human Experts. OUP Oxford, 2015.
8. Ulam, S. Tribute to John von Neumann. Bulletin of the American Mathematical Society 64, 3, part 2 (May 1958).
9. Vardi, M.Y. The future of work: But what will humans do? Commun. ACM 58, 12 (Dec. 2015).
10. Warwick, K. March of the Machines. University of Illinois Press, 2004.
Alan Bundy ( A.Bundy@ed.ac.uk) is Professor of
Automated Reasoning at the School of Informatics,
University of Edinburgh, Scotland.
Thanks to Stephan Schulz, Lucas Dixon, the St. Andrews
University Student Debating Society, and two anonymous
reviewers for feedback on earlier versions of this Viewpoint.

Copyright held by author.