machines is more powerful than either
one of them alone. The strongest chess
player today, for example, is neither a human nor a computer, but a human team
using computers. This is also IBM’s cognitive computing vision, based on the
Watson technology that defeated the
human champions of “Jeopardy!” Today
IBM is seeking to deploy Watson cognitive computing services in various sectors. For example, a human doctor aided
by a Watson cognitive assistant would
be more effective in diagnosing and
treating diseases than either Watson or
the doctor working separately.
While human-machine cooperation is a hopeful avenue to explore in the short to medium term, it is not clear how successful it will be, and by itself it is not an adequate solution to the social issues that AI automation poses. These issues constitute a major crisis of public policy. Addressing this crisis effectively requires that scientifically literate government planners work together with computer scientists and technologists in industry to alleviate the devastating effects of rapid technological change on the economy. The cohesion of the social order depends upon an intelligent discussion of the nature of this change, and the implementation of rational policies to maximize its general social benefit.
1. Allen, P. and Greaves, M. The singularity isn’t near.
MIT Technology Review, 2011.
2. Amodei, D. et al. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
3. Autor, D.H. Why are there still so many jobs? The
history and future of workplace automation. Journal
of Economic Perspectives 29, 3 (Mar. 2015).
4. Bengio, Y., LeCun, Y., and Hinton, G. Deep learning.
Nature 521, 2015.
5. Bostrom, N. Superintelligence: Paths, Dangers,
Strategies. Oxford University Press, 2014.
6. Brynjolfsson, E. and McAfee, A. The Second Machine
Age: Work, Progress, and Prosperity in a Time of
Brilliant Technologies. W. W. Norton and Co., 2016.
7. Ford, M. Rise of the Robots: Technology and the Threat
of a Jobless Future. Basic Books, 2015.
8. Good, I.J. Speculations concerning the first ultraintelligent machine. In F.L. Alt and M. Rubinoff, Eds., Advances in Computers 6. Academic Press, 1965.
9. Häggström, O. Here Be Dragons: Science, Technology and the Future of Humanity. Oxford University Press, 2016.
10. Markoff, J. Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. HarperCollins, 2015.
11. Shanahan, M. The Technological Singularity. MIT
Press Essential Knowledge Series. MIT Press, 2015.
Devdatt Dubhashi (email@example.com) is a professor in the Department of Computer Science and Engineering at Chalmers University of Technology, Sweden.
Shalom Lappin (firstname.lastname@example.org) is a professor in the Department of Philosophy, Linguistics and Theory of Science at the University of Gothenburg, Sweden.
Copyright held by authors.
the Google DeepMind scientists agree
that the science and technology currently associated with such an approach
is in a thoroughly primitive state.
In fact, much, if not all, of the argument for existential risks from superintelligence seems to rest on mere logical
possibility. In principle it is possible that
superintelligent artificial agents could
evolve, and there is no logical inconsistency in assuming they will. However,
many other threats are also logically
possible, but two considerations are
always paramount in determining our
response: a good analysis and estimate
of the risk and a good understanding of
the underlying natural or technological
phenomena needed to formulate a response. What is the likelihood of superintelligent agents of the kind Bostrom and Häggström worry about? While it is
difficult to compute a meaningful estimate of the probability of the singularity,
the arguments here suggest to us that it
is exceedingly small, at least within the
foreseeable future, and this is the view of
most researchers at the forefront of AI research. AI technology is also far from the mature state in which credible risk assessment is possible and meaningful responses can be formulated. This can be contrasted with other areas of science and technology that pose
an existential threat, for example, climate change and CRISPR gene editing.
In these cases, we have a good enough understanding of the science and technology to form credible (even quantitative) threat assessments and formulate appropriate responses. Recent position papers such as Amodei et al.2 ground concerns in real machine-learning research, and have initiated discussions of practical ways to engineer AI systems that operate safely and reliably.
In contrast to superintelligent agents, we are currently facing a very real and substantive threat from AI of an entirely different kind. Brynjolfsson and McAfee6 and Ford7 show that current AI technology is automating a significant number of jobs. This trend has been increasing sharply in recent years, and it now threatens highly educated professionals from accountants to medical and legal consultants. Various reports have estimated that up to 50% of jobs in western economies like the U.S. and Sweden could be eliminated through automation over the next few decades. As Brynjolfsson and McAfee note toward the end of their book, the rise of AI-driven automation will greatly exacerbate the already acute disparity in wealth between those who design, build, market, and own these systems on one hand, and the remainder of the population on the other. Reports presented at the recent WEF summit in Davos make similar predictions. Governments and public planners have not developed plausible programs for dealing with the massive social upheaval that such economic dislocation is likely to cause.
A frequently mentioned objection to this concern is that while new technologies can destroy some jobs, they also create new jobs that absorb the displaced workforce. This is how it has always been in the past: unemployed agricultural workers, for example, eventually found jobs in factories. Why should this time be different? Brynjolfsson and McAfee argue that information technologies like AI differ from previous technologies in being general-purpose technologies that have a pervasive impact across many different parts of the economy. They and Ford argue that no form of employment is immune to automation by intelligent AI systems. MIT economist David Autor points to deep and long-term structural changes in the economy as a direct result of these technologies.3
One way in which AI-powered systems can improve production and services while avoiding massive unemployment is through a partnership of people and machines, a theme running through John Markoff's book.10 He points out that the combination of humans and