N proportional to n, and approaches 0 for any fixed n as N approaches infinity.
However, in 1982 Cook and Dwork1 proved an Ω(log n) lower bound for finding the maximum of n integers, even allowing infinitely many processors, on any parallel random-access machine (PRAM) without simultaneous writes.
In 1985 Fich et al.2 proved an Ω(log log n) lower bound for the same problem under the priority model, where the processor with the highest priority is allowed to write in case of a write conflict. The priority model is the strongest PRAM model allowing simultaneous writes.
Nevertheless, Denning and Lewis were right that Amdahl’s Law is flawed. Contrary to Amdahl’s assumption, it has already been demonstrated (though without reference to Amdahl) that, in theory, no inherently sequential computations exist. Even though sequential computations (such as sequential concurrent objects and Lamport’s bakery algorithm) may appear in concurrent systems, they have a negligible effect on speedup if the growth rate of the parallel fraction is higher than that of the sequential fraction.
1. Cook, S.A. and Dwork, C. Bounds on the time for parallel RAM’s to compute simple functions. In Proceedings of the 14th Annual ACM Symposium on Theory of Computing (San Francisco, CA, May 5–7). ACM Press, New York, 1982, 231–233.
2. Fich, F.E., Meyer auf der Heide, F., Ragde, P., and Wigderson, A. One, two, three ... infinity: Lower bounds for parallel computation. In Proceedings of the 17th Annual ACM Symposium on Theory of Computing (Providence, RI, May 6–8). ACM Press, New York, 1985.
Ferenc (Frank) Dévai, London, U.K.
Gustafson’s Law gives bounds on data-parallel processing, whereby the same operation is applied in parallel to different data elements. The standard serial algorithm for finding the maximum of n elements thus cannot be parallelized, and Amdahl’s Law predicts no speedup. However, the maximum can be computed in parallel by applying the compare operation to disjoint pairs of elements simultaneously.
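The pairwise-compare approach amounts to a tree reduction: in each round, disjoint pairs of surviving elements are compared (conceptually in parallel), so n elements need only ⌈log2 n⌉ rounds. A minimal sketch, written as a sequential simulation of that parallel schedule; the function name and structure are illustrative, not taken from the letters:

```python
import math

def parallel_max(values):
    """Simulate a tree-reduction maximum: each round compares disjoint
    pairs of survivors. All comparisons within a round are independent,
    so each round counts as one parallel step on a PRAM model."""
    survivors = list(values)
    rounds = 0
    while len(survivors) > 1:
        # Pair up survivors; an odd element passes through unchanged.
        survivors = [max(survivors[i:i + 2])
                     for i in range(0, len(survivors), 2)]
        rounds += 1
    return survivors[0], rounds

best, steps = parallel_max(range(100))
assert best == 99 and steps == math.ceil(math.log2(100))  # 7 parallel rounds
```

The logarithmic round count of this simple scheme matches the Ω(log n) lower bound of Cook and Dwork for PRAMs without simultaneous writes.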
ALAN BUNDY’S VIEWPOINT “Smart Machines Are Not a Threat to Humanity” (Feb. 2017) was too limited in light of the recent accomplishments of artificial intelligence (AI).
Reducing the entire field of AI to four
“successful AI systems”—DeepBlue,
Tartan Racing, Watson, and AlphaGo—
does not give the full picture of the impact of AI on humanity. Recent advances in pattern recognition, due mainly
to deep learning, for computer vision
and speech recognition have achieved
benchmarks comparable to human performance.2 Consider the AI technologies that power surveillance systems, as well as Apple’s Siri and Amazon’s Echo personal assistants. Looking at such AI algorithms, one can imagine artificial general intelligence being possible throughout our
communication networks, computer
interfaces, and tens of millions of Internet of Things devices in the near future.
Toward this end, DeepMind Technologies Ltd. (acquired by Google in 2014) created a game-playing program combining deep learning and reinforcement learning that both sees the board and moves the pieces on it.1
Recent advances in generative adversarial learning will reduce reliance on labeled data (and the humans who do the labeling), moving toward machine-learning software capable of self-improvement.
That four well-known AI applications are narrowly focused by design does not mean smart machines are not a threat to humanity. This is a false premise. Smart machines are a threat to humanity in indirect ways. Intelligence runs deep.
1. Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518 (Feb. 26, 2015), 529–533.
2. Xiong, W. et al. Achieving human parity in conversational speech recognition. arXiv (Feb. 17, 2017).
Myriam Abramson, Arlington, VA
Abramson misses my point. AI systems are narrowly focused not just by design but because we have yet to accomplish artificial general intelligence, a goal that still looks distant. The four examples I
included are the ones most often cited
to illustrate AI progress. Her reaction
illustrates my main point—that it is all
too easy to erroneously extrapolate
from spectacular success in a narrow
area to general success elsewhere.
That leads to real danger to humans,
but not to humanity.
Alan Bundy, Edinburgh, Scotland
Gustafson’s Law Contradicts Theory
The article “Exponential Laws of Computing Growth” by Peter J. Denning
and Ted G. Lewis (Jan. 2017) cited Gustafson’s Law (from John L. Gustafson’s
“Reevaluating Amdahl’s Law,” May
1988) to refute Amdahl’s Law. Unfortunately, Gustafson’s Law itself contradicts established theoretical results.
Both Amdahl and Gustafson claimed
to quantify the speedup t1/tN achievable
by N processors, where N > 1, t1 is the
time required to solve a computational
problem by using one processor, and tN
is the time required to solve the same
problem using N processors. Gustafson, in an attempt to interpret experimental results, said
t1/tN = s + N(1 – s),
where s, 0 < s < 1, is the proportion of
time spent by a single processor on serial
parts of the program. Gustafson claimed
this equation should replace Amdahl’s
Law as the general speedup rule for parallel and distributed computing.
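The difference between the two laws is easy to see numerically. With the same serial proportion s, Gustafson’s scaled speedup grows without bound in N, while Amdahl’s fixed-size speedup is capped at 1/s. A minimal sketch; the value s = 0.05 is illustrative, not from Gustafson’s paper:

```python
def gustafson_speedup(s, N):
    """Gustafson's scaled speedup: s + N*(1 - s)."""
    return s + N * (1.0 - s)

def amdahl_speedup(s, N):
    """Amdahl's fixed-size speedup: 1 / (s + (1 - s)/N), at most 1/s."""
    return 1.0 / (s + (1.0 - s) / N)

s = 0.05  # illustrative serial proportion
for N in (10, 100, 1000):
    print(N, round(gustafson_speedup(s, N), 2), round(amdahl_speedup(s, N), 2))
# N=10:   Gustafson   9.55, Amdahl  6.9
# N=100:  Gustafson  95.05, Amdahl 16.81
# N=1000: Gustafson 950.05, Amdahl 19.63 (approaching the 1/s = 20 cap)
```

The divergence reflects the modeling assumptions: Gustafson lets the problem size grow with N (fixed time), while Amdahl fixes the problem size.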
The sequential running time for finding the maximum of n integers is t1(n) ≤ cn, accounting for n – 1 comparisons and, in the worst case, n – 1 assignments, where c is a positive constant. Based on Gustafson’s equation,
the time to find the maximum of n integers by using N processors would be
tN(n) ≤ cn/(s + N(1 – s)),
which is bounded by a constant for any
Consider Indirect Threats of AI, Too