can do all human mental tasks better
than today’s humans can?
First, let’s clarify a misconception
about the IBM computer, Deep Blue, that beat chess grandmaster Garry Kasparov in 1997. It was not a neural network. It was a program that searched and evaluated board positions much faster than Kasparov could. In effect, the human neural network called “Kasparov” was beaten by the IBM computer using smart
search algorithms. Kasparov bounced
back with Advanced Chess in which
humans assisted by computers played
matches; the human teams often beat the computers playing alone.
In 2016, the neural network AlphaGo won four of five Go games against the world champion Lee Se-dol. According to DeepMind researcher
David Silver, “The most important idea in AlphaGo Zero is that it
learns completely tabula rasa, that
means it starts completely from a
blank slate, and figures out for itself
only from self-play and without any
human knowledge.” AlphaGo Zero contains four TPUs, a single neural network, and software that initially knows nothing about Go or any other game. It learned to play Go without supervision, simply by playing against itself. The results look
“alien” to humans because they are
often completely new: AlphaGo Zero creates moves that humans have not discovered in more than 2,500 years of playing Go.
We think this is a development of singular significance. The time-honored method of training neural networks on large sets of curated data can in some cases be replaced by machines learning from each other, without any training data at all.
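To make the idea concrete, here is a minimal sketch in Python (our own toy illustration, not DeepMind's method; the game, parameters, and update rule are all chosen for brevity). A single value table starts with zero knowledge of the game of Nim and improves only by playing against itself:

    import random
    from collections import defaultdict

    # Tabula rasa: a value table that starts empty (all zeros) and learns
    # the toy game of Nim purely from self-play. Rules: 21 sticks, players
    # alternate taking 1-3 sticks, and whoever takes the last stick wins.
    Q = defaultdict(float)        # Q[(sticks_left, take)] -> learned value
    ALPHA, EPSILON = 0.5, 0.2     # learning rate, exploration rate

    def moves(sticks):
        return [a for a in (1, 2, 3) if a <= sticks]

    def choose(sticks):
        if random.random() < EPSILON:                  # explore sometimes
            return random.choice(moves(sticks))
        return max(moves(sticks), key=lambda a: Q[(sticks, a)])

    for episode in range(30000):
        sticks, player, history = 21, 0, ([], [])
        while sticks > 0:
            a = choose(sticks)
            history[player].append((sticks, a))
            sticks -= a
            if sticks == 0:
                winner = player                        # took the last stick
            player = 1 - player
        for p in (0, 1):                               # credit both sides
            reward = 1.0 if p == winner else -1.0
            for s, a in history[p]:
                Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

    # With no human knowledge, the table rediscovers the classic strategy:
    # leave a multiple of 4 sticks, so from 21 the best move is to take 1.
    print(max(moves(21), key=lambda a: Q[(21, a)]))

AlphaGo Zero applies the same tabula-rasa principle at vastly larger scale, replacing the table with a deep neural network and guiding self-play with Monte Carlo tree search.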
So far, there is little threat of these networks becoming super-intelligent machines. The AlphaGo experience happened in a game with well-defined rules about allowable moves and a well-defined metric for determining the winner. The game was
played in a well-defined mathematical space. Presumably this training
method could be extended to swarms
of robots attacking and defending.
But could it master a sport like basketball? Playing a violin? And what
about games the purpose of which is
to continue rather than to win?
Q: Where do you see this going next?
The 10–15 year roadmap is pretty
clear. There is now much theory behind neural networks. Even much of
the software is becoming off-the-shelf
through open source frameworks such as Google’s TensorFlow and vendor toolkits such as Nvidia’s CUDA (see the sketch following this paragraph).
The next breakthrough is likely to be
in hardware. Cheaper and faster hardware will pave the way for consumer-level products that fit in a smartphone
or drive a car.
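As a measure of how off-the-shelf the software has become, the following sketch, a lightly commented variant of the standard TensorFlow beginner tutorial, trains a complete handwritten-digit classifier in about a dozen lines (exact accuracy will vary from run to run):

    import tensorflow as tf

    # Off-the-shelf everything: dataset, layers, optimizer, training loop.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0    # scale pixels to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 inputs
        tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
        tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3)       # train
    model.evaluate(x_test, y_test)              # typically 97%-98% accurate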
Hardware is already trending toward chip sets with massively parallel neural networks built in. The
Von Neumann architecture, long
criticized for its processor-memory
bottleneck, is giving way to processing-in-memory machines where
simulated neural network nodes
are embedded in memory. Imagine random access memory in small blocks, each encapsulated in a node of a neural network (a toy simulation of this idea follows below). Such networks
will perform big-data analytics,
recognition, and recommending
without needing the full power of
a general-purpose arithmetic logic
unit. Who knows what will emerge
in the worldwide network of interconnected devices bearing powerful neural network chips?
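Here is that toy simulation (purely conceptual, our own illustration; real processing-in-memory designs work at the circuit level). Each simulated memory block holds both a slice of the data and the weights of one embedded node, so only small activation values, never the raw data, cross the processor-memory boundary:

    import numpy as np

    class MemoryBlockNode:
        # One simulated memory block with a neural node embedded in it.
        def __init__(self, resident_data):
            self.data = resident_data                     # never leaves the block
            self.weights = np.random.randn(resident_data.shape[1])

        def fire(self):
            # Weighted sum plus nonlinearity, performed "in memory."
            return np.tanh(self.data @ self.weights)

    data = np.random.randn(1024, 64)                          # a large data set...
    blocks = [MemoryBlockNode(c) for c in np.split(data, 8)]  # ...spread over 8 blocks
    activations = np.concatenate([b.fire() for b in blocks])  # only outputs move
    print(activations.shape)  # (1024,): 64x less bus traffic than shipping raw rows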
Ted G. Lewis (firstname.lastname@example.org) is an author and consultant with more than 30 books on computing and high-tech business, a retired professor of computer science (most recently at the Naval Postgraduate School, Monterey, CA), a former Fortune 500 executive, and co-founder of the Center for Homeland Defense and Security at the Naval Postgraduate School.
Peter J. Denning (email@example.com) is Distinguished Professor of Computer Science and Director of the Cebrowski Institute for Information Innovation at the Naval Postgraduate School in Monterey, CA, Editor of ACM Ubiquity, and a past president of ACM. The authors' views expressed here are not necessarily those of their employer or the U.S. federal government.
Copyright held by authors.