Do not be misled by demonstrations: they often deceive because the demonstrator avoids any situation in which the “AI” fails. Computers can do many things better than people. Humans evolved through a sequence of slight improvements that need not lead to an optimal design. “Natural” methods evolved to use our limited sensors and actuators. Modern computer systems use powerful sensors and remote actuators, and can apply mathematical methods that are not practical for humans. It seems very unlikely that human methods are the best methods.
When Alan Turing rejected “Can machines think?” as unscientific, and described a different question to illustrate what he meant by “scientific,” he was right but misled us. Researchers working on his “replacement question” are wasting their time and, very often, public resources. We don’t need machines that simulate people. We need machines that do things that people can’t do, won’t do, or don’t do well.
Instead of asking “Can a computer win Turing’s imitation game?” we should be studying more specific questions, such as “Can a computer system safely control the speed of a car when following another car?” There are many interesting, useful, and scientific questions about computer capabilities. “Can machines think?” and “Is this program intelligent?” are not.
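The car-following question is exactly the kind that admits precise analysis. As a sketch only — the constant-time-gap policy and every constant below are illustrative assumptions, not a certified design — a speed controller can be stated as an explicit formula whose safety properties can be proved rather than merely demonstrated:

```python
# Illustrative sketch of a car-following speed controller.
# The constant-time-gap policy and all constants are assumptions chosen
# for illustration; a real design would be derived and verified formally.

def desired_gap(speed_mps, time_gap_s=1.8, standstill_m=2.0):
    """Target distance to the lead car: a fixed time gap plus a margin."""
    return standstill_m + time_gap_s * speed_mps

def commanded_accel(gap_m, speed_mps, lead_speed_mps,
                    k_gap=0.2, k_speed=0.5, a_min=-6.0, a_max=2.0):
    """Proportional law on gap error and closing speed, with hard limits.
    Because the law is a simple explicit formula, claims such as
    'commanded acceleration never exceeds a_max' hold by construction."""
    gap_error = gap_m - desired_gap(speed_mps)
    closing = lead_speed_mps - speed_mps
    a = k_gap * gap_error + k_speed * closing
    return max(a_min, min(a_max, a))  # saturation makes the bound explicit

# The bound holds for any inputs, not just the ones we happened to test:
for gap, v, v_lead in [(5.0, 30.0, 10.0), (100.0, 20.0, 20.0), (2.0, 0.0, 0.0)]:
    assert -6.0 <= commanded_accel(gap, v, v_lead) <= 2.0
```

The point of the sketch is not the particular control law but its form: every line can be inspected, and the claim about bounded acceleration follows from the final `max`/`min`, with no appeal to training data or demonstrations.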
Verifiable algorithms are preferable to heuristics. Devices that use heuristics to create the illusion of intelligence present a risk we should not accept.
References
1. Turing, A.M. Computing machinery and intelligence. Mind 59 (1950), 433–460.
2. Weizenbaum, J. Automating psychotherapy. ACM Forum Letter to the Editor. Commun. ACM 17, 7 (July 1974), 425; doi: 10.1145/361011.361081.
3. Weizenbaum, J. ELIZA—A computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (Jan. 1966), 36–45; doi: 10.1145/365153.365168.
David Lorge Parnas works for Middle Road Software, Inc., in Ottawa, Canada. He is Professor Emeritus at McMaster University in Canada and the University of Limerick in Ireland.

Lillian Chik-Parnas, Nancy Leveson, Peter Denning, and Peter Neumann offered helpful suggestions about earlier drafts of this column.
Copyright held by author.
Heuristic programs may fail to find a solution. They may also err because of incomplete or biased experience. Learning can be viewed as a restricted form of statistical classification, a branch of mathematics that is well developed. Machine-learning algorithms are heuristic and may fail in situations unlike those reflected in their training data.
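The view of learning as statistical classification, and its failure mode under biased experience, can be made concrete. In the minimal sketch below, the two-feature “training data” is fabricated for illustration; the point is that a nearest-centroid classifier answers with equal confidence whether or not the input resembles anything it has ever seen:

```python
# Minimal sketch: learning as statistical classification (nearest centroid).
# The training data below is fabricated for illustration; the point is that
# the learned rule answers confidently even far outside its experience.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(labeled):
    """labeled: dict mapping label -> list of 2-D feature points."""
    return {label: centroid(pts) for label, pts in labeled.items()}

def classify(model, x):
    """Always returns some label -- there is no 'I don't know'."""
    dist = lambda c: (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2
    return min(model, key=lambda lbl: dist(model[lbl]))

# Biased experience: only small feature values were ever observed.
model = train({"cat": [(1.0, 1.0), (1.2, 0.8)],
               "dog": [(3.0, 3.0), (2.8, 3.2)]})

classify(model, (1.1, 0.9))      # plausible: close to the training data
classify(model, (100.0, 100.0))  # far outside all experience, yet the
                                 # classifier still emits a confident label
```

A verifiable algorithm would at least be able to report that the second input lies outside the region where any claim about accuracy was established; the heuristic rule cannot.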
When people view computers as thinking or sentient beings, ethical issues arise. Ethicists traditionally asked whether the use of some device would be ethical; now, many people discuss our ethical obligations to AIs and whether AIs will treat us ethically. Sometimes ethicists posit situations in which an AI must choose between two actions with unpleasant consequences, and ask what the device should do. Because people in the same situation would face the same issues, these dilemmas were discussed long before computers existed. Others discuss whether we are allowed to damage an AI. These questions distract us from the real question: “Is the machine trustworthy enough to be used?”
The AI research community exploits the way that words change meaning: the community’s use of the word “robot” is an example. “Robot” began as a Czech word in Karel Čapek’s play, R.U.R. (Rossum’s Universal Robots). Čapek’s robots were humanoids, almost indistinguishable from human beings, and acted like humans. If “robot” is used with this meaning, building robots is challenging. However, the word “robot” is now used in connection with vacuum cleaners, bomb-disposal devices, flying drones, and basic factory automation. Many claim to be building robots even though devices remotely like Karel Čapek’s are nowhere in sight. This wordplay adds an aura of wizardry and distracts us from examining the actual mechanism to see if it is trustworthy. Today’s “robots” are machines that can, and should, be evaluated as such. When discussing AI, it is important to demand precise definitions.
AI: Creating Illusions
Alan Perlis referred to AI researchers as “illusionists” because they try to create the illusion of intelligence. He argued they should be considered stage magicians rather than scientists. Dupchak and Weizenbaum demonstrated it is easy to create the illusion of intelligence.

We do not want computer systems that perform tricks; we need trustworthy tools. Trustworthy systems must be based on sound mathematics and science, not heuristics or illusionist’s tricks.
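How little machinery the illusion requires is easy to see. The toy responder below is in the spirit of Weizenbaum’s ELIZA but uses a few invented patterns, not his actual script; a handful of keyword rules that echo the user’s own words back are enough to suggest an attentive listener where none exists:

```python
import re

# A toy ELIZA-style responder. The patterns are invented for illustration
# (they are not Weizenbaum's actual script). Keyword matching and echoing
# the user's words back create the illusion of an attentive listener.

RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance):
    """Reflect the user's own words back; no understanding is involved."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return DEFAULT

respond("I am worried about my job")  # echoes the user's phrase back
respond("The weather is nice")        # no rule matches: stock reply
```

The program contains no model of the conversation at all, yet transcripts of such systems famously convinced some users they were understood — precisely the trick a trustworthy tool must not rely on.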
Whenever developers talk about AI, ask questions. Although “AI” has no generally accepted definition, it may mean something specific to them.

The term “AI” obscures the actual mechanism; while it often hides sloppy and untrustworthy methods, it might be concealing a sound mechanism. An AI might be using sound logic with accurate information, or it could be applying statistical inference to data of doubtful provenance. It might be a well-structured algorithm that can be shown to work correctly, or it could be a set of heuristics with unknown limitations. We cannot trust a device unless we know how it works.
AI methods are least risky when it is acceptable to get an incorrect result or no result at all. If you are prepared to accept “I don’t understand” or an irrelevant answer from a “personal assistant,” AI is harmless. If the response is important, be hesitant about using AI.

Some AI programs almost always work and are dangerous because we learn to depend on them. A failure may go undetected; even if failures are detected, users may not be prepared to proceed without the device.