Kode Vicious
The Chess Player Who Couldn’t Pass the Salt
AI: Soft and hard, weak and strong, narrow and general.

DOI: 10.1145/3055277
Dear KV,

Our company is looking at handing much of our analytics to a company that claims to use “Soft AI” to get answers to questions about the data we have collected via our online sales system. I have been asked by management to evaluate this solution, and throughout the evaluation all I can see is that this company has put a slick interface on top of a pretty standard set of analytical models. I think what they really mean to say is “Weak AI” and that they’re using the term Soft so they can trademark it. What is the real difference between soft (or weak) AI and AI in general?

Feeling Artificially Dumb
Dear AD,

The topic of AI hits the news about every 10 to 20 years, whenever a new level of computing performance becomes so broadly deployed as to enable some new type of application. In the 1980s it was all about expert systems. Now we see advances in remote control (such as military drones) and statistical number crunching (search engines, voice menus, and the like).
The idea of artificial intelligence is no longer new, and, in fact, the thought that we would like to meet and interact with non-humans has existed in fiction for hundreds of years. Ideas about AI that have come out of the 20th century have some well-known sources—including the writings of Alan Turing and Isaac Asimov. Turing’s scientific work generated the now-famous Turing test, by which a machine intelligence would be judged against a human one; and Asimov’s fiction gave us the Three Laws of Robotics, ethical rules that were to be coded into the lowest-level software of robotic brains. The effects of the latter on modern culture, both technological and popular, are easy to gauge, since newspapers still discuss advances in computing with respect to the three laws. The Turing test is, of course, known to anyone involved in computing, perhaps better known than the halting problem (https://en.wikipedia.org/wiki/Halting_problem), much to the chagrin of those of us who deal with people wanting to write “compiler-checking compilers.”
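The chagrin comes from the fact that a fully general “compiler-checking compiler” would have to decide termination for arbitrary programs, which the halting problem rules out. As a minimal sketch of the standard diagonalization argument (hypothetical Python, not from this column; the names halts and contrary are mine):

# Assume, for contradiction, a decider halts(program, data) that returns
# True exactly when program(data) terminates.
def halts(program, data):
    raise NotImplementedError("no such general decider exists")

# Build a program that does the opposite of whatever the decider predicts
# about a program run on its own source.
def contrary(program):
    if halts(program, program):
        while True:      # decider said "halts," so loop forever
            pass
    return "done"        # decider said "loops," so halt immediately

# Asking whether contrary(contrary) halts contradicts the decider either
# way it answers, which is why no general termination checker can be written.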
The problem inherent in almost all nonspecialist work in AI is that humans actually do not understand intelligence very well in the first place. Now, computer scientists often think they understand intelligence because they have so often been the “smart” kid, but that’s got very little to do with understanding what intelligence actually is. In the absence of a clear understanding of how the human brain generates and evaluates ideas, which may or may not be a good basis for the concept of intelligence, we have introduced numerous proxies for intelligence, the first of which is game-playing behavior.

One of the early challenges in AI—and for the moment I am talking about AI in the large, not soft or weak or any other marketing buzzword—was to get a computer to play chess. Now, why would a bunch of computer scientists want to get a computer to play