and regulatory levels.m The second
is that while function-based systems
have been an enabling and positive
development, we do need to be acutely aware of the reasons behind their
success to better understand the implications. A key finding here is that
some tasks in perception and cognition can be emulated to a reasonable
extent without having to understand
or formalize these tasks as originally
believed and sought, as in some text,
speech, and vision applications. That
is, we succeeded in these applications by having circumvented certain
technical challenges instead of having solved them directly.n This observation is not meant to discount
current success but to highlight its
nature and lay the grounds for this
question: How far can we go with this
direction? I revisit this issue later in
the article.
Human-Level or Animal-Level?
Let me now get to the thoughts that
triggered the title of this article in
the first place. I believe human-level
intelligence is not required for the
tasks currently conquered by neural
networks, as such tasks barely rise
to the level of abilities possessed by
many animals. Judea Pearl cited eagles and snakes as having vision systems that surpass what we can build
today. Cats have navigation abilities
that are far superior to any of those
in existing autonomous-navigation
systems, including self-driving cars.
Dogs can recognize and react to hu-
m Eric Horvitz of Microsoft Research brought
up the idea of subjecting certain AI systems to
trials as is done to approve drugs. The proper
labeling of certain AI systems should also be
considered, again as is done with drugs. For
example, it has been suggested that the term
“self-driving car” is perhaps responsible for
the misuse of this AI-based technology by
some drivers who expect more from the technology than is currently warranted.
n For example, one can now use learned func-
tions to recognize cats in images without
having to describe or model what a cat is, as
originally thought and sought, by simply fitting
a function based on labeled data of the form:
(image, cat), (image, not cat). While this approach works better than modeling a cat (for
now), it does not entail success in “learning”
what a cat is, to the point where one can recognize, say, deformed images of cats or infer aspects of cats that are not relayed in the
training dataset.
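The function-fitting idea in this footnote can be made concrete with a minimal sketch: a classifier is fit to labeled examples of the form (features, cat) / (features, not cat) without any model of what a cat is. The features, data values, and helper names below are purely hypothetical illustrations, not anything from a real vision system.

```python
# Minimal sketch of "learning a function" from labeled data, assuming
# each "image" has already been reduced to two hand-made toy features.
import math

def train_logistic(data, labels, lr=0.5, epochs=500):
    """Fit weights and bias by gradient descent on the logistic loss."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of "cat"
            err = p - y                      # gradient of the loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return True if the fitted function labels x as "cat"."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

# Hypothetical labeled data: (features, 1 = cat) and (features, 0 = not cat).
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, [0.85, 0.9]))
```

The fitted function separates the two clusters of toy examples, yet it encodes nothing about cats as such, which is exactly the footnote's point: it may fail on deformed inputs or anything outside the distribution of its training labels.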
the camera of a self-driving car (the vulnerability of these systems to mistakes
remains controversial in both its scope
and how to deal with it at the policy and
regulatory levels).
The significance of these observations stems from their bearing on our
ability to forecast the future and on decisions about which research to invest in.
In particular, does the success in addressing these selected tasks, which
are driven by circumscribed commercial applications, justify the worry
about doomsday scenarios? Does it
justify claims that AI-based systems
can now comprehend language or
speech or do vision at the levels that
humans do? Does it justify this current imbalance of attitudes toward
various machine learning and AI approaches? If you work for a company
that has an interest in such an application, then the answer is perhaps,
and justifiably, yes. But, if you are concerned with scientific inquiry and understanding intelligence more broadly, then the answer is hopefully no.
In summary, what has just happened in AI is nothing close to a breakthrough that justifies worrying about
doomsday scenarios. What just happened is the successful employment
of AI technology in some widespread
applications, aided greatly by developments in related fields, and by new
modes of operation that can tolerate
lack of robustness or intelligence.
Put another way—and in response to
headlines I see today, like “AI Has Arrived” and “I Didn’t See AI Coming”—
AI has not yet arrived according to the
early objective of capturing intelligent behavior. What really has arrived
are numerous applications that can
benefit from improved AI techniques
that still fall short of AI ambitions but
are good enough to be capitalized on
commercially. This by itself is positive, until we confuse it with something else.
Let me close this section by
stressing two points: The first is
to reemphasize an earlier observation that while current AI technology is still quite limited, the impact
it may have on automation, and
hence society, may be substantial
(such as in jobs and safety). This
in turn calls for profound treatments at the technological, policy,