Milestones | DOI:10.1145/2184319.2184327
Neil Savage
Game Changer
Judea Pearl’s passionate advocacy of the importance of probability
and causality helped revolutionize artificial intelligence.
WHEN JUDEA PEARL spoke about reasoning with probability at the first conference on Uncertainty in Artificial Intelligence in Los Angeles in 1985,
the attendees who shared his outlook were seen as being outside the
mainstream of artificial intelligence,
recalls Eric Horvitz, distinguished researcher at Microsoft Research. But
Pearl, winner of the 2011 ACM A.M.
Turing Award, turned out to be one of
the leaders of a revolution that “
literally has changed the very nature of the
discipline of artificial intelligence and
computer science more generally,”
says Horvitz. (An interview with Pearl,
“A Sure Thing,” appears on p. 136.)
In the early 1980s, expert systems
were based on sets of rules, but such
reasoning systems were brittle, says
Pearl, and when they provided an incorrect answer, it was not easy to figure out
which of the rules was the culprit. Pearl
felt strongly that humans reasoned
using a probabilistic engine in their
minds, but probability had fallen out of favor because representing a full joint distribution over all the variables involved was exponentially complex. Yet probability was not too
complex for humans. “If we do it simply,” says Pearl, “computers ought to be
able to do it simply, which means there
ought to be a practical approximation.”
Taking a cue from psychological
theories of how children learn to read,
Pearl developed Bayesian networks,
graphical models of how beliefs can
propagate in response to new observations. The algorithms he developed combined graph theory and
probability theory to make the system simple enough that computers
could reason probabilistically. “He
basically used graph theory to characterize how you could represent conditional independence,” says Joseph
Halpern, a professor of computer
science at Cornell University.
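
To make the idea concrete, the following sketch (Python, with illustrative probabilities not drawn from the article) builds a tiny two-parent network and updates the belief in "rain" after observing "wet grass" by enumerating the graph-structured factorization; it is a toy illustration of how conditional independence keeps the representation small, not Pearl's belief-propagation algorithm.

from itertools import product

# Conditional probability tables (illustrative numbers, not from the article).
P_rain = {1: 0.2, 0: 0.8}
P_sprinkler = {1: 0.1, 0: 0.9}

def P_wet(w, r, s):
    # P(WetGrass = 1 | Rain = r, Sprinkler = s)
    p_on = {(1, 1): 0.99, (1, 0): 0.9, (0, 1): 0.8, (0, 0): 0.0}[(r, s)]
    return p_on if w == 1 else 1.0 - p_on

def joint(r, s, w):
    # The graph says the joint factors as P(R) * P(S) * P(W | R, S):
    # Rain and Sprinkler are marginally independent, and all interaction
    # is captured in WetGrass's local table.
    return P_rain[r] * P_sprinkler[s] * P_wet(w, r, s)

# Belief update: observing wet grass raises the probability of rain.
num = sum(joint(1, s, 1) for s in (0, 1))
den = sum(joint(r, s, 1) for r, s in product((0, 1), repeat=2))
print(f"P(Rain | WetGrass) = {num / den:.2f}")  # ~0.74 with these numbers

The point of the graph is that the full joint distribution never has to be stored explicitly; it factors into the small conditional tables attached to each node, which is what makes probabilistic reasoning tractable for a computer.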
However, Pearl soon came to believe that humans do not have a “probability engine” in their minds, but instead reason based on an understanding of
cause and effect. That leads to the ability to ask the question “What if?” If you
do A, the result should be B, and you
can consider what the likelihood of
that outcome is. “We have a causal engine in our minds, and we decorate it
with probability,” he says. “This is how
scientists store scientific knowledge.
What leads to what?”
Causal reasoning also provides the ability to reason about counterfactuals: if there had not been A, what would have happened? That, in turn, gives the ability to ask why. If you take an
aspirin and you get a stomachache,
you can decide that you should not
have taken it. “You go back and say, ‘I
would have been better off had I not
done what I did,’” he explains.
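
The aspirin example can be phrased as a small structural causal model. The sketch below (Python; the "sensitivity" variable and all numbers are hypothetical) contrasts the interventional question "what if I take the aspirin?" with the counterfactual "would I have had a stomachache had I not taken it?", using the standard abduction-action-prediction recipe for counterfactuals.

import random

def stomachache(u_sensitive, aspirin):
    # Structural equation (hypothetical): an ache occurs only if this person
    # is aspirin-sensitive (an unobserved background factor) AND took the pill.
    return int(u_sensitive and aspirin)

random.seed(0)
population = [random.random() < 0.3 for _ in range(100_000)]  # P(sensitive) = 0.3

# "What if?" -- interventions do(aspirin = 1) vs. do(aspirin = 0).
p_do_take = sum(stomachache(u, 1) for u in population) / len(population)
p_do_skip = sum(stomachache(u, 0) for u in population) / len(population)
print(p_do_take, p_do_skip)  # ~0.3 vs. 0.0

# "Why?" -- a counterfactual for one person who took the aspirin and got an
# ache. Abduction: the observation implies u_sensitive was True for them.
# Action + prediction: rerun that same person's equation with aspirin = 0.
u_mine = True
print(stomachache(u_mine, 0))  # 0: no ache had I not taken it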
Philosophers have pondered causality for hundreds of years, Halpern
says, and everyone from epidemiologists to lawyers worries about it.
But when Pearl developed ways of
representing it mathematically, he
changed the discussion. “I think it’s
fair to say he reinvigorated the field of
causality,” says Halpern, who collaborated with him on a study of cause and
explanations. “It was sort of a new way
of looking at things.”
Pearl’s work has implications well
beyond computer science, in areas
such as economics, epidemiology,
and disease diagnosis. It is also contributing to areas of computing such
as machine learning and natural language processing.
If a machine were to reason causally, making decisions about desired
effects, it might have the illusion of
free will, Pearl argues, suggesting that
our own sense of choice may also be a
useful illusion. “Imagine we have robots that communicate as if they had
free will, blaming each other, praising
each other, saying that you could have
done better,” he says. “We’re going to
learn quite a lot about ourselves if we
manage to get a robot community to
communicate in this manner.” And it
could prove very useful to give robots
this sensation of agency. “I feel if evolution equipped us with this illusion,
it has some merit,” says Pearl.
He urges computer scientists to be
undaunted when they are told that a
machine cannot emulate a human
ability. “Don’t take no for an answer.
When you see a phenomenon exhibited by humans, it must be that computers are able to simulate it,” he says.
“We may not reach human level in
computers, it may be just an aspiration, but aspiration leads to positive
outcomes.”
Pearl has not yet decided what to do
with the Turing Award’s $250,000 prize,
but says he would like to devote part of
the money to overcoming skepticism
about his theories in some conservative
scientific circles. Or he might provide a
prize for students who challenge conventional wisdom, he says, “some sort
of incentive to get young people to circumvent their professors.”
Neil Savage is a science and technology writer based in
Lowell, MA.
© 2012 ACM 0001-0782/12/06 $10.00