[Continued from p. 136]
quite powerfully, everything we associate with thought processes.
So you turned your attention to artificial intelligence, and to using probability for the representation and acquisition of knowledge. This is what gets represented, in the model of belief propagation you then developed, through Bayesian networks.
If you have a network of loosely connected components, you can reason
probabilistically without encountering
exponential complexity. You can represent it parsimoniously and you can update it swiftly, and, moreover, you can
update it in a distributed fashion. And
that’s very important, because we don’t
have a supervisor in our brain telling
each neuron when to fire.
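To make the distributed-update idea concrete, here is a minimal sketch in Python of evidence propagating up a three-node chain A -> B -> C; the probability tables are invented for illustration and are not taken from Pearl's work. Each node needs only its own table and the message from its neighbor, so no supervisor is required:

    import numpy as np

    # Invented priors and conditional probability tables for a chain A -> B -> C.
    p_a = np.array([0.6, 0.4])            # P(A)
    p_b_given_a = np.array([[0.9, 0.1],   # P(B | A=0)
                            [0.2, 0.8]])  # P(B | A=1)
    p_c_given_b = np.array([[0.7, 0.3],   # P(C | B=0)
                            [0.1, 0.9]])  # P(C | B=1)

    # Evidence: C is observed to be 1.
    lambda_c = np.array([0.0, 1.0])

    # Each node computes a message for its parent from purely local
    # information -- its own table and the message from its child.
    lambda_b = p_c_given_b @ lambda_c     # message C -> B: P(C=1 | B)
    lambda_a = p_b_given_a @ lambda_b     # message B -> A: P(C=1 | A)

    # A fuses its prior with the evidence flowing up.
    posterior_a = p_a * lambda_a
    print(posterior_a / posterior_a.sum())  # P(A | C=1)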
Your work on the topic was influenced by the writings of psychologist David Rumelhart, who proposed that, while we read, our brain's neural modules each perform simple, repetitive tasks, and that these modules use both top-down and bottom-up modes of inference to collaborate with one another.
If you pose these features as an architecture for doing things probabilistically, you ask yourself, When can
we do it distributedly and still get the
probabilistically correct answer to every question? And that led to a tree architecture and a proof that it converges
eventually to the answers that orthodox
probability theory dictates. And then
came polytrees and the ultimate question of how we can do it when we have
a general loopy network. Here I conjectured that the mind simply ignores the
loops and allows every processor to act
as if it were embedded in a polytree—
and this worked miraculously well.
At this point, the practitioners took
over, and they were able to do it much
better than I. Even the theoreticians
did better than I—they proved convergence under various conditions, and
essentially I left this area to more talented and motivated researchers.
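For readers curious about the mechanics of that conjecture, the sketch below (in Python, with invented potentials) runs the ordinary sum-product update on a graph containing a single loop, each node acting as if it were in a tree. Convergence is not guaranteed in general, which is what the later theoretical results characterized:

    import numpy as np

    # One cycle: binary nodes 0-1-2-0, a shared pairwise potential, and
    # local evidence at node 0. All numbers are illustrative.
    psi = np.array([[2.0, 1.0],
                    [1.0, 2.0]])           # favors neighboring agreement
    phi = {0: np.array([0.9, 0.1]),        # evidence biasing node 0
           1: np.array([1.0, 1.0]),
           2: np.array([1.0, 1.0])}
    neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    messages = {(i, j): np.ones(2) for i in neighbors for j in neighbors[i]}

    for _ in range(50):                    # iterate as if the loop weren't there
        new = {}
        for (i, j) in messages:
            incoming = phi[i].copy()
            for k in neighbors[i]:
                if k != j:
                    incoming *= messages[(k, i)]
            m = psi.T @ incoming           # sum-product update for edge i -> j
            new[(i, j)] = m / m.sum()
        messages = new

    belief = phi[2] * messages[(0, 2)] * messages[(1, 2)]
    print(belief / belief.sum())           # approximate marginal at node 2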
You left probability to work on causation.
Yes, primarily because it became
clear that people encode world knowledge through causal, not probabilistic, relationships, and all those fancy
notions of relevance and irrelevance
come from causal, not probabilistic,
considerations.
Among your best-known accomplishments is the creation of a calculus of intervention that enables us to compute the consequences of various actions.
The idea was to treat actions and observations as distinct symbols situated
within the same formal sentence. This
allows you to infer the consequences
of actions from a combination of data
and qualitative knowledge encoded in
the form of a causal diagram.
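For example, when the diagram shows that a set Z of observed variables blocks all back-door paths from X to Y, the calculus licenses the standard adjustment formula, which rewrites an interventional query in purely observational terms:

    P(y \mid do(x)) = \sum_{z} P(y \mid x, z)\, P(z)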
In other words, it is where correlation
and causation meet.
Yes, they meet in the calculus. I
called it do-calculus because it allows
you to reduce questions about the effect of interventions to symbolic manipulations. You want to predict what
will happen if you do something based
on what you observe. So you express
this question in symbolic algebra and
you can ask the question “What if I do
x?” or “What if I see y?” as well as any
other combination of doing and seeing.
Then you submit the query to the inference engine and let it grind through until it gets you the right results.
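The difference between doing and seeing can be computed by brute force on a toy model. In the sketch below (pure Python, with invented numbers), a confounder Z drives both X and Y, so conditioning on X = 1 and intervening with do(X = 1) give different answers:

    # Toy structural model: Z -> X, Z -> Y, X -> Y (all binary, numbers invented).
    p_z = {0: 0.5, 1: 0.5}                      # P(Z=z)
    p_x1_given_z = {0: 0.8, 1: 0.2}             # P(X=1 | Z=z)
    p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.5,
                     (1, 0): 0.4, (1, 1): 0.9}  # P(Y=1 | X=x, Z=z)

    # Seeing: P(Y=1 | X=1), computed from the observational joint.
    num = sum(p_z[z] * p_x1_given_z[z] * p_y1_given_xz[(1, z)] for z in (0, 1))
    den = sum(p_z[z] * p_x1_given_z[z] for z in (0, 1))
    print("P(Y=1 | X=1)     =", num / den)      # 0.5

    # Doing: P(Y=1 | do(X=1)) replaces X's mechanism with the constant 1;
    # Z keeps its own distribution, i.e., the backdoor adjustment over Z.
    print("P(Y=1 | do(X=1)) =",
          sum(p_z[z] * p_y1_given_xz[(1, z)] for z in (0, 1)))  # 0.65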
Simulating intervention, by the way, was an idea economists had already thought of in 1943. Trygve Haavelmo had this idea that economics models are a guide to policy-making, and that you can predict what will happen when the government intervenes and raises taxes or imposes duties by modifying the equations in the model. And that was taken up by other economists, but it didn't catch on, because they had very lousy models of the
economy, so they couldn’t demonstrate
success. And because they couldn’t
demonstrate success, the whole field
of economics regressed and became a
hotbed for statistical predictions. Economists have betrayed causality. I never
expressed it this way before, but in all
honesty this is what it boils down to. In
computer science, we remain faithful
to logic and try to improve our models,
while economists compromised on logic to cover up for bad models.
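Haavelmo's proposal can be written out on a two-equation linear model (the coefficients and disturbances here are hypothetical). Intervening on x means deleting the equation that normally determines x and substituting a constant, while the rest of the model stays intact:

    x = \alpha z + u_x, \qquad y = \beta x + \gamma z + u_y

    \text{after } do(x = x_0): \qquad x = x_0, \qquad y = \beta x_0 + \gamma z + u_y

so, taking the disturbances to have mean zero, E[Y \mid do(x_0)] = \beta x_0 + \gamma\, E[Z].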
Your work on causality culminated in counterfactuals.
There are three levels of causal relationships. The first level, which is the level of associations, not causation, deals with the question “What is?” The second level is “What if?” And the third level is “Why?” That’s the counterfactual level. Initially, I thought of counterfactuals as something for philosophers to deal with. Now I see them as
just the opposite. They are the building
blocks of scientific understanding.
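In the notation Pearl uses for this hierarchy, the three queries read, in increasing order of strength:

    \text{association:} \quad P(y \mid x)
    \text{intervention:} \quad P(y \mid do(x))
    \text{counterfactual:} \quad P(y_x \mid x', y')

where the counterfactual P(y_x \mid x', y') asks for the probability that Y would have been y had X been x, given that X = x' and Y = y' were actually observed.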
Does your research inform your work at the Daniel Pearl Foundation, especially in conducting interfaith dialogues?
I have an advantage over my dialogue partners in that I’m an atheist,
and I understand religious myths are
just metaphors, or poetry, for genuine
ideas we find difficult to express otherwise. So, yes, you could say I use computer science in my religious dialogues,
because I view religion as a communication language. True, it seems futile
for people to argue over whether a person goes to
heaven from the East Gate or the West
Gate. But, as a computer scientist, you
forgive the futility of such debates, because you appreciate the computational role of the gate metaphor.
Leah Hoffmann is a technology writer based in Brooklyn, NY.