at Berkeley was excited about getting
me. There I taught two things: organization theory à la March and Simon,
and the new discipline called Artificial Intelligence.
There were no books on the subject
of AI, but there were some excellent
papers that Julian Feldman and I photocopied. We decided that we needed
to do an edited collection, so we took
the papers we had collected, plus a few
more that we asked people to write,
and put together an anthology called
Computers and Thought that was published in 1963.
The two sections mirrored two
groups of researchers. There were
people who were behaving like psychologists and thinking of their work
as computer models of cognitive processes, using simulation as a technique. And there were other people
who were interested in the problem of
making smart machines, whether or
not the processes were like what people were doing.
How did choosing one of those
lead you to Stanford?
The choice was: do I want to be a psychologist for the rest of my life, or do I
want to be a computer scientist? I looked
inside myself, and I knew that I was a
techno-geek. I loved computers, I loved
gadgets, and I loved programming. The
dominant thread for me was not going
to be what humans do, it was going to be
what can I make computers do.
I had tenure at Berkeley, but the business school faculty couldn’t figure out
what to make of a guy who is publishing
papers in computer journals, artificial
intelligence, and psychology. That was
the push away from Berkeley. The pull
to Stanford was John McCarthy.
How did you decide on your research direction?
Looking back in time, for reasons that
are not totally clear to me, I really, really wanted smart machines. Or I should
put the “really” in another place: I really wanted really smart machines.
I wasn’t going to get there by walking down the EPAM road, which models verbal learning, or working on puzzle-solving deductive tasks. I wanted
to model the thinking processes of
scientists. I was interested in problems
of induction. Not problems of puzzle
solving or theorem proving, but inductive hypothesis formation and theory formation.
I had written some paragraphs at
the end of the introduction to
Computers and Thought about induction and
why I thought that was the way forward
into the future. That’s a good strategic plan, but it wasn’t a tactical plan. I
needed a “task environment”—a sandbox in which to specifically work out
ideas in detail.
I think it’s very important to emphasize, to this generation and every
generation of AI researchers, how important experimental AI is. AI is not
much of a theoretical discipline. It
needs to work in specific task environments. I’m much better at discovering
than inventing. If you’re in an experimental environment, you put yourself
in the situation where you can discover
things about AI, and you don’t have to invent them.
Talk about DENDRAL.
One of the people at Stanford interested in computer-based models of mind
was Joshua Lederberg, the 1958 Nobel
Prize winner in genetics. When I told
him I wanted an induction “sandbox,”
he said, “I have just the one for you.”
His lab was doing mass spectrometry
of amino acids. The question was:
how do you go from looking at a spectrum of an amino acid to the chemical
structure of the amino acid? That’s
how we started the DENDRAL Project:
I was good at heuristic search methods, and he had an algorithm which
was good at generating the chemical structures.
We did not have a grandiose vision.
We worked bottom up. Our chemist
was Carl Djerassi, inventor of the
chemical behind the birth control
pill, and also one of the world’s most
respected mass spectrometrists. Carl
and his postdocs were world-class
experts in mass spectrometry. We began
to add in their knowledge, inventing
knowledge engineering as we were
going along. These experiments
amounted to titrating into DENDRAL more and
more knowledge. The more you did
that, the smarter the program became.
We had very good results.
We needed to play in other playpens. I
believe that AI is mostly a qualitative science, not a quantitative science. You are
looking for places where heuristics and
inexact knowledge can come into play.
The term I coined for my lab was “Heuristic
Programming Project” because
heuristic programming is what we did.
For example, MYCIN was the Ph.D.
thesis project of Ted Shortliffe, which
turned out to be a very powerful knowledge-based system for diagnosing
blood infections and recommending
their antibiotic therapies. Lab mem-