In your lecture at the University of Minnesota you noted that you need to remind some of your new students that not all problems in NP are hard.
I gather that's how this mythology sprang up that NP problems are hard?
Some people even think NP stands for "Not P," but it stands for nondeterministic polynomial time. So we have these contrasting classes, P and NP. The simplistic assumption is that the P ones are the easy ones and the NP ones are the hard ones. And of course P is a subset of NP. It's the NP-complete ones, a subset of the NP problems, that are the hard ones.
Much of this is covered in the chapters of just about any introductory computer science textbook.
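The distinction Cook draws can be made concrete in code. A problem is in NP when a proposed solution (a certificate) can be checked in polynomial time, even though finding one may require exponential search. Here is a minimal sketch in Python, using Boolean satisfiability as the example; the formula and function names are illustrative, not from the interview:

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable, wanted_value).
# This hypothetical formula encodes (x0 or not x1) and (x1 or x2).
formula = [[(0, True), (1, False)], [(1, True), (2, True)]]

def check(formula, assignment):
    """Certificate check, polynomial time: is every clause satisfied?"""
    return all(
        any(assignment[var] == wanted for var, wanted in clause)
        for clause in formula
    )

def brute_force_sat(formula, num_vars):
    """Search: in the worst case tries all 2**num_vars assignments."""
    for bits in product([False, True], repeat=num_vars):
        if check(formula, list(bits)):
            return list(bits)
    return None

solution = brute_force_sat(formula, 3)
print(solution)                  # [False, False, True]
print(check(formula, solution))  # True: verification is the easy part
```

Checking a given assignment touches each clause once, while the search loop is exponential in the number of variables; that asymmetry is exactly the P-versus-NP question.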
The idea of time being the most important complexity measure seems rather straightforward to me now, because I've heard it and read it in several places, but apparently it wasn't.
I think time was an important measure. It was Alan Cobham who was trying to think of some intrinsic measure, like "work," but in fact his theorem was about the characterization of polynomial time, so time was the thing he talked about. Time seemed to be the most obvious measure of complexity. Certainly space (memory) was also considered right from the start.
You and Richard Karp were colleagues
We overlapped at Berkeley. He came
to Berkeley from IBM a year or two before I left, so I knew him.
So he returned home from the SIGACT symposium and started looking at these problems more carefully.
Yes, that’s right. I think he toured
the states talking to people about them,
and coming up with new problems.
Is it fair to say, then, that Karp is your
Yes. He did a tremendous thing—
there’s no question about it. I certainly
didn’t realize there were so many natural computational problems out there
that turned out to be NP-complete.
Between 1970 and 1980 you received several grants from the National Research Council to work on this problem and others. In 1975 you were promoted to full professor at the University of Toronto. And then followed a number of awards: the E.W.R. Steacie Memorial Fellowship, to support fundamental research essential to the development of science; the Izaak Walton Killam Memorial Research Fellowship from the Canada Council for the Arts.
And in that period too, in 1982, you were awarded the ACM A.M. Turing Award for, among other things, your contributions to complexity theory.
Yes. And of course the trigger was the theory of NP-completeness.
In 1985 you became a University Professor, and numerous teaching awards followed. You were awarded the CRM/Fields Prize in 1998. What is the Fields Institute? A mathematics institute?
It’s a mathematics institute, right.
It’s on our campus although I guess it’s
separate. It’s a bit like the Isaac Newton Center at Cambridge. They have
a building for mathematics research.
They sponsor programs, and they have emphasis programs in different areas.
The Mathematical Intelligencer
declared the P versus NP problem one of
the three greatest math problems of
the next century. Where does this perception come from?
It seems to be really relevant to the
real world—probably more than the
other problems on the list. It’s not clear
what impact on the world the other
very interesting problems have, though
they certainly could impact mathematics. If P equals NP, it could have a dramatic effect on the world. More likely P is not equal to NP. There the impact
would still be good. It would lead to the
possibility of proving cryptographic
protocols are secure, which is something we can’t hope to do at present.
One of your audience members said
that NP-completeness is sometimes
identified on the basis of—as he called
it—an “unrealistic” example. I gathered he was trying to argue that there
was a disconnect between the theory
and human experience on some level.
I think he was referring to the fact that some problems are NP-complete but still seem to be, in practice, solvable. So that's a question of what class of inputs you want to use. Every NP-complete problem is easy to solve for some inputs, and maybe in some cases the inputs you really are interested in are the easy ones. So, in that case, saying it's NP-complete is misleading.
Even for the original NP-complete problem of satisfiability, the fact that it's NP-complete hasn't stopped a big industry of programs for solving the satisfiability problem, in some cases with very dramatic and useful successes. As I mentioned in the talk, they've been able to verify large chunks of a microprocessor by proving the unsatisfiability of formulas with tens of thousands of variables. And so there are certainly some successes. Just because a problem is NP-complete does not mean that you should not try to solve it.
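The gap Cook describes between worst-case hardness and practical solvability shows up even in a toy solver. Below is a minimal DPLL-style backtracking sketch in Python, a hypothetical illustration and nothing like the industrial SAT solvers he mentions: unit propagation alone often collapses structured instances with little or no search.

```python
# Minimal DPLL-style SAT sketch (a hypothetical toy, not an industrial solver).
# A clause is a set of integer literals: 3 means x3 is true, -3 means x3 is false.

def simplify(clauses, lit):
    """Assert lit: drop clauses containing it, delete its negation elsewhere."""
    return [c - {-lit} for c in clauses if lit not in c]

def dpll(clauses, assignment=()):
    clauses = [set(c) for c in clauses]
    # Unit propagation: a one-literal clause forces that literal.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if len(clause) == 1:
                lit = next(iter(clause))
                clauses = simplify(clauses, lit)
                assignment = assignment + (lit,)
                changed = True
                break
    if any(len(c) == 0 for c in clauses):
        return None                    # empty clause: this branch is unsatisfiable
    if not clauses:
        return assignment              # every clause satisfied
    lit = min(clauses[0])              # branch deterministically on a literal
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice), assignment + (choice,))
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): one guess, then propagation
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))   # (1, 2, 3)
```

Real solvers add conflict-driven clause learning, watched literals, and restart heuristics on top of this skeleton, which is what makes instances with tens of thousands of variables routinely tractable in practice.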
I’m wondering if this isn’t the same
stumbling block, on a very general level, that other disciplines are struggling
with. In genomics and bioinformatics
they now talk about empirical laws that
haven’t found their theory yet. They
say, "Don't worry about proving these things; they're empirically derived."
In bioinformatics there are lots of computational problems, as you say, and I'm sure that many of them are NP-complete or NP-hard in their full generality, and that just means you have to