As Good as It Gets
Sanjeev Arora talks about proof, intractability,
and finding the best way to approximate problems.
Sanjeev Arora has long been drawn to
his field's most intractable challenges.
Born in Rajasthan, India, the Princeton
University professor is best known for
his work on the development of probabilistically checkable proofs and a
proof of the PCP theorem, which has
helped distinguish between problems
that can and cannot be approximated
efficiently. He has also contributed new
approximation algorithms for classic
problems like the Euclidean traveling
salesman problem and graph partitioning. More recently, he has cast new
light on the Unique Games Conjecture,
a problem that goes beyond the PCP
theorem to imply that the known approximation algorithms for many simple problems are the best possible.
Let's start with your work on the PCP theorem, which, if I understand correctly, states that for anything you can prove, you can write the proof in such a way that it can be verified by looking at only a few bits.

That's the basic idea. The concept was beginning to be developed just before I started grad school. I started working on it with some collaborators, and that became my dissertation.

You uncovered several key insights. Can you explain them?

One was the idea that we could improve the efficiency of verification by using a kind of circular reasoning. For instance, if I'm writing a proof that somebody else is going to verify by looking at just a few bits, then the statement that the verifier will accept the proof is itself something that can be proved using the same technique. It's a smaller statement, and you can use the same ideas to give a proof for that. Of course, this is the usual computer science notion of "recursion," and it increases the efficiency.

The overall procedure is algebraic. You don't normally think of verifying proofs as an algebraic procedure, but we were able to express it algebraically. This is actually quite common in mathematics: you're trying to prove something about, say, knots or strings, and you encode it as an algebraic question, and from then on you can use tools from algebra.

You also introduced a new definition of NP-completeness.

It was an extension of NP-completeness that allowed people to prove that for certain NP-complete problems, it is NP-complete even to compute approximate solutions. In other words, computing an approximate solution is no easier than computing the optimum solution. The PCP theorem itself doesn't tell you the problems for which it's impossible to calculate approximate solutions. But it provided the starting point for other results that do, and many such results have been proved in the subsequent two decades. And after the full body of work, we have been able to chart the landscape of those problems. For many problems, we have even determined the exact threshold of approximability: We have algorithms that approximate the problem up to some factor, and we know that doing any better is NP-complete. Of course, such results also sometimes involved designing new approximation algorithms for various problems.

You helped develop some of those approximation algorithms yourself, including one for the Euclidean traveling salesman problem (TSP), a variation of the classic NP-complete problem in which a salesman is trying to find the shortest path by which he can visit all of his N clients.

As I thought about the PCP theorem, I also began wondering whether it was hard to compute approximate solutions to the Euclidean TSP, which is a classic problem. And I slowly started seeing structure in the problem that nobody else had until that point, which led to the algorithm.

The idea sounds simple: divide and conquer.

It's something that people had tried before and had given up on. Say you have a square that encloses all the points, and you imagine dividing the square into subsquares. And somehow you compute a reasonable tour within each subsquare, and then you put them all together. I put a twist on that idea. In the classic divide-and-conquer scheme, you solve each subsquare independently. My technique was to allow a little bit of dependence, [continued on p. 127]
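The plain divide-and-conquer scheme described above can be sketched in a few lines. To be clear, this is only the naive baseline the interview contrasts Arora's method with: each subsquare is toured independently and the pieces are concatenated, with no shared structure between cells. Arora's actual algorithm instead uses a randomly shifted quadtree and dynamic programming over a small set of crossing points on cell boundaries. The grid size `k` and the nearest-neighbor heuristic below are arbitrary choices made for this sketch.

```python
import math
import random

def nearest_neighbor_tour(points):
    """Greedy tour within one subsquare (any reasonable heuristic would do)."""
    remaining = list(points)
    tour = [remaining.pop(0)]
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

def divide_and_conquer_tour(points, k=4):
    """Naive scheme: partition the unit square into a k-by-k grid,
    tour each cell independently, then stitch the cell tours together
    in snake order so consecutive cells are geometrically adjacent."""
    cells = {}
    for p in points:
        i = min(int(p[0] * k), k - 1)  # clamp points on the boundary
        j = min(int(p[1] * k), k - 1)
        cells.setdefault((i, j), []).append(p)
    tour = []
    for i in range(k):
        # alternate column direction per row to keep cell-to-cell hops short
        cols = range(k) if i % 2 == 0 else range(k - 1, -1, -1)
        for j in cols:
            if (i, j) in cells:
                tour.extend(nearest_neighbor_tour(cells[(i, j)]))
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(60)]
tour = divide_and_conquer_tour(pts)
```

The weakness of this version, as the interview notes, is exactly the independence: the optimal tour may need to cross a cell boundary in ways no per-cell solution anticipates, which is what the "little bit of dependence" in Arora's scheme repairs.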