cheat, and Ben-Or et al. leveraged this to create protocols in which the provers reveal little knowledge about the proof while proving their assertion to the verifier. (Some of the information leaked in single-prover protocols arose because the prover needed to prove its own honesty; this leakage could now be avoided.)
Fortnow et al.16 studied the power of multiprover interactions as the number of provers increased from two to three and beyond. They showed that using more than two provers does not enhance the complexity of the assertions that can be proved to a polynomial-time verifier.
Key to this discovery was the notion of
an “Oracle-Interactive Proof.” This is yet another variation on the theme of interactive proofs, one in which the prover is “semi-honest.” Specifically, the prover behaves as an oracle: it prepares a table
of answers to every possible question
that may be asked by the verifier and
then honestly answers any sequence
of questions from the verifier according to the prepared set of answers. (In
particular, even if the history of questions suggests that a different answer
to the latest question may increase the
probability that the verifier accepts, the
oracle does not change its mind at this
stage.) Fortnow et al. noted that oracle-interactive proofs can simulate any (polynomial) number of provers, and are in turn simulated by 2-prover proof systems with just one round of interaction (that is, the verifier asks each of the two provers a single question, without waiting for any responses, and then both provers respond).
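To make the oracle's behavior concrete, here is a minimal Python sketch (the class and its names are hypothetical, purely for illustration): the prover commits to a fixed answer table up front and answers every question from that table, ignoring the history of the interaction.

```python
import random

class OracleProver:
    """A 'semi-honest' oracle prover: it commits in advance to one answer
    per possible question and never deviates from that commitment."""

    def __init__(self, answer_table):
        # answer_table maps every possible question to a single answer.
        self.answer_table = dict(answer_table)

    def answer(self, question, history=None):
        # The history of earlier questions is deliberately ignored:
        # the oracle never "changes its mind" mid-protocol.
        return self.answer_table[question]

# Toy usage: the verifier asks a random sequence of questions, and each
# answer depends only on the question itself, never on what came before.
table = {q: q % 3 for q in range(10)}  # a fixed, precommitted table
prover = OracleProver(table)
questions = [random.randrange(10) for _ in range(4)]
print([prover.answer(q) for q in questions])
```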
Subsequent to Shamir’s result on the power of IP (single-prover interactive proofs), Babai et al.6 gave an analogous characterization of the power of MIP.
They showed that the set of assertions
that can be proved in polynomial time
to a probabilistic verifier talking to two
provers is the same as the set of assertions that can be verified in exponential time by a deterministic (classical) verifier. Thus the interaction with multiple provers reduced verification time from exponential to polynomial!
Holographic Proofs, PCPs: In subsequent work, Babai et al.5 noticed that the notion of an oracle-interactive proof was
not very different from the classical notion of a proof. If one considers the table
implied by the oracle prover and writes
it down explicitly, one would get a very
long string (potentially exponentially
long in the running time of the verifier),
which in effect was attempting to prove
the assertion. In the result of Babai et al.,6 this oracle-based proof is not much
longer than the classical proof (both
have length exponential in the length of
the assertion), but the oracle proof was
much easier to check (could be checked
in polynomial time). This led Babai et
al. to name such proofs holographic
(small pieces of the proof reveal its correctness/flaws). Babai et al. focused on
the computation time of the verifier and
showed (in some careful model of verification) that every proof could be converted into a holographic one of slightly
superlinear size, where the holographic
one could be verified by the verifier in
time that was some polynomial in the
logarithm of the length of the proof.
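As a toy illustration of this style of checking (an assumed example, not the actual construction of Babai et al.), the sketch below treats a long proof string as an oracle and probes only a handful of random positions. It tests the simple property that the string is a codeword of the repetition code; a string far from every codeword is rejected with high probability even though almost none of it is read.

```python
import random

def spot_check(proof, num_probes=8):
    """Toy local test: accept only if the (long) proof string looks like
    a repetition codeword, i.e., all symbols are equal.  Reads O(1)
    random positions instead of the whole proof."""
    n = len(proof)
    for _ in range(num_probes):
        i, j = random.randrange(n), random.randrange(n)
        if proof[i] != proof[j]:  # a local inconsistency: reject
            return False
    return True

# A valid "proof" passes; a heavily corrupted one is almost always
# caught, despite our reading only ~16 of its million symbols.
good = "1" * 10**6
bad = "".join(random.choice("01") for _ in range(10**6))
print(spot_check(good), spot_check(bad))
```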
Around the same time, with a very
different motivation that we will discuss
in the next section, Feige et al.15 implicitly proposed the concept of a PCP with
the emphasis now being on the query
complexity of the verifier (as opposed
to the computation time in holographic
proofs). The notion of PCP was finally
explicitly defined by Arora and Safra.4
We stress that the theory of PCPs inherits much more than just the definition of PCPs from the theory of interactive proofs. The results, techniques, and even the way of thinking developed
in the context of interactive proofs
played a major role in the development
of the PCP theorem. In particular, the notion of 2-prover 1-round interactive proofs and their equivalence to oracle-interactive proofs, and hence to PCPs, plays a significant role in this theorem, and we will use this notion to explain the proofs at a high level.
Implications to Combinatorial Optimization
The notion of theorems and proofs has
shed immense light on the complexity
of combinatorial optimization. Consider a prototypical problem, namely graph
coloring, that is, the task of coloring the
vertices of an undirected graph with the
minimum number of possible colors so
that the endpoints of every edge have
distinct colors. The seminal works of
Cook, Levin, and Karp11,25,27 show that
this task is as hard as finding a proof of
some given theorem. In other words,
given an assertion A and an estimate N on the length of a proof, one can construct a graph G on O(N²) vertices with a bound
K, such that G has a coloring with K or
fewer colors if and only if A has a proof
of length at most N. Furthermore, given
a K-coloring of G, one can reconstruct
a proof of A in time polynomial in N.
Thus unless we believe that proofs of
theorems can be found in time polynomial in the length of the shortest proof
(something that most mathematicians
would find very surprising), we should
also believe that graph coloring cannot
be solved in polynomial time.
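The asymmetry this reduction exploits is easy to see in code: verifying a proposed K-coloring takes time linear in the size of the graph, whereas finding one is as hard as finding a proof. A minimal checker (illustrative names only) might look as follows.

```python
def is_valid_coloring(edges, coloring, k):
    """Verify a proposed k-coloring: every vertex gets one of k colors
    and no edge has both endpoints the same color.  Checking is fast;
    *finding* such a coloring is the hard direction."""
    if any(c not in range(k) for c in coloring.values()):
        return False
    return all(coloring[u] != coloring[v] for u, v in edges)

# A triangle needs three colors; the checks below run instantly.
triangle = [(0, 1), (1, 2), (0, 2)]
print(is_valid_coloring(triangle, {0: 0, 1: 1, 2: 2}, 3))  # True
print(is_valid_coloring(triangle, {0: 0, 1: 1, 2: 1}, 3))  # False
```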
Of course, graph coloring is just one
example of a combinatorial optimization problem that was shown by the
theory of NP-completeness to be as hard
as the task of theorem-proving. Finding
large independent sets in graphs, finding short tours for travelling salesmen, and packing objects into a knapsack are
all examples of problems for which
the same evidence of hardness applies
(see Garey17 for many more examples).
The NP-completeness theory unified
all these problems into the same one,
equivalent to theorem-proving.
Unfortunately, a somewhat more
careful look into the different problems
revealed many differences among them. These differences became apparent when
one looked at their “approximability.”
Specifically, we say that an algorithm A solves a (cost) minimization problem Π to within some approximation factor α(n) if on every input x of length n, A(x) outputs a solution whose cost is no more than an α(n) factor larger than the minimum cost solution. For (profit) maximization problems, approximability is defined similarly: an α(n)-approximation algorithm should produce a solution of profit at least the optimum divided by α(n). Thus α(n) ≥ 1 for every algorithm A and problem Π.
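Spelled out in code, the definition is simply a guarantee on the ratio between the algorithm's cost and the optimum. The following sketch (a hypothetical helper, for illustration only) checks whether an output respects a claimed factor α.

```python
def within_factor(algorithm_cost, optimum_cost, alpha, minimization=True):
    """Check an approximation guarantee: for minimization, the returned
    cost may exceed the optimum by at most a factor alpha >= 1; for
    maximization, the profit may be below the optimum by that factor."""
    assert alpha >= 1
    if minimization:
        return algorithm_cost <= alpha * optimum_cost
    return algorithm_cost >= optimum_cost / alpha

# For example, a tour of cost 190 against an optimum of 100 is within
# factor 2, while a tour of cost 250 is not.
print(within_factor(190, 100, alpha=2.0))  # True
print(within_factor(250, 100, alpha=2.0))  # False
```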
The NP-completeness theory says
that for the optimization problems listed above, finding a 1-approximate solution
(that is, the optimum one) is as hard as
theorem-proving. However, for some
NP-complete minimization problems,
it may be possible to find a solution of
cost, say, at most twice the optimum
in polynomial time for every input.
Indeed this happens for the travelling
salesman problem on a metric space
(a space where distances satisfy the triangle
inequality). If one finds a minimum