by a family of polynomial-depth circuits. There is a natural two-stage process associated with this task that is illustrated in Figure 2. The first stage of this process turns out to be
fairly straightforward, using elementary facts about quantum
computations and bounded-depth circuits. The second stage
is more difficult, and the algorithm for accomplishing it represents the main technical contribution of this work.
The remainder of the paper is devoted to a discussion of this
algorithm.
4. THE MAIN ALGORITHM
The algorithm corresponding to the second stage of the
computation illustrated in Figure 2 will now be described.
As is required to prove QIP = PSPACE, it is a highly parallel
algorithm for approximating the value of the optimization
problem described at the end of Section 2. This optimization problem is an example of a semidefinite program (or
SDP for short): it asks for the maximum value of a linear
function of a collection of matrix variables that are subject to a collection of linear and positive semidefinite
constraints.
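In one common standard form (stated here only for orientation; the Hermitian matrices C, A_1, …, A_m, the real numbers b_1, …, b_m, and the matrix variable X are generic placeholders rather than the notation of this paper), an SDP asks to

    maximize    Trace(CX)
    subject to  Trace(A_i X) ≤ b_i   for i = 1, …, m,
                X ⪰ 0,

where X ⪰ 0 means that X is positive semidefinite. The SDP arising from Section 2 has this general flavor, with its matrix variables ranging over quantum states and its constraints relating their partial traces.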
SDPs are well studied and subject to an analytically
powerful duality theory that will be useful in the section
following this one. There do exist efficient algorithms to
approximate solutions to most SDPs, including the theoretically important ellipsoid method and the more practical
family of interior point methods. Unfortunately, these algorithms do not parallelize, and there is a generally believed complexity-theoretic conjecture (namely, NC ≠ P) that
implies that no other general and efficient algorithm will
parallelize either: some SDPs seem to force algorithms that
approximate them to be inherently sequential. For this reason, the algorithm to be presented below will, presumably
by necessity, exploit the specific nature of the SDP at hand
to allow for its parallelizability. It does not approximate
solutions to general SDPs, only those arising from the specific type of quantum interactive proof system described in
Section 2.
The algorithm employs a method from learning theory
and combinatorial optimization sometimes known as the
matrix multiplicative weights update method, and in particular makes use of a variant of this technique developed by
Manfred Warmuth and Dima Kuzmin16 and Sanjeev Arora and Satyen Kale.1
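As a point of reference, the generic matrix multiplicative weights update maintains a density matrix obtained by exponentiating the negated, scaled sum of all Hermitian "loss" matrices observed so far. The numpy sketch below illustrates only this generic rule; the function name mmw_state, the learning rate eps, and the example matrices are hypothetical, and this is not the specific update used by the algorithm described in the remainder of this section.

    import numpy as np
    from scipy.linalg import expm

    def mmw_state(loss_matrices, eps):
        # Generic matrix multiplicative weights rule (a sketch):
        # exponentiate the negated, scaled sum of the Hermitian loss
        # matrices seen so far, then normalize to unit trace, giving a
        # density matrix that down-weights high-loss directions.
        W = expm(-eps * sum(loss_matrices))
        return W / np.trace(W)

    # Hypothetical usage with two 2 x 2 Hermitian losses and learning rate 0.1.
    M0 = np.array([[1.0, 0.0], [0.0, 0.0]])
    M1 = np.array([[0.5, 0.2], [0.2, 0.5]])
    rho = mmw_state([M0, M1], eps=0.1)
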
The algorithm is iterative in nature: for successive values of t = 0, 1, 2, …, the algorithm will generate K² × K² matrices ρ₀^(t) and ρ₁^(t) that, roughly speaking, correspond to guesses for ρ₀ and ρ₁ in the SDP. In addition, the algorithm also generates a K × K matrix σ^(t) for each t, which intuitively represents a common "target" state for PartialTrace(ρ₀^(t)) and PartialTrace(ρ₁^(t)).
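For concreteness, PartialTrace here maps a K² × K² matrix on a pair of K-dimensional registers to the K × K matrix obtained by tracing out one of the registers. The following minimal numpy sketch assumes it is the second register that is traced out (which register is traced out is a convention not fixed by this excerpt):

    import numpy as np

    def partial_trace(rho, K):
        # Interpret the (K*K) x (K*K) matrix as a four-index tensor over
        # two K-dimensional registers and trace out the second register,
        # leaving a K x K matrix on the first.
        return np.trace(rho.reshape(K, K, K, K), axis1=1, axis2=3)
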
As with many iterative algorithms, there is a balance between the algorithm's rate of convergence and its accuracy. It is essential for the parallelizability of the algorithm that ρ₀^(t), ρ₁^(t), and σ^(t) converge quickly to choices that reveal a near-optimal solution; but for the sake of the algorithm's accuracy it must not move too rapidly in any one direction, for fear of overreacting to poor choices that had the appearance of good ones within a small region of the search space.
1. Set
       ρ₀^(0) ← (1 ⊗ 1)/K²,   ρ₁^(0) ← (1 ⊗ 1)/K²,   and   σ^(0) ← 1/K,
   for 1 denoting the K × K identity matrix. (These matrices represent completely random quantum states, reflecting the fact that no information is present in the initial guesses for ρ₀, ρ₁, and σ.)
2. Repeat for each t = 0, 1, 2, …:
   (a) Compute the K × K matrices
           M₀^(t) ← γσ^(t) − PartialTrace(P₀ ρ₀^(t) P₀)
           M₁^(t) ← γσ^(t) − PartialTrace(P₁ ρ₁^(t) P₁).
   (b) Compute the K × K matrices Δ₀^(t) and Δ₁^(t) that project onto the negative eigenspaces of M₀^(t) and M₁^(t), respectively.
(c) Compute
           β₀^(t) ← Trace((Δ₀^(t) ⊗ 1) P₀ ρ₀^(t) P₀)