logarithmic in the ambient dimension. A random sign matrix
can be used to construct the random masks required by the
Rice single-pixel camera.
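As a concrete illustration (the code and names here are our own, not part of the original discussion), such a matrix is trivial to generate and apply. A minimal NumPy sketch, assuming the standard 1/√m normalization that gives each column unit norm:

import numpy as np

def random_sign_matrix(m, N, seed=None):
    # m x N matrix of i.i.d. +-1 entries, scaled by 1/sqrt(m) so each column has unit norm.
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(m, N)) / np.sqrt(m)

# Acquire m compressive samples of an s-sparse signal, one random mask per row.
N, m, s = 1024, 128, 10
rng = np.random.default_rng(1)
Phi = random_sign_matrix(m, N, seed=0)
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
u = Phi @ x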
The subsampled Fourier matrix consists of a collection of m rows chosen uniformly at random from the discrete Fourier transform matrix. The best theoretical bound for the restricted isometry constant23 is

$\delta_{2s} \le c \quad \text{when} \quad m \ge C \cdot s \cdot \log^4(N).$

It is conjectured that the actual scaling is $m \approx s \cdot \log(N)$.
subsampled Fourier matrix describes measurements that
can be acquired by an MRI machine. Note that a subsampled Fourier matrix can be applied to a vector using O(N
log N) arithmetic operations, and it requires only O(m log
N) bits of storage. These computational properties are very
important in practice, as we will see.
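To make the O(N log N) claim concrete, here is a minimal sketch (again our own illustration) of how a subsampled Fourier matrix is applied implicitly: one stores only the m selected row indices and reuses the FFT, never forming the m × N matrix.

import numpy as np

def subsampled_fourier_apply(x, rows):
    # Apply an m x N subsampled Fourier matrix to x in O(N log N) time via the FFT.
    # Only the m chosen row indices are stored; a 1/sqrt(N) normalization is omitted.
    return np.fft.fft(x)[rows]

N, m = 1024, 128
rng = np.random.default_rng(0)
rows = rng.choice(N, size=m, replace=False)  # m rows chosen uniformly at random
x = rng.standard_normal(N)
u = subsampled_fourier_apply(x, rows)        # m compressive samples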
2.4. Signal reconstruction via CoSaMP
When the sampling matrix F stably embeds the set of s-sparse
signals, it is possible in principle to recover an arbitrary
s-sparse signal from its samples. The major computational
question in CoSa is to determine how to reconstruct the sparse
signal using a tractable algorithm. For practical applications,
we also need to understand how the reconstruction is affected
when the signal is merely compressible and the samples are
contaminated by noise. The following issues are crucial:
1. Can the algorithm recover the signal under a variety of sampling schemes? In other words, is the class of measurement maps for which the algorithm can succeed large?
2. Can the algorithm reconstruct the signal from a minimal number m of samples?
3. Is signal recovery robust when samples are contaminated with noise or the signal is not exactly sparse? Is the error bound optimal for each target signal?
4. Does the algorithm offer provably efficient computational cost and storage?
This abstract discusses a reconstruction algorithm called
Compressive Sampling Matching Pursuit (CoSaMP) that
satisfies all of the foregoing criteria. Our original publications18, 19 were the first to prove that a CoSa signal reconstruction algorithm could achieve essentially optimal resource
usage.
Properties of the CoSaMP algorithm: Let us state and explain
our main result. We postpone the details of the algorithm
and the analysis until later.
Theorem A (CoSaMP). Suppose that F is an m × N sampling matrix with restricted isometry constant $\delta_{2s} \le c$. Let u = Fx + e be a vector of samples of an arbitrary signal, contaminated with arbitrary noise.

The CoSaMP algorithm takes as input a routine to multiply F and F* by a vector, the vector u of samples, the sparsity parameter s, and a precision parameter $\eta$. The output is an s-sparse signal approximation a that satisfies

$\|x - a\|_2 \le C \cdot \max\{\eta, \; s^{-1/2} \|x - x_{s/2}\|_1 + \|e\|_2\}, \qquad (2)$

where $x_{s/2}$ denotes a best (s/2)-sparse approximation to x. The running time is $O(\mathcal{L} \cdot \log(\|x\|_2 / \eta))$, where $\mathcal{L}$ bounds the cost of a matrix–vector multiply with F or F*. Working storage is O(N). The constants c < 1 and C > 1 are absolute.
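To make this interface concrete, here is a compact sketch of the CoSaMP loop in Python/NumPy. The code and its names are illustrative only: it uses a dense matrix where Theorem A requires nothing more than routines that multiply by F and F*, and a simple residual-based halting rule in place of the precise stopping criterion analyzed later.

import numpy as np

def cosamp(Phi, u, s, eta, max_iter=50):
    # Sketch of CoSaMP: build an s-sparse approximation a to x from u = Phi x + e.
    m, N = Phi.shape
    a = np.zeros(N)
    for _ in range(max_iter):
        r = u - Phi @ a                            # residual in sample space
        y = Phi.conj().T @ r                       # proxy for the residual signal
        Omega = np.argsort(np.abs(y))[-2 * s:]     # locate the 2s largest proxy entries
        T = np.union1d(Omega, np.flatnonzero(a))   # merge with the current support
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(Phi[:, T], u, rcond=None)[0]  # least-squares estimate on T
        a = np.zeros(N)
        keep = np.argsort(np.abs(b))[-s:]          # prune to the s largest entries
        a[keep] = b[keep]
        if np.linalg.norm(u - Phi @ a) <= eta:     # illustrative halting rule
            break
    return a

In the analysis, the least-squares step is itself carried out by an iterative solver whose per-step cost is dominated by multiplies with F and F*, which is how the running time of Theorem A arises; the dense solve above is used only for clarity.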
How does this result deliver the desired performance?
1. The algorithm requires only that the matrix satisfy a bound on the restricted isometry constant. Many types of random matrices (and certain deterministic matrices) have this property. As a consequence, the algorithm can be used with many measurement technologies.
2. The number m of samples required to obtain the bound $\delta_{2s} \le c$ depends on the type of sampling matrix. For many types of random matrices, $m = O(s \log^a N)$ for a small integer a. Since this value is optimal up to the constants, the algorithm can recover signals from a minimal number of samples.
3. The error bound demonstrates that the algorithm succeeds for all signals, even when there is noise in the samples. As we expect, the algorithm performs best for sparse and compressible signals. Indeed, Equation 2 shows that CoSaMP produces an approximation a of a p-compressible signal x that satisfies

$\|x - a\|_2 \le C \cdot \max\{\eta, \; R \cdot s^{1/2 - 1/p} + \|e\|_2\},$

which, as the tail estimate below indicates, is the best possible approximation error we could hope for in general.6
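To see why $R \cdot s^{1/2 - 1/p}$ is the natural benchmark, recall the usual definition of p-compressibility, under which the ith largest entry of x obeys $|x|_{(i)} \le R \cdot i^{-1/p}$. A one-line tail estimate then gives

$\|x - x_s\|_2^2 = \sum_{i > s} |x|_{(i)}^2 \le R^2 \sum_{i > s} i^{-2/p} \le R^2 \int_s^{\infty} t^{-2/p} \, dt = \frac{p}{2 - p} \cdot R^2 \cdot s^{1 - 2/p},$

so even the best s-sparse approximation $x_s$ incurs an error on the order of $R \cdot s^{1/2 - 1/p}$, and no reconstruction algorithm can beat this rate from the samples alone.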
3. The CoSaMP algorithm
The CoSaMP algorithm is, at heart, a greedy iterative method
for reconstructing a signal from compressive samples. This
section provides an overview of the algorithm and its implementation. We also explain the basic idea behind the analysis of the algorithm, which demonstrates that each iteration