Other algorithms based on OMP have been formulated,
such as regularized OMP, or ROMP, 20 developed by Needell
and Vershynin. ROMP is noteworthy because the authors
establish that under the restricted isometry property, the algorithm can approximately recover compressible signals from
noisy samples. The error bound and measurement requirements provided by these results are analogous to those of
the convex optimization method, modulo parasitic logarithmic terms. Our results for CoSaMP remove all the extraneous
factors, so the performance of CoSaMP is essentially optimal.
Finally, we mention that there is a class of algorithms for
signal recovery that have sublinear (in the signal dimension)
runtime. Some examples are the Fourier sampling algorithm
(FS) of Gilbert et al., 16 chaining pursuit, 14 and HHS pursuit. 15
These methods are very fast, but they require highly structured measurements.
Table 2 summarizes the behavior of the above algorithms
in terms of the following five criteria.
General samples. Does the algorithm succeed for a variety
of sampling schemes, or does it require structured samples?
The designation “RIP” implies that a bound on a restricted
isometry constant suffices. “Gauss” means that the algorithm succeeds when the sampling matrix F has iid sub-gaussian entries.
Optimal number of samples. Can the algorithm reconstruct s-sparse signals from a near-optimal number of measurements, m = O(s log N)? Or are its sampling requirements higher (by logarithmic factors)?
Uniformity. Given a fixed sampling matrix, does the algorithm recover all signals? Or do the results hold with high
probability only for each fixed signal?
Stability. Does the algorithm succeed when the signal is compressible (but not sparse) and the samples are noisy?
Running time. In the worst case (without fast multiplies), how long does it take the algorithm to recover a real-valued s-sparse signal within a fixed relative precision, given a sampling matrix with no special structure? The designation LP(N, m) represents the cost of solving a linear program with N variables and m constraints (O(m^2 N^{1.5}) using an interior-point method).
Under all of these metrics, CoSaMP achieves the best
performance out of the linear and superlinear methods.
Table 2. Comparison of several signal recovery algorithms

Algorithm            General samples   Opt. samples   Uniformity   Stability   Running time
CoSaMP               RIP               Yes            Yes          Yes         O(mN)
ROMP                 RIP               No             Yes          Yes         O(smN)
Convex optimization  RIP               Yes            Yes          Yes         LP(N, m)
OMP                  Gauss             Yes            No           ?           O(smN)
Fourier sampling     No                No             No           Yes         s polylog(N)
HHS pursuit          No                No             Yes          Yes         poly(s log N)
Of course, CoSaMP is slower than the sublinear algorithms,
but it allows more general sampling matrices and demands
fewer samples, which makes it more adaptable to practical
applications. CoSaMP delivers the same guarantees as the best convex optimization approaches, as well as rigorous bounds on computational cost and storage that are comparable with those of faster greedy methods. Thus, CoSaMP is essentially optimal in every important respect.
5. PROOF OF RESULTS
The CoSaMP algorithm uses an approach inspired by the
restricted isometry property to identify the largest s
components of the target signal. Owing to the restricted isometry property, each set of s components of the proxy vector
y = F*Fx approximates the energy in the corresponding s
components of x. Since the samples are of the form u = Fx,
we can obtain the proxy by applying the matrix F* to the samples u. Once the set T of significant locations is identified, the signal coefficients can be recovered using the pseudoinverse F_T†.
The algorithm repeats this idea at each iteration, updating the samples to reflect the residual (the part of the signal
yet to be approximated). We use least squares to estimate
the signal on this support set and repeat this process until
the recoverable energy in the signal has been found.
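To make these steps concrete, here is a minimal NumPy sketch of the loop just described; the function name cosamp, the iteration cap, and the halting tolerance are illustrative choices rather than part of the analysis.

```python
import numpy as np

def cosamp(F, u, s, max_iter=50, tol=1e-6):
    """Sketch: recover an (approximately) s-sparse x from samples u = F @ x."""
    N = F.shape[1]
    a = np.zeros(N)                              # current approximation
    v = u.copy()                                 # samples of the residual
    for _ in range(max_iter):
        y = F.conj().T @ v                       # proxy of the residual signal
        omega = np.argsort(np.abs(y))[-2 * s:]   # 2s largest proxy components
        T = np.union1d(omega, a.nonzero()[0])    # merge with current support
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(F[:, T], u, rcond=None)[0]  # least squares on T
        keep = np.argsort(np.abs(b))[-s:]        # prune to the s largest coefficients
        a = np.zeros(N)
        a[keep] = b[keep]
        v = u - F @ a                            # update the residual samples
        if np.linalg.norm(v) <= tol * np.linalg.norm(u):
            break
    return a
```

In practice, the least-squares step is usually carried out with a few iterations of an iterative solver (for example, conjugate gradient on the normal equations) rather than by forming a pseudoinverse explicitly.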
We now outline the proof of Theorem 2.
5.1. Consequences of the RIP
We begin with some important consequences of the
restricted isometry property. Omitted proofs appear in
Needell and Tropp. 19
Proposition 1 (Consequences). Suppose F has restricted isometry constant δ_r. Let T be a set of r indices or fewer. Then

    ||F_T* u||_2 ≤ √(1 + δ_r) ||u||_2,
    ||F_T† u||_2 ≤ ||u||_2 / √(1 − δ_r),
    ||F_T* F_T x||_2 ≶ (1 ± δ_r) ||x||_2,
    ||(F_T* F_T)^{-1} x||_2 ≶ ||x||_2 / (1 ± δ_r),

where the last two statements contain an upper and lower bound, depending on the sign chosen.
A corollary of these bounds is the fact that disjoint sets of
columns from the sampling matrix span nearly orthogonal
subspaces.
Proposition 2 (Orthogonality). Suppose F has restricted isometry constant δ_r. Let S and T be disjoint sets of indices whose combined cardinality does not exceed r. Then

    ||F_S* F_T|| ≤ δ_r.
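As a quick illustration of both propositions, the following sketch draws an i.i.d. Gaussian sampling matrix (the dimensions and variable names are arbitrary choices for the example): the spread of the squared singular values of a column submatrix F_T gives an empirical lower estimate of δ_r, and Proposition 2 says the spectral norm ||F_S* F_T|| for disjoint S and T is bounded by that same constant.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, r = 200, 1000, 20                        # illustrative dimensions only
F = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian sampling matrix

T = rng.choice(N, size=r, replace=False)       # a set of r column indices
sv = np.linalg.svd(F[:, T], compute_uv=False)
# Proposition 1 rests on the singular values of F_T lying in
# [sqrt(1 - delta_r), sqrt(1 + delta_r)]; the squared values' deviation
# from 1 is therefore a lower estimate of delta_r for this particular T.
print("range of squared singular values:", sv.min() ** 2, sv.max() ** 2)

S, T2 = T[: r // 2], T[r // 2:]                # disjoint sets with |S| + |T2| = r
cross = np.linalg.norm(F[:, S].T @ F[:, T2], 2)
# Proposition 2: the spectral norm ||F_S^* F_T|| is at most delta_r.
print("||F_S* F_T|| =", cross)
```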
We apply this result through a corollary.
Corollary 1. Suppose F has restricted isometry constant δ_r.
Let T be a set of indices, and let x be a vector. Provided that