r ≥ |T ∪ supp(x)|,
The last consequence of the restricted isometry property
that we will need measures how much the sampling matrix
inflates nonsparse vectors. Its proof uses arguments from
functional analysis.
Proposition 3 (Energy Bound). Suppose F verifies the upper inequality of (3), viz.
$$\|Fx\|_2 \le \sqrt{1+\delta_r}\,\|x\|_2 \quad \text{when } \|x\|_0 \le r.$$
Then, for every signal x,
$$\|Fx\|_2 \le \sqrt{1+\delta_r}\left[\|x\|_2 + \frac{1}{\sqrt{r}}\,\|x\|_1\right].$$
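For intuition, this bound can be sanity-checked numerically on a tiny instance. The following Python toy sketch is our own (not from the paper); it finds the best constant for r-sparse vectors by brute force over supports and then tests the bound on a dense signal:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
m, N, r = 5, 8, 2
F = rng.standard_normal((m, N)) / np.sqrt(m)

# Smallest c with ||F z||_2 <= sqrt(c) ||z||_2 for every r-sparse z:
# the largest squared spectral norm of F_T over all supports |T| = r.
c = max(np.linalg.norm(F[:, list(T)], 2) ** 2
        for T in combinations(range(N), r))

# Proposition 3 then bounds ||F x||_2 for an arbitrary (dense) signal x.
x = rng.standard_normal(N)
lhs = np.linalg.norm(F @ x)
rhs = np.sqrt(c) * (np.linalg.norm(x) + np.linalg.norm(x, 1) / np.sqrt(r))
assert lhs <= rhs
```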
5.2. Iteration invariant: sparse case
In proving Theorem 2, we first operate under the assumption
that the signal x is exactly s-sparse. We remove this assumption later.
Theorem 3 (Sparse Error Reduction). Assume x is s-sparse. For each k ≥ 0, the signal approximation $a^k$ is s-sparse, and
$$\|x - a^{k+1}\|_2 \le 0.5\,\|x - a^k\|_2 + 7.5\,\|e\|_2.$$
In particular,
$$\|x - a^k\|_2 \le 2^{-k}\,\|x\|_2 + 15\,\|e\|_2.$$
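The second bound follows from the first by unrolling the recursion and summing the geometric series, using the fact that CoSaMP starts from the trivial approximation $a^0 = 0$:
$$\|x - a^k\|_2 \le 0.5^k\,\|x - a^0\|_2 + 7.5\,\|e\|_2 \sum_{j=0}^{k-1} 0.5^j \le 2^{-k}\,\|x\|_2 + 15\,\|e\|_2.$$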
Remark 3. This bound assumes the least-squares step in the algorithm is performed without error. In Section 5 of Needell and Tropp,19 we study how many iterations of a least-squares solver are needed to make the resulting error negligible. We show that, if we use conjugate gradient for the estimation step of CoSaMP and initialize the least-squares solver with the current approximation $a^{k-1}$, then Theorem 3 still holds.
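To illustrate the point, here is a minimal Python sketch of that warm-started estimation step (our own rendering, not the authors' code; the helper name estimate_cg and the use of SciPy's conjugate gradient routine on the normal equations are assumptions):

```python
import numpy as np
from scipy.sparse.linalg import cg

def estimate_cg(F, u, T, a_prev):
    """Least-squares estimate on the support T, solved by conjugate
    gradient and warm-started with the previous approximation a^{k-1}."""
    FT = F[:, T]                      # columns of the sampling matrix on T
    # Normal equations: (F_T* F_T) b_T = F_T* u
    bT, _ = cg(FT.T @ FT, FT.T @ u, x0=a_prev[T])
    b = np.zeros(F.shape[1])
    b[T] = bT
    return b
```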
The proof of the iteration invariant, Theorem 3, involves a sequence of short lemmas, one for each step in the algorithm. We fix an iteration k, and let $a = a^{k-1}$ be the signal approximation at the beginning of the iteration. We define the residual as $r = x - a$, which must be 2s-sparse since both a and x are s-sparse. We also observe that the vector v of updated samples can be interpreted as noisy samples of the residual:
$$v = u - Fa = F(x - a) + e = Fr + e.$$
The first step in the algorithm is the identification step,
in which a set of components is found corresponding to
locations where the residual signal has a lot of energy. This
is summarized by the following lemma, which is proven in Needell and Tropp.19
Lemma 1 (Identification). The set $W = \mathrm{supp}(y_{2s})$, where $y = F^*v$ is the signal proxy, contains at most 2s indices, and
$$\|r|_{W^c}\|_2 \le 0.2223\,\|r\|_2 + 2.34\,\|e\|_2.$$
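In code, the identification step is just a proxy computation followed by hard thresholding. A minimal sketch in the same style as above (real-valued case; the name identify is ours):

```python
import numpy as np

def identify(F, v, s):
    """Identification: indices of the 2s largest-energy proxy entries."""
    y = F.T @ v                          # signal proxy y = F*v (real case)
    return np.argsort(np.abs(y))[-2 * s:]
```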
Next, the algorithm merges the support of the current
estimate a with the new selection of components. The following simple lemma shows that components of the signal
x outside this set have small energy.
Lemma 2 (Support Merger). Let W be a set of at most 2s indices. The set $T = W \cup \mathrm{supp}(a)$ contains at most 3s indices, and
$$\|x|_{T^c}\|_2 \le \|r|_{W^c}\|_2.$$
The estimation step in the algorithm obtains values
for coefficients in the set T by solving a least-squares
problem. The next result bounds the error of this
approximation.
Lemma 3 (Estimation19). Let T be a set of at most 3s indices, and define the least-squares signal estimate b by the formulae
$$b|_T = F_T^{\dagger}u, \qquad b|_{T^c} = 0,$$
where $u = Fx + e$. Then
$$\|x - b\|_2 \le 1.112\,\|x|_{T^c}\|_2 + 1.06\,\|e\|_2.$$
Proof. Note first that
$$\|x - b\|_2 \le \|x|_{T^c}\|_2 + \|x|_T - b|_T\|_2.$$
Using the expression $u = Fx + e$ and the fact that $F_T^{\dagger}F\,x|_T = x|_T$, we calculate that
$$\|x|_T - b|_T\|_2 = \|x|_T - F_T^{\dagger}(Fx + e)\|_2 = \|F_T^{\dagger}(F\,x|_{T^c} + e)\|_2 \le \|F_T^{\dagger}F\,x|_{T^c}\|_2 + \|F_T^{\dagger}e\|_2.$$
The cardinality of T is at most 3s, and x is s-sparse, so Proposition 1 and Corollary 1 imply that
$$\|F_T^{\dagger}F\,x|_{T^c}\|_2 + \|F_T^{\dagger}e\|_2 \le \frac{\delta_{4s}}{1-\delta_{3s}}\,\|x|_{T^c}\|_2 + \frac{\|e\|_2}{\sqrt{1-\delta_{3s}}}.$$
Combine the bounds to reach
$$\|x - b\|_2 \le \left[1 + \frac{\delta_{4s}}{1-\delta_{3s}}\right]\|x|_{T^c}\|_2 + \frac{\|e\|_2}{\sqrt{1-\delta_{3s}}}.$$
Finally, invoke the hypothesis that $\delta_{3s} \le \delta_{4s} \le 0.1$ to obtain the stated constants.
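A direct (non-iterative) rendering of this estimation step, for comparison with the conjugate gradient variant of Remark 3; again a sketch of our own, with numpy's least-squares routine standing in for the pseudoinverse:

```python
import numpy as np

def estimate(F, u, T):
    """Estimation: b|_T = pinv(F_T) u, and b = 0 off the support T."""
    b = np.zeros(F.shape[1])
    b[T], *_ = np.linalg.lstsq(F[:, T], u, rcond=None)
    return b
```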
Lemma 4 (Pruning19). The pruned approximation $b_s$ satisfies
$$\|x - b_s\|_2 \le 2\,\|x - b\|_2.$$
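Putting the four steps together, one full CoSaMP iteration reads as follows in the same hypothetical sketch style (identification, support merger, estimation, pruning; not the authors' implementation):

```python
import numpy as np

def cosamp_iteration(F, u, a, s):
    """One CoSaMP iteration: maps the s-sparse a^{k-1} to the s-sparse a^k."""
    v = u - F @ a                                  # updated samples, v = Fr + e
    y = F.T @ v                                    # signal proxy
    W = np.argsort(np.abs(y))[-2 * s:]             # identification (Lemma 1)
    T = np.union1d(W, np.flatnonzero(a))           # support merger (Lemma 2)
    b = np.zeros(F.shape[1])
    b[T], *_ = np.linalg.lstsq(F[:, T], u, rcond=None)  # estimation (Lemma 3)
    keep = np.argsort(np.abs(b))[-s:]              # pruning (Lemma 4)
    a_new = np.zeros_like(b)
    a_new[keep] = b[keep]
    return a_new
```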
At the end of the iteration, the new approximation is formed by setting $a^k = b_s$. The sequence of lemmas above allows us to