Table 1. Operation count for CoSaMP.

Step                    Standard multiply    Fast multiply
Form proxy              mN                   ℒ
Identification          N                    N
Support merger          s                    s
LS estimation           sm                   ℒ
Pruning                 s                    s
Sample update           sm                   ℒ
Total per iteration     O(mN)                O(ℒ)
Pruning. This step can be implemented in time O(s), but it may be preferable to sort the components of the vector and then select the first s at a cost of O(s log s).
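The selection-versus-sorting tradeoff is easy to see in code. The sketch below is only illustrative (the vector b_T of least-squares coefficients on the merged support and the sparsity s are assumed names), not the authors' implementation.

```python
import numpy as np

def prune(b_T, s):
    """Keep the s largest-magnitude entries of b_T and zero the rest.

    np.argpartition performs a linear-time selection, so the cost is
    O(|T|) = O(s) when b_T lives on the merged support; replacing it with
    np.argsort(np.abs(b_T))[-s:] sorts first, costing O(s log s) but also
    returning the kept entries in order.
    """
    keep = np.argpartition(np.abs(b_T), -s)[-s:]
    pruned = np.zeros_like(b_T)
    pruned[keep] = b_T[keep]
    return pruned
```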
Sample update. This step is dominated by the cost of the multiplication of F with the s-sparse vector a^k.
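Because a^k has at most s nonzero entries, the product can be formed from only the columns of F on its support, which is where the sm count in Table 1 comes from. A minimal sketch, with assumed names:

```python
import numpy as np

def sample_update(F, a):
    """Multiply F by a sparse vector a using only the columns on its
    support: roughly s*m operations instead of m*N for a dense multiply."""
    supp = np.flatnonzero(a)
    return F[:, supp] @ a[supp]
```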
Table 1 summarizes the cost of each step for the cases in
which the standard and the fast multiply are used.
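Putting the steps of Table 1 together, one iteration might look as follows in NumPy. This is a rough sketch with dense arithmetic and a generic least-squares solve; the names (F, u, a) and the real-valued setting are assumptions, not the implementation analyzed above.

```python
import numpy as np

def cosamp_iteration(F, u, a, s):
    """One illustrative CoSaMP iteration.

    F : m x N sampling matrix, u : length-m samples,
    a : current s-sparse approximation, s : target sparsity.
    """
    v = u - F @ a                                        # residual of the samples
    y = F.T @ v                                          # form proxy (mN with a standard multiply)
    Omega = np.argpartition(np.abs(y), -2 * s)[-2 * s:]  # identification: 2s largest proxy entries
    T = np.union1d(Omega, np.flatnonzero(a))             # support merger
    sol, *_ = np.linalg.lstsq(F[:, T], u, rcond=None)    # LS estimation restricted to T
    b = np.zeros_like(a)
    b[T] = sol
    keep = np.argpartition(np.abs(b), -s)[-s:]           # pruning: keep the s largest
    a_next = np.zeros_like(a)
    a_next[keep] = b[keep]
    return a_next                                        # sample update happens at the top of the next call
```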
Finally, we may analyze the storage requirements of the
algorithm. Aside from the sampling matrix storage requirement, the algorithm constructs only one vector of length N,
the signal proxy. The sample vectors u and v have length m,
and thus require O(m) storage. Since the signal approximations are sparse, they can be stored using sparse data structures which require at most O(s log N) storage. Similarly, the
index sets that appear require only O(s log N) storage. The
total working storage is therefore O(N).
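To illustrate the sparse bookkeeping, an s-sparse approximation can be kept as parallel index/value arrays instead of a dense length-N vector. These helpers are hypothetical conveniences, not part of the algorithm.

```python
import numpy as np

def to_sparse(a):
    """Store a sparse vector as (indices, values): the indices occupy
    O(s log N) bits and the values O(s) words, versus O(N) for dense storage."""
    idx = np.flatnonzero(a)
    return idx, a[idx]

def to_dense(idx, vals, N):
    """Expand the (indices, values) pair back to a dense length-N vector."""
    a = np.zeros(N, dtype=vals.dtype)
    a[idx] = vals
    return a
```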
The following result summarizes these observations.
Theorem 1 (Resource Requirements). Each iteration
of CoSaMP requires O(ℒ) time, where ℒ bounds the cost of a
multiplication with the matrix F or F*. The algorithm uses
working storage O(N).
3.3. Error reduction bound
The theoretical analysis of the CoSaMP algorithm is based on
an estimate for how much the algorithm reduces the approximation error in each iteration. This result demonstrates that
the algorithm makes significant progress whenever the error
is substantially larger than a certain baseline value. We call
this baseline the unrecoverable energy ν in the signal:

    ν = ||x − x_s||_2 + (1/√s) ||x − x_s||_1 + ||e||_2.
The unrecoverable energy is due to the noise in the samples and the fact that the signal is not exactly sparse. For a
detailed discussion, see Section 2.5 of Needell and Tropp.19
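As a concrete illustration, the unrecoverable energy is straightforward to evaluate for a given signal and noise vector. The sketch below simply implements the definition displayed above; the inputs are assumed.

```python
import numpy as np

def unrecoverable_energy(x, s, e):
    """nu = ||x - x_s||_2 + ||x - x_s||_1 / sqrt(s) + ||e||_2,
    where x_s keeps the s largest-magnitude entries of x."""
    tail = x.copy()
    tail[np.argpartition(np.abs(x), -s)[-s:]] = 0.0      # tail = x - x_s
    return (np.linalg.norm(tail)
            + np.linalg.norm(tail, 1) / np.sqrt(s)
            + np.linalg.norm(e))
```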
Theorem 2 (Error Reduction). For each iteration k ≥ 0, the signal approximation a^k is s-sparse and

    ||x − a^{k+1}||_2 ≤ 0.5 ||x − a^k||_2 + 10ν.

In particular,

    ||x − a^k||_2 ≤ 2^{-k} ||x||_2 + 20ν.
The main consequence of Theorem 2 is the fact that, after log2(||x||_2/η) iterations, the approximation error is no greater than η + 20ν. See Needell and Tropp18,19 for more discussion.
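To see where these numbers come from, iterate the bound in Theorem 2 (assuming the standard initialization a^0 = 0, so that ||x − a^0||_2 = ||x||_2):

    ||x − a^k||_2 ≤ 2^{-k} ||x − a^0||_2 + 10ν (1 + 1/2 + 1/4 + ···) ≤ 2^{-k} ||x||_2 + 20ν,

so after log2(||x||_2/η) iterations the first term drops below η.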
The statement of Theorem A depends on a simple upper bound for the l2 term in the unrecoverable energy; this estimate is a consequence of Lemma 7 of Gilbert et al.15
4. RELATED WORK
CoSaMP is inspired by algorithmic ideas and analytic techniques that have appeared previously. We briefly summarize
the major algorithmic approaches and compare them with
CoSaMP. This work can be classified in three rough categories: convex relaxation,4,11 greedy pursuits,12,20,24 and combinatorial algorithms.8,14–16
The convex optimization methods1,10 attempt to reconstruct signals by solving the mathematical program

    min ||y||_1   subject to   ||Fy − u||_2 ≤ ε,     (6)

where we assume that ||e||_2 ≤ ε. The intuition behind minimizing the l1 norm is that this norm promotes sparsity, and so the solution to this program should approximate a compressible signal well. Candès et al.3 establish that each minimizer a of Equation 6 satisfies
    ||x − a||_2 ≤ C [ (1/√s) ||x − x_s||_1 + ε ],     (7)

whenever F has restricted isometry constant δ_4s ≤ 0.2. These
restricted isometry constant requirements continue to be
improved.2,13 The error bound for CoSaMP is equivalent to Equation 7, up to the precise value of the constants.
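For readers who want to experiment, program (6) can be posed directly in a convex modeling tool. The sketch below uses CVXPY with a random Gaussian sampling matrix and made-up problem sizes; these are illustrative choices, not anything prescribed by the papers cited here.

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes and data; none of these choices come from the paper.
m, N, s, eps = 80, 256, 8, 0.05
rng = np.random.default_rng(0)
F = rng.standard_normal((m, N)) / np.sqrt(m)        # sampling matrix
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
e = rng.standard_normal(m)
e *= (eps / 2) / np.linalg.norm(e)                  # noise with ||e||_2 <= eps
u = F @ x + e                                       # noisy samples

# Program (6): minimize the l1 norm subject to the data-fidelity constraint.
y = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm(y, 1)),
                     [cp.norm(F @ y - u, 2) <= eps])
problem.solve()
print("l2 recovery error:", np.linalg.norm(y.value - x))
```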