prove the iteration invariant for the sparse case, Theorem 3.
Indeed, we have
\[
\|x - a^{k+1}\|_2 \le 0.5\,\|x - a^k\|_2 + 7.5\,\|e\|_2 .
\]
To obtain the second bound in Theorem 3, we solve this error
recursion, starting from a^0 = 0, and observe that
\[
\|x - a^k\|_2 \le 2^{-k}\,\|x\|_2 + \Big(1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots\Big) \cdot 7.5\,\|e\|_2
\le 2^{-k}\,\|x\|_2 + 15\,\|e\|_2 .
\]
This completes the argument.
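To see the recursion solve concretely, here is a minimal numerical sketch, not taken from the paper: the values standing in for ‖x‖₂ and ‖e‖₂ are hypothetical, and the loop simply unrolls the worst case of the per-iteration bound and checks it against the closed-form solution.

```python
# Minimal sketch (hypothetical values): unroll the worst case of the
# recursion  err_{k+1} = 0.5 * err_k + 7.5 * noise  with a^0 = 0,
# so err_0 = ||x||_2, and check it against the closed-form bound
# 2^{-k} * ||x||_2 + 15 * noise from Theorem 3.
norm_x = 4.0     # stands in for ||x||_2
noise = 0.01     # stands in for ||e||_2

err = norm_x     # err_0 = ||x - a^0||_2 = ||x||_2
for k in range(25):
    closed_form = 2.0 ** (-k) * norm_x + 15.0 * noise
    assert err <= closed_form + 1e-12, (k, err, closed_form)
    err = 0.5 * err + 7.5 * noise   # worst case of the iteration invariant
print("recursion solution stays below 2^{-k}||x||_2 + 15||e||_2")
```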
5.3. Extension to general signals
We now turn to the proof of the main result for CoSaMP,
Theorem A. We must remove the assumption that the signal
x is sparse. Although this task seems difficult at first, it turns
out to have an elegant solution: we can view the noisy samples
of a generic signal as samples of a sparse signal corrupted by
additional noise that reflects the tail of the signal. We have
the following reduction to the sparse case, of which Theorem 2
is a consequence.
Lemma 5 (Reduction to the Sparse Case¹⁹). Let x be an arbitrary
vector in C^N. The sample vector u = Φx + e can also be
expressed as u = Φx_s + ẽ, where
\[
\|\tilde{e}\|_2 \le 1.05\Big(\|x - x_s\|_2 + \frac{1}{\sqrt{s}}\,\|x - x_s\|_1\Big) + \|e\|_2 .
\]
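The decomposition in Lemma 5 is purely algebraic: the tail x − x_s is folded into the noise term. The sketch below checks the identity numerically, with a hypothetical Gaussian matrix standing in for the sampling matrix Φ; note that the stated bound on ‖ẽ‖₂ additionally relies on Φ satisfying the restricted isometry property, so the final comparison is only expected to hold for such matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, s = 40, 256, 8                      # hypothetical problem sizes
Phi = rng.standard_normal((m, N)) / np.sqrt(m)  # stand-in sampling matrix

x = 0.01 * rng.standard_normal(N)         # small tail everywhere ...
x[:s] += 10 * rng.standard_normal(s)      # ... plus s large entries
e = 0.01 * rng.standard_normal(m)         # measurement noise

idx = np.argsort(np.abs(x))[-s:]          # support of the s largest entries
xs = np.zeros(N)
xs[idx] = x[idx]                          # best s-sparse approximation x_s

u = Phi @ x + e
e_tilde = Phi @ (x - xs) + e              # tail folded into the noise term
assert np.allclose(u, Phi @ xs + e_tilde)  # u = Phi x_s + e~, exactly

tail = x - xs
bound = 1.05 * (np.linalg.norm(tail)
                + np.linalg.norm(tail, 1) / np.sqrt(s)) + np.linalg.norm(e)
print(np.linalg.norm(e_tilde), "<=", bound)
```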
This reduction now allows us to prove the iteration
invariant for general signals, Theorem 2. Let x be a general
signal. By Lemma 5, we can write the noisy vector of samples
as u = Φx_s + ẽ. Applying the sparse iteration invariant,
Theorem 3, we obtain
\[
\|x_s - a^k\|_2 \le 2^{-k}\,\|x_s\|_2 + 15\,\|\tilde{e}\|_2 .
\]
We then invoke the lower and upper triangle inequalities to
obtain
\[
\|x - a^k\|_2 \le \|x - x_s\|_2 + 2^{-k}\big(\|x\|_2 + \|x - x_s\|_2\big) + 15\,\|\tilde{e}\|_2 .
\]
Finally, recalling the estimate for ‖ẽ‖₂ from Lemma 5 and
simplifying, we reach
\[
\|x - a^k\|_2 \le 2^{-k}\,\|x\|_2 + 20\,\nu ,
\]
where ν is the unrecoverable energy (Equation 4).
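As a final illustration, again under hypothetical problem sizes rather than anything from the paper, the sketch below evaluates the unrecoverable energy ν from Equation 4 and tabulates the bound 2^{−k}‖x‖₂ + 20ν, which halves with each iteration until it flattens at the noise floor.

```python
import numpy as np

def unrecoverable_energy(x, s, e):
    """nu = ||x - x_s||_2 + ||x - x_s||_1 / sqrt(s) + ||e||_2 (Equation 4)."""
    xs = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    xs[idx] = x[idx]
    tail = x - xs
    return (np.linalg.norm(tail)
            + np.linalg.norm(tail, 1) / np.sqrt(s)
            + np.linalg.norm(e))

rng = np.random.default_rng(1)
x = 0.01 * rng.standard_normal(256)      # nearly sparse: small tail ...
x[:8] += 10 * rng.standard_normal(8)     # ... plus 8 large entries
e = 0.01 * rng.standard_normal(40)       # measurement noise
nu = unrecoverable_energy(x, s=8, e=e)

# Theorem 2: ||x - a^k||_2 <= 2^{-k} ||x||_2 + 20 nu; the first term
# halves each iteration, the second is the fixed noise floor.
for k in (0, 2, 4, 8, 16):
    print(k, 2.0 ** (-k) * np.linalg.norm(x) + 20 * nu)
```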
References
1. Candès, E., Romberg, J., Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete Fourier information. IEEE Trans. Info. Theory 52, 2 (Feb. 2006), 489–509.
2. Candès, E.J. The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, Serie I 346 (2008), 589–592.
3. Candès, E.J., Romberg, J., Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pur. Appl. Math. 59, 8 (2006), 1207–1223.
4. Candès, E.J., Tao, T. Decoding by linear programming. IEEE Trans. Info. Theory 51 (2005), 4203–4215.
5. Candès, E.J., Tao, T. Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Info. Theory 52, 12 (Dec. 2006), 5406–5425.
6. Cohen, A., Dahmen, W., DeVore, R. Compressed sensing and best k-term approximation. J. Am. Math. Soc. 22, 1 (2009), 211–231.
7. Cormen, T., Leiserson, C., Rivest, R., Stein, C. Introduction to Algorithms, 2nd edition. MIT Press, Cambridge, MA, 2001.
8. Cormode, G., Muthukrishnan, S. Combinatorial algorithms for compressed sensing. Technical report, DIMACS, 2005.
9. Dai, W., Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Info. Theory 55, 5 (2009).
10. Donoho, D.L. Compressed sensing. IEEE Trans. Info. Theory 52, 4 (Apr. 2006), 1289–1306.
11. Donoho, D.L., Stark, P.B. Uncertainty principles and signal recovery. SIAM J. Appl. Math. 49, 3 (June 1989), 906–931.
12. Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.-L. Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit (StOMP). Submitted for publication (2007).
13. Foucart, S. A note on guaranteed sparse recovery via ℓ1-minimization. Appl. Comput. Harmon. Anal. (2010). To appear.
14. Gilbert, A., Strauss, M., Tropp, J., Vershynin, R. Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors. In Proceedings of the 44th Annual Allerton Conference on Communication, Control and Computing (Sept. 2006).
Deanna Needell (dneedell@stanford.edu), Stanford University, Stanford, CA.

Joel A. Tropp (jtropp@acm.caltech.edu), California Institute of Technology, Pasadena, CA.