This subsequent research has shown that there is much
more work to be done in this area. Work on compressed
sensing, matrix completion, and their generalizations
has shown that convex optimization can be used to solve
a myriad of hard identification problems at nearly optimal
rates. But the picture is likely much broader than what we
currently understand. There are likely notions of simplicity
beyond rank and sparsity that can also be leveraged in
high-dimensional data analysis to open new frontiers in
low-rate sampling. New work by Chandrasekaran et al.11
develops a unified program for recovering simple signals
and objects from incomplete information, illustrating a
general approach for translating expert domain knowledge
into convex optimization algorithms. This work not only
generalizes prior art on compressed sensing and matrix
completion but also provides several new models where
low-rate sampling can recover specially structured models.
Such developments suggest that we have only begun to
scratch the surface of the types of models and objects that
may be recovered from highly incomplete information.
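Several of the cited algorithms make this concrete. Below is a minimal sketch, assuming NumPy, of matrix completion by singular value thresholding in the spirit of Cai et al.4 The threshold and step size follow the heuristics suggested in that line of work; the function name and the small synthetic test matrix are illustrative only, not code from any of the referenced papers.

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=500):
    """Matrix completion by singular value thresholding (SVT):
    alternately soft-threshold the singular values of an auxiliary
    matrix Y and take a gradient step on the observed entries."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y += delta * mask * (M_obs - X)                 # correct on observed set
    return X

# Illustrative run: a rank-2 matrix with roughly half its entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = (rng.random(M.shape) < 0.5).astype(float)
p = mask.mean()                      # fraction of entries observed
X_hat = svt_complete(M * mask, mask, tau=5 * 40, delta=1.2 / p)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
```

Each iteration costs one SVD; soft-thresholding the singular values is the proximal operator of the nuclear norm, which is what makes the convex relaxation tractable in practice.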
1. Amit, Y., Fink, M., Srebro, N., Ullman, S. Uncovering shared structures in multiclass classification. In Proceedings of the Twenty-Fourth International Conference on Machine Learning.
2. Argyriou, A., Evgeniou, T., Pontil, M. Multi-task feature learning. In Neural Information Processing Systems.
3. Beck, C., D'Andrea, R. Computational study and comparisons of LFT reducibility methods. In Proceedings of the American Control Conference.
4. Cai, J.F., Candès, E.J., Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20, 4 (2008), 1956–1982.
5. Candès, E.J., Li, X., Ma, Y., Wright, J. Robust principal component analysis? Submitted for publication. Preprint available at http://www-stat.stanford.
6. Candès, E.J., Plan, Y. Matrix completion with noise. Proc. IEEE 98, 6 (2009), 925–936.
7. Candès, E.J., Romberg, J. Sparsity and incoherence in compressive sampling. Inverse Probl. 23, 3 (2007).
8. Candès, E.J., Romberg, J., Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theor. 52, 2 (2006).
9. Candès, E.J., Tao, T. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theor. 56, 5 (2009).
10. Chandrasekaran, V., Parrilo, P.A., Willsky, A.S. Latent variable graphical model selection via convex optimization. Submitted for publication. Preprint available at http://ssg.mit.edu/~venkatc/cpw_
11. Chandrasekaran, V., Recht, B., Parrilo, P.A., Willsky, A. The convex geometry of linear inverse problems. Submitted for publication. Preprint available at arXiv.
24. Koren, Y., Bell, R., Volinsky, C. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30–37.
25. Liberty, E., Woolfe, F., Martinsson, P.G., Rokhlin, V., Tygert, M. Randomized algorithms for the low-rank approximation of matrices. Proc. Nat. Acad. Sci. 104, 51 (2007), 20167–
26. Linial, N., London, E., Rabinovich, Y. The geometry of graphs and some of its algorithmic applications. Combinatorica 15.
27. Ma, S., Goldfarb, D., Chen, L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 128, 1 (2011), 321–353.
28. Negahban, S., Wainwright, M.J. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Submitted for publication. Preprint available at arxiv.org/abs/1009.2118.
29. Recht, B. A simpler approach to matrix completion. J. Mach. Learn. Res. (2009); to appear 2012. Preprint available at http://arxiv.org/
30. Recht, B., Fazel, M., Parrilo, P. Guaranteed minimum rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52, 3 (2010).
31. Rennie, J.D.M., Srebro, N. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the International Conference on Machine Learning.
32. So, A.M.C., Ye, Y. Theory of semidefinite programming for sensor network localization. Math. Program., Ser. B 109 (2007).
33. Sturm, J.F. Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones. Optim. Meth. Software 11–12 (1999).
34. Toh, K.-C., Yun, S. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pac. J. Optim. 6.

Emmanuel Candès (candes@stanford.edu), Departments of Mathematics and Statistics, Stanford University. Candès was partially supported by a National Science Foundation grant CCF-515362, by the 2006 Waterman Award (NSF), and by an ONR grant.

Benjamin Recht (email@example.com), Department of Computer Sciences, University of Wisconsin-Madison, WI. This work was performed while Recht was a postdoc at the Center for the Mathematics of Information at the California Institute of Technology. He was partially funded by ONR grant N00014-08-1-0749.
© 2012 ACM 0001-0782/12/06 $10.00