Abstract: A hallmark of modern machine learning is its ability to deal with high-dimensional
problems by exploiting structural assumptions that limit the degrees of freedom in
the underlying model. A deep understanding of the capabilities and limits of high-dimensional
learning methods under specific assumptions such as sparsity, group
sparsity, and low rank has been attained. Efforts [1, 2] are now underway to distill
this valuable experience by proposing general unified frameworks that can achieve
the twin goals of summarizing previous analyses and enabling their application
to notions of structure hitherto unexplored. Inspired by these developments, we
propose and analyze a general computational scheme based on a greedy strategy
to solve convex optimization problems that arise when dealing with structurally
constrained high-dimensional problems. Our framework not only unifies existing
greedy algorithms by recovering them as special cases but also yields novel ones.
Finally, we extend our results to infinite dimensional settings by using interesting
connections between smoothness of norms and behavior of martingales in Banach
spaces.
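To make the abstract's "greedy strategy" concrete, here is a minimal illustrative sketch of one well-known member of this family of methods, a Frank-Wolfe-style greedy update for a least-squares objective constrained to an l1-ball (the sparsity case mentioned above). This is not the paper's algorithm, only a generic example of greedy atom selection for a structurally constrained convex problem; all names and parameters here are assumptions for illustration.

```python
import numpy as np

def frank_wolfe_l1(A, b, radius=1.0, iters=100):
    """Illustrative greedy (Frank-Wolfe) minimization of
    0.5 * ||Ax - b||^2 over the l1-ball of the given radius.

    Each iteration greedily picks the single coordinate (atom) whose
    gradient entry has the largest magnitude, then moves toward the
    corresponding signed vertex of the l1-ball.
    """
    n = A.shape[1]
    x = np.zeros(n)
    for t in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the quadratic loss
        i = int(np.argmax(np.abs(grad)))    # greedy atom selection
        s = np.zeros(n)
        s[i] = -radius * np.sign(grad[i])   # vertex minimizing <grad, s>
        gamma = 2.0 / (t + 2.0)             # standard step-size schedule
        x = (1.0 - gamma) * x + gamma * s   # convex combination stays feasible
    return x
```

Because every iterate is a convex combination of l1-ball vertices, feasibility is maintained without any projection step, which is one reason greedy schemes of this kind scale well in high dimensions.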
Citation
- Greedy Algorithms for Structurally Constrained High Dimensional Problems (pdf, software)
A. Tewari, P. Ravikumar, I. Dhillon.
In Neural Information Processing Systems (NIPS), December 2011.