Abstract: We present a new algorithm for minimizing a
convex loss function subject to regularization.
Our framework applies to numerous problems
in machine learning and statistics; notably, for
sparsity-promoting regularizers such as ℓ1 or
ℓ1,∞ norms, it enables efficient computation of
sparse solutions. Our approach is based on the
trust-region framework with nonsmooth objectives,
which allows us to build on known results
to provide a convergence analysis. We avoid
the computational overheads associated with the
conventional Hessian approximation used by
trust-region methods by instead using a simple separable quadratic approximation. This approximation also enables the use of proximity operators for tackling nonsmooth regularizers. We illustrate the versatility of the resulting algorithm by specializing it to three mixed-norm regression problems: group lasso [36], group logistic regression [21], and multi-task lasso [19]. We experiment with both synthetic and real-world large-scale data; our method proves competitive, robust, and scalable.
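To make the central idea concrete, below is a minimal, illustrative Python sketch of one such step for an ℓ1 regularizer; it is not the authors' implementation. It shows how a separable quadratic model reduces the trust-region subproblem to coordinate-wise soft-thresholding (the ℓ1 proximity operator), here with an ℓ∞-ball trust region so the subproblem stays separable. The diagonal curvature d, the trust-region radius delta, and all function names are assumptions made for illustration.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximity operator of tau * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def separable_tr_step(x, grad, d, lam, delta):
        # Minimize the separable quadratic model
        #   m(p) = grad^T p + 0.5 * sum_i d_i * p_i^2 + lam * ||x + p||_1
        # subject to ||p||_inf <= delta. With z = x + p, the
        # unconstrained minimizer is the l1 prox of a scaled
        # gradient step, computed coordinate-wise:
        z = soft_threshold(x - grad / d, lam / d)
        p = z - x
        # Each one-dimensional piece is convex, so projecting onto
        # [-delta, delta] gives the exact constrained minimizer.
        return x + np.clip(p, -delta, delta)

    # Example: one step on a least-squares loss 0.5*||A x - b||^2,
    # using the squared column norms of A as a cheap diagonal
    # curvature estimate (an assumption for this sketch).
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
    x = np.zeros(20)
    grad = A.T @ (A @ x - b)
    d = (A * A).sum(axis=0)
    x = separable_tr_step(x, grad, d, lam=0.1, delta=1.0)

In a full trust-region loop one would accept or reject each step based on the ratio of actual to predicted decrease and adjust delta accordingly; group-lasso-style variants would swap in the corresponding block soft-thresholding prox.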
Download: pdf
Citation
- A Scalable Trust-region Algorithm with Application to Mixed-Norm Regression (pdf, software)
D. Kim, S. Sra, I. Dhillon.
In International Conference on Machine Learning (ICML), pp. 519-526, June 2010.