Abstract: In this paper, we study low-rank matrix nearness problems, with a focus on learning low-rank positive semidefinite (kernel) matrices for machine learning applications. Existing algorithms for learning kernel matrices often scale poorly, with running times that are cubic in the number of data points; we propose efficient algorithms that scale linearly in the number of data points and quadratically in the rank of the input matrix. We employ Bregman matrix divergences as the measures of nearness; these divergences are natural for learning low-rank kernels since they preserve rank as well as positive semidefiniteness. Special cases of our framework yield faster algorithms for various existing learning problems, and experimental results demonstrate that our algorithms can effectively learn both low-rank and full-rank kernel matrices.
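The Bregman matrix divergences most commonly used for kernel learning are the LogDet (Burg) divergence and the von Neumann divergence. The Python sketch below is only a minimal illustration of these two measures on small dense positive definite matrices (the example matrices and dense computations are assumptions for demonstration, not the authors' released software); the paper's algorithms avoid such cubic-cost dense operations by working with low-rank factored matrices.

```python
# Minimal sketch: Bregman matrix divergences on dense positive definite matrices.
import numpy as np
from scipy.linalg import logm

def logdet_divergence(X, Y):
    """LogDet (Burg) divergence: tr(X Y^{-1}) - log det(X Y^{-1}) - n."""
    n = X.shape[0]
    M = X @ np.linalg.inv(Y)
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - n

def von_neumann_divergence(X, Y):
    """von Neumann divergence: tr(X log X - X log Y - X + Y)."""
    return np.trace(X @ logm(X) - X @ logm(Y) - X + Y).real

# Example on two random positive definite matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); X = A @ A.T + np.eye(5)
B = rng.standard_normal((5, 5)); Y = B @ B.T + np.eye(5)
print(logdet_divergence(X, Y), von_neumann_divergence(X, Y))
```

Both divergences are zero when X equals Y and finite only when X stays in the range (and, for the LogDet case, the rank) of Y, which is why they are natural measures for rank-preserving kernel learning.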
Download: pdf
Citation
- Low-Rank Kernel Learning with Bregman Matrix Divergences (pdf, software)
B. Kulis, M. Sustik, I. Dhillon.
Journal of Machine Learning Research (JMLR) 10, pp. 341-376, February 2009.