Large-Scale Recommender Systems
Project Summary
Low-rank matrix factorization in the presence of missing values has become one of the most popular techniques for estimating dyadic interactions between entities in applications such as friendship prediction in social networks (e.g., Facebook) and preference estimation in recommender systems (e.g., Netflix). Although existing methods such as alternating least squares (ALS) and stochastic gradient descent (SGD) address this problem, scalable computation remains the main challenge when the matrix contains millions of rows/columns and billions of observed entries. In this project, we are working on various approaches to different variants of the low-rank matrix factorization problem.

Project Description
The aim of this project is to develop efficient parallel distributed algorithms for matrix completion. We are specifically interested in solving industrial-scale matrix factorization problems on commodity hardware with limited computing power, memory, and interconnect speed, such as the machines found in data centers. The widespread availability of cloud computing platforms such as Amazon Web Services (AWS) makes the deployment of such systems feasible. We have designed the following approaches to large-scale matrix factorization, which are shown to be faster and more scalable than other state-of-the-art methods.
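All of the methods below target variants of the same underlying optimization problem. One common formulation (the symbols here are generic, and the exact regularization weighting varies across the papers) is

```latex
\min_{W \in \mathbb{R}^{m \times k},\; H \in \mathbb{R}^{n \times k}}
\sum_{(i,j) \in \Omega} \bigl( A_{ij} - \mathbf{w}_i^{\top} \mathbf{h}_j \bigr)^2
+ \lambda \bigl( \lVert W \rVert_F^2 + \lVert H \rVert_F^2 \bigr)
```

where Ω is the set of observed entries of the m × n matrix A, w_i and h_j denote the i-th row of W and the j-th row of H, and λ > 0 is a regularization parameter.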
- Parallel coordinate descent solver (CCD++): We studied coordinate descent methods for matrix factorization and proposed a new algorithm called CCD++. It not only has a more efficient update rule than ALS but also enjoys faster and more stable convergence than SGD. In addition, CCD++ is easily parallelized on both multi-core and distributed systems. On a distributed HPC cluster with 256 computing nodes, CCD++ takes only 16 seconds to factorize a 20M × 10M matrix with 14.6 billion observed entries, 40 times faster than a parallel SGD implementation. (A minimal sketch of the rank-one update scheme appears after this list.)
- Nonblocking and asynchronous stochastic gradient solver (NOMAD): To reduce the unnecessary waiting caused by uneven load balancing in traditional approaches, we also studied asynchronous methods for matrix factorization and developed an efficient parallel distributed algorithm named NOMAD. The algorithm uses the "owner computes" rule to avoid conflicts without explicit locking. To the best of our knowledge, it is the first parallel stochastic gradient solver for matrix factorization with 1) a lock-free implementation, 2) non-blocking communication, 3) fully asynchronous computation, and 4) a serializable update sequence. Our empirical evaluation demonstrated that NOMAD not only performs well on Amazon Web Services (AWS) but also outperforms state-of-the-art algorithms on an HPC cluster (Stampede). (A toy sketch of the token-passing scheme follows the CCD++ sketch below.)
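To make the CCD++ update rule concrete, here is a minimal single-machine NumPy sketch of the rank-one coordinate descent scheme applied to the objective above. This is not the published parallel implementation; the function name, dense-matrix representation, and default parameters are all illustrative choices.

```python
# Minimal single-machine sketch of CCD++-style rank-one updates.
# Not the authors' implementation; a real system uses sparse storage
# and parallelizes the 1-D updates across rows/columns.
import numpy as np

def ccd_pp(A, mask, k=10, lam=0.05, outer_iters=10, inner_iters=3, seed=0):
    """Factorize A ~ W @ H.T using only entries where mask is True."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    M = mask.astype(float)                 # 1.0 on observed entries, else 0.0
    W = rng.standard_normal((m, k)) * 0.01
    H = rng.standard_normal((n, k)) * 0.01
    R = M * (A - W @ H.T)                  # residual, kept only on observed entries

    for _ in range(outer_iters):
        for t in range(k):                 # one rank-one subproblem per factor
            u, v = W[:, t].copy(), H[:, t].copy()
            Rt = R + M * np.outer(u, v)    # add factor t back into the residual
            for _ in range(inner_iters):   # alternating closed-form 1-D updates
                u = (Rt @ v) / (lam + M @ (v ** 2))
                v = (Rt.T @ u) / (lam + M.T @ (u ** 2))
            R = Rt - M * np.outer(u, v)    # remove the updated factor
            W[:, t], H[:, t] = u, v
    return W, H

# Toy usage: recover a rank-5 matrix from ~30% of its entries.
rng = np.random.default_rng(1)
U, V = rng.standard_normal((200, 5)), rng.standard_normal((100, 5))
A = U @ V.T
mask = rng.random(A.shape) < 0.3
W, H = ccd_pp(A, mask, k=5)
rmse = np.sqrt((mask * (A - W @ H.T) ** 2).sum() / mask.sum())
print(f"observed-entry RMSE: {rmse:.4f}")
```

Each one-dimensional update has a closed form, which is what makes the per-update cost lower than ALS's k × k least-squares solves.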
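And here is a toy, thread-based illustration of NOMAD's "owner computes" token passing. The real system is a multi-machine implementation with non-blocking communication; this sketch (all names are mine, and CPython's GIL prevents true parallelism) only shows the protocol: rows of W are statically partitioned across workers, while each column variable H[j] circulates as a token, so exactly one worker may update it at any moment and no locks on model parameters are needed.

```python
# Toy thread-based sketch of NOMAD-style token passing.
# Not the authors' implementation; it only illustrates the protocol.
import queue
import threading
import time

import numpy as np

def nomad_sketch(A, mask, k=10, lam=0.05, lr=0.01,
                 n_workers=4, run_seconds=1.0, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.standard_normal((m, k)) * 0.01
    H = rng.standard_normal((n, k)) * 0.01
    row_owner = np.arange(m) % n_workers           # static partition of W's rows
    queues = [queue.Queue() for _ in range(n_workers)]
    for j in range(n):                             # each column of H is a token
        queues[j % n_workers].put(j)
    stop = threading.Event()

    def worker(w_id):
        my_rows = np.flatnonzero(row_owner == w_id)
        while not stop.is_set():
            try:
                j = queues[w_id].get(timeout=0.05) # exclusive right to update H[j]
            except queue.Empty:
                continue
            for i in my_rows:
                if mask[i, j]:
                    err = A[i, j] - W[i] @ H[j]    # SGD step on one observed entry
                    gi = err * H[j] - lam * W[i]
                    gj = err * W[i] - lam * H[j]
                    W[i] += lr * gi
                    H[j] += lr * gj
            queues[(w_id + 1) % n_workers].put(j)  # pass the token to a neighbor

    threads = [threading.Thread(target=worker, args=(w,))
               for w in range(n_workers)]
    for t in threads:
        t.start()
    time.sleep(run_seconds)                        # run on a wall-clock budget
    stop.set()
    for t in threads:
        t.join()
    return W, H
```

Because each H[j] sits in exactly one worker's queue at a time, every SGD step touches variables no other worker can concurrently write, which is what yields a serializable update sequence without locking.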
Main Publications
- Parallel Matrix Factorization for Recommender Systems (pdf, software)
H. Yu, C. Hsieh, S. Si, I. Dhillon.
Knowledge and Information Systems (KAIS) 41(3), pp. 793-819, December 2014.
- NOMAD: Non-locking, stOchastic Multi-machine algorithm for Asynchronous and Decentralized matrix completion (pdf, software)
H. Yun, H. Yu, C. Hsieh, S. Vishwanathan, I. Dhillon.
In International Conference on Very Large Data Bases (VLDB), pp. 975-986, July 2014.
- Scalable Coordinate Descent Approaches to Parallel Matrix Factorization for Recommender Systems (pdf, software)
H. Yu, C. Hsieh, S. Si, I. Dhillon.
In IEEE International Conference on Data Mining (ICDM), pp. 765-774, December 2012. (Oral)
- Fast Coordinate Descent Methods with Variable Selection for Non-negative Matrix Factorization (pdf, software)
C. Hsieh, I. Dhillon.
In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 1064-1072, August 2011.