Divide & Conquer Methods for Big Data Analytics
Project Summary
Solving large-scale machine learning problems has recently become a very important issue. Many state-of-the-art approaches for such problems rely on numerical optimization; however, because of scalability issues, classical optimization methods usually cannot be applied directly to large-scale problems. In this project, we apply a divide-and-conquer scheme to handle big data. In the divide step, the large-scale problem is decomposed into several smaller subproblems. Each subproblem is defined only on a subset of the data and can be solved efficiently. The solutions to the subproblems are then combined to give a solution to the original problem. However, developing a divide-and-conquer algorithm is usually nontrivial: a good algorithm should partition the data/variables in such a way that (1) the subproblems can be solved efficiently, and (2) the solutions to the subproblems can be easily combined to give the solution to the original problem. We demonstrate the efficiency of the divide-and-conquer scheme on several important machine learning topics: classification, kernel approximation, and link prediction.
Project Description
1. A Divide-and-Conquer Solver for Kernel Support Vector Machines
The kernel support vector machine (SVM) is one of the most widely used classification methods; however, the amount of computation required becomes the bottleneck when facing millions of samples. We propose and analyze a novel divide-and-conquer solver for kernel SVMs (DC-SVM).
- Divide Step: In the divide step, we partition the kernel SVM problem into smaller subproblems by clustering the data, so that each subproblem can be solved independently and efficiently. Guided by our theoretical analysis, we run kernel k-means on a subsample of the data to obtain the partition (a code sketch follows this list).
- Conquer Step: In the conquer step, the local solutions from the subproblems are used to initialize a global coordinate descent solver, which converges quickly as suggested by our analysis.
- Early Termination: Training can be stopped early and the l-th level solution used directly for prediction, which speeds up both training and prediction; the resulting accuracy is usually close to that of the exact kernel SVM solution.
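To make the scheme concrete, below is a minimal sketch of the divide step and of lowest-level early prediction. It makes simplifying assumptions: plain k-means in input space stands in for kernel k-means on subsamples, each subproblem is solved with an off-the-shelf RBF-kernel SVM, and the conquer step (warm-starting a global coordinate descent solver with the stacked local dual variables) is omitted because standard SVM libraries do not expose such an initialization. The helper names and parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def dcsvm_divide(X, y, n_clusters=4, gamma=1.0, C=1.0, seed=0):
    """Divide step: partition the data and train one kernel SVM per cluster.

    Assumes every cluster contains examples from both classes.
    """
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(X)
    models = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        # Each subproblem touches only its own subset of the data.
        models.append(SVC(kernel="rbf", gamma=gamma, C=C).fit(X[idx], y[idx]))
    return km, models

def dcsvm_early_predict(km, models, X_test):
    """Early prediction: route each test point to the SVM of its nearest cluster."""
    assign = km.predict(X_test)
    y_pred = np.empty(len(X_test), dtype=models[0].classes_.dtype)
    for c, svm in enumerate(models):
        mask = assign == c
        if mask.any():
            y_pred[mask] = svm.predict(X_test[mask])
    return y_pred
```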
Fast Training. By extending this idea, we develop a multilevel Divide-and-Conquer SVM algorithm which outperforms state-of-the-art methods in terms of training speed, testing accuracy, and memory usage. As an example, on the covtype dataset with half a million samples, DC-SVM is 7 times faster than LIBSVM in obtaining the exact SVM solution (to within $10^{-6}$ relative error), which achieves 96.15% prediction accuracy. Moreover, with our proposed early prediction strategy, DC-SVM achieves about 96% accuracy in only 12 minutes, which is more than 100 times faster than LIBSVM.
Fast Prediction. We further improve the prediction speed of kernel machines by combining DC-SVM with a “pseudo landmark points” technique that reduces prediction error without increasing the prediction cost. As a result, on the Covertype dataset with half a million samples, DC-SVM requires only 10 minutes of training time and just 22 inner products to predict each sample (100 times faster than state-of-the-art kernel SVM solvers in both training and prediction time) while achieving near-optimal prediction accuracy.
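As a rough illustration of why a small set of landmark points makes prediction cheap, the sketch below re-fits a trained binary kernel SVM's decision function on landmark points via least squares, so that predicting one sample costs exactly one kernel evaluation per landmark. This is a generic reduced-set construction, not the paper's exact pseudo-landmark scheme; the function names, the choice of k-means centers as landmarks, and the default parameters are all illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def fit_landmark_predictor(svm, X_train, n_landmarks=50, gamma=1.0, seed=0):
    """Approximate a trained binary SVM's decision function on a few landmarks."""
    landmarks = KMeans(n_clusters=n_landmarks, random_state=seed).fit(X_train).cluster_centers_
    Phi = rbf_kernel(X_train, landmarks, gamma=gamma)   # n_samples x n_landmarks
    target = svm.decision_function(X_train)             # exact decision values
    coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return landmarks, coef

def landmark_predict(X_test, landmarks, coef, gamma=1.0):
    # Per-sample cost: one kernel evaluation per landmark (n_landmarks inner products).
    return np.sign(rbf_kernel(X_test, landmarks, gamma=gamma) @ coef)
```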
2. Memory Efficient Kernel Approximation
The scalability of kernel machines is a big challenge when facing millions of samples, due to the storage and computation costs of large kernel matrices, which are usually dense. Recently, many papers have suggested tackling this problem by using a low-rank approximation of the kernel matrix. In this work, we first observe that the structure of shift-invariant kernels changes from low-rank to block-diagonal (without any low-rank structure) as the scale parameter varies. Based on this observation, we propose a new kernel approximation algorithm, Memory Efficient Kernel Approximation (MEKA), which exploits both the low-rank and the clustering structure of the kernel matrix.
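The sketch below illustrates the structure that such an approximation exploits, under simplifying assumptions: plain k-means provides the clustering, each diagonal block gets a Nyström-style basis from columns sampled within its own cluster, and small link matrices tie the bases together so that off-diagonal blocks are also represented. For clarity the link matrices are fit against exact off-diagonal blocks here, whereas the actual method uses only sampled entries to stay memory-efficient; function names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def meka_sketch(X, n_clusters=4, rank=20, gamma=0.1, seed=0):
    """Return cluster index lists, per-cluster bases, and link matrices."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X)
    blocks = [np.where(labels == c)[0] for c in range(n_clusters)]

    # Rank-r Nystroem-style basis for each diagonal block: columns sampled
    # inside the cluster, so each basis needs only |cluster| x rank memory.
    bases = []
    for idx in blocks:
        cols = rng.choice(idx, size=min(rank, len(idx)), replace=False)
        bases.append(rbf_kernel(X[idx], X[cols], gamma=gamma))

    # Small rank x rank "link" matrices couple the cluster bases so that
    # off-diagonal blocks are represented as well (fit against the exact
    # blocks here only for readability).
    links = {}
    for s in range(n_clusters):
        Ps = np.linalg.pinv(bases[s])
        for t in range(n_clusters):
            K_st = rbf_kernel(X[blocks[s]], X[blocks[t]], gamma=gamma)
            links[(s, t)] = Ps @ K_st @ np.linalg.pinv(bases[t]).T
    return blocks, bases, links

def reconstruct_block(bases, links, s, t):
    """Approximate kernel block K(cluster s, cluster t)."""
    return bases[s] @ links[(s, t)] @ bases[t].T
```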
We show that the resulting algorithm outperforms state-of-the-art low-rank kernel approximation methods in terms of speed, approximation error, and memory usage. As an example, on the MNIST dataset with two million samples, our method takes 550 seconds on a single machine and uses less than 500 MBytes of memory to achieve 0.2313 test RMSE for kernel ridge regression, while the standard Nyström approximation takes more than 2700 seconds and uses more than 2 GBytes of memory on the same problem to achieve 0.2318 test RMSE.
3. Multi-Scale Spectral Decomposition
Computing the $k$ dominant eigenvalues and eigenvectors of massive graphs is a key operation in numerous machine learning applications; however, popular solvers suffer from slow convergence, especially when $k$ is reasonably large. In this project, we propose and analyze a novel multi-scale spectral decomposition method (MSEIGS), which first clusters the graph into smaller clusters whose spectral decomposition can be computed efficiently and independently.
We show, both theoretically and empirically, that the union of all clusters' subspaces has significant overlap with the dominant subspace of the original graph, provided that the graph is clustered appropriately. Thus, the eigenvectors of the clusters serve as good initializations for a block Lanczos algorithm that computes the spectral decomposition of the original graph. We further use hierarchical clustering to speed up the computation and adopt a fast early-termination strategy to compute good-quality approximations.
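A simplified sketch of this initialization idea is given below. Cluster labels are assumed to come from any graph clustering routine (e.g. a METIS/GRACLUS-style partitioner), and SciPy's LOBPCG is used as a stand-in for the block Lanczos solver analyzed in the paper; the function name and defaults are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import eigsh, lobpcg

def mseigs_sketch(A, labels, k=50, maxiter=40):
    """Approximate the top-k eigenpairs of a sparse symmetric matrix A."""
    A = A.astype(np.float64)
    n = A.shape[0]
    clusters = [np.where(labels == c)[0] for c in np.unique(labels)]
    per_cluster = int(np.ceil(k / len(clusters)))

    # Independent (embarrassingly parallel) spectral decompositions of the
    # cluster subgraphs provide the initial subspace.
    init_blocks = []
    for idx in clusters:
        kc = min(per_cluster, len(idx) - 1)
        if kc < 1:
            continue
        sub = A[idx, :][:, idx]
        _, V = eigsh(sub, k=kc, which="LA")
        block = np.zeros((n, kc))
        block[idx, :] = V              # embed cluster eigenvectors into R^n
        init_blocks.append(block)
    X0 = np.hstack(init_blocks)

    # Refine the union of the cluster subspaces on the full graph; LOBPCG
    # plays the role of the block Lanczos solver here.
    vals, vecs = lobpcg(A, X0, maxiter=maxiter, largest=True)
    order = np.argsort(vals)[::-1][:k]
    return vals[order], vecs[:, order]
```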
Our method outperforms widely used solvers in terms of convergence speed and approximation quality. Furthermore, it is naturally parallelizable and exhibits significant speedups in shared-memory parallel settings. For example, on a graph with more than 82 million nodes and 3.6 billion edges, MSEIGS takes less than 3 hours on a single core, while Randomized SVD takes more than 6 hours to obtain a similar approximation of the top 50 eigenvectors. Using 16 cores, we can reduce this time to less than 40 minutes.
4. Multi-Scale Link Prediction
An important problem in social network analysis is proximity estimation, which infers the “closeness” of different users. Proximity measures quantify the interaction between users based on structural properties of the graph, such as the number of common friends. A key application of proximity measures in social networks is link prediction, a core problem in social network analysis. Effective proximity measures, such as path-based methods (e.g., Katz or rooted PageRank), are well known for their high computational complexity and memory usage. One common remedy is to perform dimensionality reduction on the original graph and compute proximities from its low-rank approximation.
We propose a robust, flexible, and scalable framework for link prediction in social networks, which we call multi-scale link prediction (MSLP). MSLP exploits low-rank approximations of the social network at different scales by efficiently combining information from multiple levels of a cluster hierarchy. Higher levels in the hierarchy present a more global view, while lower levels focus on more localized information.
MSLP first performs hierarchical clustering on the graph using a fast graph clustering algorithm, and then computes a multi-scale approximation based on the resulting hierarchy. Specifically, we develop a fast tree-structured approximation algorithm that computes the subspace of a parent cluster quickly from the subspaces of its child clusters. Since each level has its own approximation, each level yields different approximate proximity scores; MSLP combines the scores from all levels and makes the final prediction based on the combined scores. As a result, MSLP captures both local and global information about the network. Experimental results on real-world datasets with more than a million nodes show the superior performance and scalability of our method.
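The sketch below illustrates the multi-scale scoring idea under simplifying assumptions: the hierarchy labels and level-combination weights are given as inputs, each level uses a block-wise rank-r eigendecomposition of its cluster subgraphs, Katz is used as the proximity measure, and the score matrix is kept dense only for readability. The tree-structured parent-from-child subspace computation and the learned combination from the paper are not reproduced here, and the function names are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def level_katz(A, labels, rank=20, beta=0.005):
    """Block-wise rank-r Katz scores for one level of the hierarchy."""
    A = A.astype(np.float64)
    n = A.shape[0]
    scores = np.zeros((n, n))        # dense only for readability of the sketch
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        r = min(rank, len(idx) - 1)
        if r < 1:
            continue
        sub = A[idx, :][:, idx]
        lam, U = eigsh(sub, k=r, which="LM")
        # Low-rank Katz: sum_{i>=1} (beta*A)^i ~ U diag(beta*lam/(1-beta*lam)) U^T,
        # valid as long as beta*|lam| < 1 for every retained eigenvalue.
        scores[np.ix_(idx, idx)] = U @ np.diag(beta * lam / (1.0 - beta * lam)) @ U.T
    return scores

def mslp_scores(A, hierarchy_labels, weights, rank=20, beta=0.005):
    """Combine Katz scores across hierarchy levels (coarse to fine)."""
    combined = np.zeros((A.shape[0], A.shape[0]))
    for labels, w in zip(hierarchy_labels, weights):
        combined += w * level_katz(A, labels, rank=rank, beta=beta)
    return combined
```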
Main Publications
- Fast Prediction for Large-Scale Kernel Machines (pdf, software)
C. Hsieh, S. Si, I. Dhillon.
In Neural Information Processing Systems (NIPS), pp. 3689-3697, December 2014.
- Multi-Scale Spectral Decomposition of Massive Graphs (pdf, software)
S. Si, D. Shin, I. Dhillon, B. Parlett.
In Neural Information Processing Systems (NIPS), pp. 2798-2806, December 2014.
- A Divide-and-Conquer Solver for Kernel Support Vector Machines (pdf, software, code)
C. Hsieh, S. Si, I. Dhillon.
In International Conference on Machine Learning (ICML), pp. 566-574, June 2014.
- Memory Efficient Kernel Approximation (pdf, slides, software, code)
S. Si, C. Hsieh, I. Dhillon.
In International Conference on Machine Learning (ICML), pp. 701-709, June 2014.
- Multi-Scale Link Prediction (pdf, software)
D. Shin, S. Si, I. Dhillon.
In ACM Conference on Information and Knowledge Management (CIKM), pp. 215-224, October 2012.
- A Divide-and-Conquer Method for Sparse Inverse Covariance Estimation (pdf, software)
C. Hsieh, I. Dhillon, P. Ravikumar, A. Banerjee.
In Neural Information Processing Systems (NIPS), pp. 2339-2347, December 2012.