Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of l1-Regularized MLE

Pradeep Ravikumar, Garvesh Raskutti, Martin Wainwright, Bin Yu

Abstract:   We consider the problem of estimating the graph structure associated with a Gaussian Markov random field (GMRF) from i.i.d. samples. We study the performance of the ℓ1-regularized maximum likelihood estimator in the high-dimensional setting, where the number of nodes in the graph p, the number of edges in the graph s, and the maximum node degree d are allowed to grow as a function of the number of samples n. Our main result provides sufficient conditions on (n, p, d) for the ℓ1-regularized MLE to recover all the edges of the graph with high probability. Under some conditions on the model covariance, we show that model selection can be achieved for sample sizes n = Ω(d² log(p)), with the error decaying as O(exp(−c log(p))) for some constant c. We illustrate our theoretical results via simulations and show good correspondence between the theoretical predictions and behavior in simulations.
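As a concrete illustration of the estimator studied in the paper, the following sketch uses scikit-learn's `GraphicalLasso` (one standard implementation of the ℓ1-regularized Gaussian MLE; the paper itself does not prescribe this library). The graph, sample size, and regularization value are illustrative choices, not taken from the paper:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Illustrative setup (not from the paper): a chain graph 0-1-2-3-4
# encoded as a sparse precision (inverse covariance) matrix.
rng = np.random.default_rng(0)
p = 5
Theta = np.eye(p)
for i in range(p - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = 0.4

Sigma = np.linalg.inv(Theta)  # true covariance of the GMRF
n = 2000                      # sample size
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# l1-regularized MLE of the precision matrix (graphical lasso);
# alpha is the l1 penalty weight, chosen here ad hoc.
model = GraphicalLasso(alpha=0.05).fit(X)
Theta_hat = model.precision_

# Edge (i, j) is declared present when the estimated precision
# entry is (numerically) nonzero.
edges = {(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(Theta_hat[i, j]) > 1e-3}
print(sorted(edges))
```

With enough samples relative to d² log(p), the recovered edge set should coincide with the true chain, matching the model-selection consistency the paper establishes.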

Download: pdf

Citation

  • Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of l1-Regularized MLE (pdf, software)
    P. Ravikumar, G. Raskutti, M. Wainwright, B. Yu.
    In Neural Information Processing Systems (NIPS), December 2008.
