Abstract: We study the general class of estimators for graphical model structure that optimize an l1-regularized approximate log-likelihood, where the approximate likelihood uses tractable variational approximations of the partition function. We provide a message-passing algorithm that directly computes the l1-regularized approximate MLE. Further, for certain reweighted entropy approximations to the partition function, we show that, surprisingly, the l1-regularized approximate MLE has a closed form, so that many iterations of approximate inference and message-passing are no longer needed. Lastly, we analyze this general class of estimators for graph structure recovery, i.e., its sparsistency, and show that it is indeed sparsistent under certain conditions.
- On the Use of Variational Inference for Learning Discrete Graphical Models
E. Yang, P. Ravikumar.
In International Conference on Machine Learning (ICML), 2011. (Oral)
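The abstract's core object, an l1-regularized (approximate) MLE, can be illustrated with a minimal proximal-gradient (ISTA) sketch. This is not the paper's message-passing algorithm: the smooth loss `f` below is a hypothetical stand-in for the variational approximate negative log-likelihood, and the toy quadratic loss in the demo is chosen so that the solution is a simple soft-thresholding, mirroring the closed-form phenomenon the abstract describes.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_prox_gradient(grad, theta0, lam, step=0.1, n_iter=500):
    """Generic proximal-gradient (ISTA) loop for
        min_theta  f(theta) + lam * ||theta||_1,
    where `grad` returns the gradient of the smooth part f.
    In the paper's setting f would be the variational approximate
    negative log-likelihood; here it is left abstract (an assumption
    for illustration)."""
    theta = theta0.copy()
    for _ in range(n_iter):
        theta = soft_threshold(theta - step * grad(theta), step * lam)
    return theta

if __name__ == "__main__":
    # Toy smooth loss f(theta) = 0.5 * ||theta - b||^2: its l1-regularized
    # minimizer is exactly soft_threshold(b, lam), so the iterates should
    # converge to that closed-form solution.
    b = np.array([3.0, -0.5, 0.2, -2.0])
    grad = lambda th: th - b
    est = l1_prox_gradient(grad, np.zeros_like(b), lam=1.0, step=0.5, n_iter=200)
    print(np.round(est, 3))  # entrywise equal to soft_threshold(b, 1.0)
```

With a reweighted entropy approximation of the kind the abstract mentions, the analogous estimator would skip the loop entirely and apply a single thresholding-type map, which is what makes the closed-form result computationally attractive.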