Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models
Authors: Mohammad E. Khan, Aleksandr Aravkin, Michael Friedlander and Matthias Seeger
Conference: Proceedings of the 30th International Conference on Machine Learning (ICML-13)
Year: 2013
Pages: 951-959
Abstract: Latent Gaussian models (LGMs) are widely used in statistics and machine learning. Bayesian inference in non-conjugate LGMs is difficult due to intractable integrals involving the Gaussian prior and non-conjugate likelihoods. Algorithms based on Variational Gaussian (VG) approximations are widely employed since they strike a favorable balance between accuracy, generality, speed, and ease of use. However, the structure of the optimization problems associated with them remains poorly understood, and standard solvers take too long to converge. In this paper, we derive a novel dual variational inference approach that exploits the convexity of the VG approximation. As a result, we obtain an algorithm that solves a convex optimization problem, reduces the number of variational parameters, and converges much faster than previous methods. Using real-world data, we demonstrate these advantages on a variety of LGMs, including Gaussian process classification and latent Gaussian Markov random fields.
[pdf] [BibTeX]
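To fix ideas about the VG objective the abstract refers to, the sketch below implements plain variational Gaussian inference for Gaussian process classification with a logistic likelihood. It is illustrative only, not the paper's dual algorithm: it restricts the covariance to the stationary-point form V = (K^{-1} + diag(lam))^{-1} with a diagonal lam, which is the structure that lets a dual-style parameterization cut the variational parameters from O(n^2) down to O(n). All names (fit_vg_gp, vg_lower_bound) and the toy data are assumptions made for illustration.

```python
# Minimal sketch of variational Gaussian (VG) inference for GP binary
# classification with a logistic likelihood. NOT the paper's dual algorithm:
# it maximizes the primal VG lower bound under the reduced parameterization
# V = (K^{-1} + diag(lam))^{-1}, m = K @ alpha, so only 2n parameters are used.
import numpy as np
from scipy.optimize import minimize

def vg_lower_bound(theta, K, y, nodes, weights):
    """Negative VG lower bound; theta packs alpha and log(lam)."""
    n = len(y)
    alpha, log_lam = theta[:n], theta[n:]
    lam = np.exp(log_lam)                    # per-site precisions, lam >= 0
    m = K @ alpha                            # variational mean
    Kinv = np.linalg.inv(K)                  # fine for a small-n sketch
    V = np.linalg.inv(Kinv + np.diag(lam))   # stationary-point covariance form
    v = np.diag(V)                           # marginal variances
    # Expected log-likelihood E_q[log sigmoid(y_n f_n)] via Gauss-Hermite
    # quadrature; only the marginal means/variances are needed.
    z = m[:, None] + np.sqrt(2.0 * v)[:, None] * nodes[None, :]
    ell = (weights[None, :] * (-np.logaddexp(0.0, -y[:, None] * z))).sum(1)
    ell = ell / np.sqrt(np.pi)
    # KL( N(m, V) || N(0, K) )
    _, logdet_V = np.linalg.slogdet(V)
    _, logdet_K = np.linalg.slogdet(K)
    kl = 0.5 * (np.trace(Kinv @ V) + m @ Kinv @ m - n + logdet_K - logdet_V)
    return -(ell.sum() - kl)

def fit_vg_gp(K, y, n_quad=20):
    """Fit the VG approximation by maximizing the lower bound."""
    n = len(y)
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    res = minimize(vg_lower_bound, np.zeros(2 * n),
                   args=(K, y, nodes, weights), method="L-BFGS-B")
    alpha, lam = res.x[:n], np.exp(res.x[n:])
    m = K @ alpha
    V = np.linalg.inv(np.linalg.inv(K) + np.diag(lam))
    return m, V

# Toy usage: 1-D inputs, RBF kernel, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 20)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2) + 1e-6 * np.eye(20)
y = np.sign(np.sin(X) + 0.1 * rng.standard_normal(20))
m, V = fit_vg_gp(K, y)
print("posterior mean of f:", np.round(m, 2))
```

The sketch optimizes the primal bound directly; the paper's contribution is to work instead with a convex dual in the O(n) dual variables, avoiding the primal's non-convexity and slow solvers.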
