Pathfinder: Quasi-Newton Variational Inference
The speaker will introduce the Pathfinder variational inference algorithm, which was motivated by finding good initializations for Markov chain Monte Carlo (i.e., solving the “burn-in” problem). It works by running quasi-Newton optimization (specifically, L-BFGS) on the target
posterior (not the stochastic ELBO, as in other black-box variational inference algorithms). At each iteration of the optimization, Pathfinder defines a variational approximation to the posterior in the form of a multivariate normal distribution whose covariance is the low-rank plus diagonal inverse Hessian approximation maintained by the optimizer. It then selects the approximation along the optimization path with the lowest estimated KL divergence to the true posterior. Multi-path Pathfinder runs multiple instances of Pathfinder in parallel and then uses importance resampling to produce a final set of draws. The single-path algorithm provides much better approximations (measured by Wasserstein distance or KL divergence) than the previous state-of-the-art mean-field or full-rank black-box variational
inference schemes, and the multi-path algorithm does much better still on posteriors with multiple modes or complex geometry. The computational bottleneck is estimating the KL divergence via the evidence lower bound (ELBO), but this step is embarrassingly parallelizable. Even without parallelization, Pathfinder is one to three orders of magnitude faster than state-of-the-art black-box variational inference or than using the no-U-turn Hamiltonian Monte Carlo sampler for warmup. It is also much more robust. We will show the
results of evaluating on dozens of different models in the posteriordb test suite and also a range of high-dimensional and multimodal problems. This is joint work with Lu Zhang (first author who did most of the hard work), Aki Vehtari, and Andrew Gelman.
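
To make the structure concrete, here is a minimal, self-contained Python sketch of the single-path and multi-path procedures described above. It is illustrative only, not the paper's implementation: the target is a toy 2-D Gaussian, the per-iterate covariance comes from a finite-difference Hessian rather than the low-rank plus diagonal inverse Hessian factors maintained by L-BFGS, plain importance resampling stands in for the paper's resampling step, and all names (log_p, single_path_pathfinder, n_elbo_draws, and so on) are made up for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy target: a correlated 2-D Gaussian standing in for a real log posterior.
PREC = np.array([[2.0, -1.0], [-1.0, 2.0]])

def log_p(theta):
    return -0.5 * theta @ PREC @ theta

def grad_log_p(theta):
    return -PREC @ theta

def numerical_hessian(f, x, eps=1e-4):
    """Finite-difference Hessian; the real algorithm instead reuses the
    low-rank plus diagonal inverse Hessian factors maintained by L-BFGS."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)) / eps**2
    return 0.5 * (H + H.T)

def single_path_pathfinder(init, n_elbo_draws=30):
    """Optimize -log_p with L-BFGS, build a Gaussian at each iterate, and keep
    the iterate whose Gaussian has the highest estimated ELBO (equivalently,
    the lowest estimated KL divergence to the posterior)."""
    iterates = [np.asarray(init, dtype=float)]
    minimize(lambda t: -log_p(t), init, jac=lambda t: -grad_log_p(t),
             method="L-BFGS-B",
             callback=lambda xk: iterates.append(np.array(xk)))
    best = None
    for mu in iterates:
        try:
            cov = np.linalg.inv(numerical_hessian(lambda t: -log_p(t), mu))
            L = np.linalg.cholesky(cov)
        except np.linalg.LinAlgError:
            continue  # skip iterates where local curvature is not positive definite
        # Monte Carlo ELBO estimate: E_q[log p(theta)] - E_q[log q(theta)].
        draws = mu + rng.standard_normal((n_elbo_draws, mu.size)) @ L.T
        log_q = multivariate_normal(mean=mu, cov=cov).logpdf(draws)
        elbo = np.mean([log_p(d) for d in draws]) - np.mean(log_q)
        if best is None or elbo > best["elbo"]:
            best = {"elbo": elbo, "mu": mu, "cov": cov,
                    "draws": draws, "log_q": log_q}
    return best

def multi_path_pathfinder(n_paths=4, n_draws=100):
    """Run several single-path fits from random initializations, pool their
    draws, and importance-resample with weights proportional to p/q (a
    simplified stand-in for the resampling step described above)."""
    paths = [single_path_pathfinder(rng.normal(scale=3.0, size=2))
             for _ in range(n_paths)]
    paths = [p for p in paths if p is not None]
    all_draws = np.vstack([p["draws"] for p in paths])
    log_w = np.array([log_p(d) for d in all_draws]) \
            - np.concatenate([p["log_q"] for p in paths])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(len(all_draws), size=n_draws, replace=True, p=w)
    return all_draws[idx]

if __name__ == "__main__":
    print(multi_path_pathfinder().mean(axis=0))  # should be close to the origin
```

Swapping in a real model only requires replacing log_p and grad_log_p with that model's log density and gradient; everything else in the sketch is generic.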
Paper: https://arxiv.org/abs/2108.03782
R implementation and evaluation: https://github.com/LuZhangstat/Pathfinder
C++ implementation for Stan: https://github.com/stan-dev/stan/pull/3123