Abstract
Probabilistic programming refers to the idea of using standard programming constructs to specify probabilistic models and employing generic inference algorithms to answer various queries about these models. Although this idea itself is not new and was, in fact, explored by several programming-language researchers in the early 2000s, it is only in the last few years that probabilistic programming has attracted substantial attention among researchers in machine learning and programming languages, and that serious probabilistic programming languages (such as Anglican, Church, Infer.NET, PyMC, Stan, and Venture) have started to appear and to be taken up by a nontrivial number of users.
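To make this idea concrete, here is a minimal, hedged sketch in plain Python, not drawn from the talk or from any of the languages above: the model is ordinary code containing a random choice and some observed data, and a generic importance sampler answers a posterior query without knowing anything about the model's internals. The model, names, and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def coin_model():
    """Prior sample of a coin's bias plus the log-likelihood of observed flips."""
    bias = rng.beta(1.0, 1.0)                      # latent variable: uniform Beta(1,1) prior
    observed = np.array([1, 1, 0, 1, 1, 1, 0, 1])  # hypothetical observed coin flips
    log_lik = np.sum(observed * np.log(bias) + (1 - observed) * np.log(1 - bias))
    return bias, log_lik

def posterior_mean(model, num_samples=20000):
    """Generic self-normalised importance sampling with the prior as proposal."""
    samples, log_weights = zip(*(model() for _ in range(num_samples)))
    weights = np.exp(np.array(log_weights) - np.max(log_weights))
    return np.sum(np.array(samples) * weights) / np.sum(weights)

print(posterior_mean(coin_model))   # roughly 0.7, the exact Beta(7, 3) posterior mean
```

In an actual probabilistic programming language the separation is the same but far more general: the model is written once, and several inference back-ends can be applied to it unchanged.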
In this talk, I will describe a project that tries to address one of the most important challenges in probabilistic programming: how to build efficient inference algorithms. The concrete goal of the project is to develop efficient variational inference algorithms that work for any probabilistic model written in an expressive probabilistic programming language such as Anglican and that, at the same time, exploit the structure of these models automatically. My colleagues and I are still far from achieving this goal. So far we have tried to develop a black-box variational inference algorithm for general probabilistic programs, following the approach of Ranganath et al. During the talk, I will present the very preliminary but slightly unexpected results and observations from this project. This is joint work with Raphael Monat from ENS Lyon and Yee Whye Teh from Oxford.
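To give a flavour of the starting point, the following is a hedged sketch, not the talk's algorithm, of the score-function (REINFORCE) gradient estimator on which Ranganath et al.'s black-box variational inference is built, specialised to a toy problem: inferring a Gaussian mean with a Gaussian variational family. All names, data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)      # hypothetical observations from N(2, 1)

def log_joint(z):
    """log p(z) + log p(data | z) for a N(0, 10^2) prior on the unknown mean z."""
    return -0.5 * (z / 10.0) ** 2 - 0.5 * np.sum((data - z) ** 2)

q_mu, q_log_sigma = 0.0, 0.0              # parameters of the variational family N(q_mu, sigma^2)
lr, num_mc = 1e-3, 200                    # step size and Monte Carlo samples per gradient step

for step in range(2000):
    sigma = np.exp(q_log_sigma)
    z = rng.normal(q_mu, sigma, size=num_mc)             # draw samples from q
    log_q = -0.5 * ((z - q_mu) / sigma) ** 2 - q_log_sigma
    # Score of log q with respect to each variational parameter.
    score_mu = (z - q_mu) / sigma ** 2
    score_log_sigma = ((z - q_mu) ** 2 / sigma ** 2) - 1.0
    # Score-function (REINFORCE) estimate of the ELBO gradient: E_q[score * (log p - log q)].
    signal = np.array([log_joint(zi) for zi in z]) - log_q
    q_mu += lr * np.mean(score_mu * signal)
    q_log_sigma += lr * np.mean(score_log_sigma * signal)

print(q_mu, np.exp(q_log_sigma))          # roughly the exact Gaussian posterior over the mean
```

The estimator needs only the ability to evaluate the log joint density and to sample from q, which is what makes it "black box", but it is typically high-variance; Ranganath et al. combine it with Rao-Blackwellisation and control variates, and exploiting the structure of the model is one natural way to reduce this variance further.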