Abstract

Probabilistic modeling provides a principled framework for learning from data, with the key advantage of offering rigorous tools for uncertainty quantification. In the era of big and complex data, there is an urgent need for new inference methods in probabilistic modeling that extract information effectively and efficiently. This talk will show how to perform scalable and reliable inference with theoretical guarantees for modern machine learning. In the first part, I will introduce a general and theoretically grounded framework that enables fast and asymptotically correct inference, with applications to Metropolis-Hastings and Gibbs sampling. In the second part, I will highlight the key challenges of probabilistic inference in deep learning and present a novel approach for fast posterior inference over neural network weights. This method has achieved state-of-the-art results and is regarded as a benchmark in deep probabilistic modeling.
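For background on the samplers mentioned above, the sketch below shows a standard random-walk Metropolis-Hastings sampler in Python. It is only the textbook algorithm for reference, not the scalable framework presented in the talk; the function name `metropolis_hastings` and its parameters are illustrative choices.

```python
import numpy as np

def metropolis_hastings(log_target, initial, n_samples, proposal_scale=1.0, rng=None):
    """Random-walk Metropolis-Hastings for an unnormalized log density."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(initial, dtype=float)
    log_p = log_target(x)
    samples = np.empty((n_samples,) + x.shape)
    accepted = 0
    for i in range(n_samples):
        # Symmetric Gaussian random-walk proposal
        proposal = x + proposal_scale * rng.standard_normal(x.shape)
        log_p_prop = log_target(proposal)
        # Accept with probability min(1, p(proposal) / p(x))
        if np.log(rng.uniform()) < log_p_prop - log_p:
            x, log_p = proposal, log_p_prop
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

# Toy example: sample from a standard 2-D Gaussian target
log_target = lambda x: -0.5 * np.sum(x ** 2)
samples, accept_rate = metropolis_hastings(log_target, initial=np.zeros(2),
                                           n_samples=5000, proposal_scale=0.8)
print("acceptance rate:", accept_rate)
print("posterior mean estimate:", samples[1000:].mean(axis=0))
```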
