The seminal paper of Jordan, Kinderlehrer, and Otto has profoundly reshaped our understanding of sampling algorithms. What is now commonly known as the JKO scheme interprets the evolution of marginal distributions of a Langevin diffusion as a gradient flow of a Kullback-Leibler (KL) divergence over the Wasserstein space of probability measures. This optimization perspective on Markov chain Monte Carlo (MCMC) has not only renewed our understanding of algorithms based on Langevin diffusions, but has also fueled the discovery of new MCMC algorithms inspired by the diverse and powerful optimization toolbox.
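Concretely, the JKO scheme can be sketched as a minimizing-movement discretization of the Wasserstein gradient flow. In standard (but here assumed) notation, with target distribution π ∝ e^{-V}, step size h > 0, and Wasserstein-2 distance W₂, one step reads:

```latex
% One step of the JKO (minimizing movement) scheme:
% given the current iterate \rho_k, the next iterate solves
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho}\;
  \mathrm{KL}(\rho \,\|\, \pi) \;+\; \frac{1}{2h}\, W_2^2(\rho, \rho_k).
```

As h → 0, the interpolation of the iterates converges to the solution of the Fokker–Planck equation, i.e., the marginal law of the Langevin diffusion with stationary distribution π.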
The goal of this workshop is to bring together researchers from various fields (theoretical computer science, optimization, probability, statistics, and calculus of variations) to interact around new ideas that exploit this powerful framework.
This event will be held both in person and virtually.
If you are interested in joining this workshop, please see the Participate page.
Registration is required to attend this workshop. Space may be limited, so you are advised to register early. Please register, then await confirmation of your acceptance before booking your travel.