Abstract

Langevin Monte Carlo (LMC) is an iterative algorithm used to generate samples from a distribution that is known only up to a normalizing constant. The nonasymptotic dependence of its mixing time on the dimension and target accuracy is understood mainly in the setting of smooth (gradient-Lipschitz) log-densities, a serious limitation for applications in machine learning. We remove this limitation by providing polynomial-time convergence guarantees for a variant of LMC in the setting of non-smooth log-concave distributions. At a high level, our results follow by leveraging the implicit smoothing of the log-density induced by a small Gaussian perturbation that we add to the iterates of the algorithm, while controlling the bias and variance that this perturbation introduces. Based on joint work with Niladri Chatterji, Michael I. Jordan, and Peter L. Bartlett.
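The variant described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy version, not the authors' exact algorithm or parameter schedule: each LMC step evaluates a subgradient of the potential at a Gaussian perturbation of the current iterate, which acts as a stochastic gradient of a smoothed potential. The function name `perturbed_lmc`, the step size, and the perturbation scale are all illustrative choices.

```python
import numpy as np

def perturbed_lmc(subgrad_U, x0, step=1e-3, smooth=1e-2, n_iters=5000, rng=None):
    """Toy Langevin Monte Carlo with Gaussian smoothing of a non-smooth potential U.

    Instead of the (possibly undefined) gradient of U, each step uses a
    subgradient evaluated at a Gaussian perturbation of the iterate; this is
    an unbiased estimate of the gradient of the Gaussian-smoothed potential.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        z = smooth * rng.standard_normal(x.shape)  # smoothing perturbation
        g = subgrad_U(x + z)                       # subgradient at perturbed point
        # Standard unadjusted Langevin update with the perturbed (sub)gradient.
        x = x - step * g + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example: sample from the 1-D Laplace density p(x) ∝ exp(-|x|), whose
# potential U(x) = |x| is log-concave but not smooth at the origin;
# np.sign is a subgradient of |x|.
samples = perturbed_lmc(np.sign, x0=np.zeros(1), n_iters=20000)
```

Here the Laplace target serves only as a simple non-smooth example; in higher dimensions `subgrad_U` would return a subgradient vector of the potential.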