Abstract

What happens if we introduce random coordinate descent (RCD) into Langevin Monte Carlo (LMC)? On one hand, per iteration one computes only one partial derivative instead of the full gradient, potentially saving cost; on the other, the variance of the random coordinate selection enters the error, requiring more iterations for good convergence. We discuss several combinations of RCD and LMC, and present a few sampling schemes that can outperform classical LMC in regimes where certain assumptions on the conditioning of the objective function hold. This is joint work with Zhiyan Ding, Jianfeng Lu and Steve Wright.
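To make the trade-off concrete, here is a minimal sketch (not the speakers' exact schemes) comparing a classical LMC step with a random-coordinate variant for sampling from a density proportional to exp(-f(x)). The quadratic target, the step size h, and the unbiased surrogate gradient d * (partial_i f) * e_i with i drawn uniformly are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10
A = np.diag(np.linspace(1.0, 5.0, d))  # positive-definite precision matrix


def grad_f(x):
    """Full gradient of f(x) = 0.5 x^T A x."""
    return A @ x


def partial_f(x, i):
    """Single partial derivative of f at coordinate i."""
    return A[i] @ x


def lmc_step(x, h):
    """Classical (unadjusted) LMC step: uses the full gradient."""
    return x - h * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal(d)


def rcd_lmc_step(x, h):
    """One RCD-LMC-style step: only one partial derivative is evaluated.
    The factor d makes the coordinate gradient an unbiased estimate of
    the full gradient (illustrative choice, not the talk's scheme)."""
    i = rng.integers(d)
    x_new = x + np.sqrt(2.0 * h) * rng.standard_normal(d)
    x_new[i] -= h * d * partial_f(x, i)
    return x_new


def run(step, n_iter=20000, h=1e-3):
    x = np.zeros(d)
    samples = np.empty((n_iter, d))
    for k in range(n_iter):
        x = step(x, h)
        samples[k] = x
    return samples


if __name__ == "__main__":
    for name, step in [("LMC", lmc_step), ("RCD-LMC", rcd_lmc_step)]:
        s = run(step)
        # Compare empirical coordinate variances with the target 1 / diag(A).
        print(name, np.round(s[len(s) // 2:].var(axis=0), 3))
```

The per-iteration cost of the coordinate step is roughly 1/d of the full-gradient step, while the extra randomness in the gradient estimate is what drives the additional iterations mentioned above.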
