Abstract

Given a desired target distribution and an initial guess of that distribution, composed of finitely many samples, what is the best way to evolve the locations of the samples so that they accurately represent the desired distribution? A classical solution to this problem is to allow the samples to evolve according to Langevin dynamics, a stochastic particle method for the Fokker-Planck equation. In today’s talk, I will contrast this classical approach with a deterministic particle method corresponding to the porous medium equation. This method corresponds exactly to the mean-field dynamics of training a two-layer neural network with a radial basis function activation. We prove that, as the number of samples increases and the variance of the radial basis function goes to zero, the particle method converges to a bounded entropy solution of the porous medium equation. As a consequence, we obtain both a novel method for sampling probability distributions and insight into the training dynamics of two-layer neural networks in the mean-field regime.
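To make the contrast concrete, here is a minimal numerical sketch in Python/NumPy. The Langevin update is the standard unadjusted overdamped scheme; the deterministic update moves each particle against the gradient of a Gaussian radial-basis-function estimate of the empirical density, illustrating the diffusive behavior of a blob-style particle discretization of the porous medium equation. The standard Gaussian target, step sizes, bandwidth eps, and particle count are illustrative assumptions, not taken from the talk, and the deterministic step shows only the diffusion part rather than the full sampling method discussed in the abstract.

```python
import numpy as np

# Illustrative sketch only: the target, step sizes, bandwidth, and particle
# counts below are hypothetical choices, not taken from the talk.

def grad_log_target(x):
    """Score of a standard Gaussian target: grad log pi(x) = -x."""
    return -x

def langevin_step(particles, step, rng):
    """One step of the unadjusted overdamped Langevin algorithm:
    X <- X + h * grad log pi(X) + sqrt(2h) * N(0, I)."""
    noise = rng.standard_normal(particles.shape)
    return particles + step * grad_log_target(particles) + np.sqrt(2.0 * step) * noise

def deterministic_rbf_step(particles, step, eps):
    """One explicit-Euler step of a deterministic particle update: each particle
    moves against the gradient of a Gaussian radial-basis-function estimate of
    the empirical density (diffusion part only; parameters are illustrative)."""
    n, d = particles.shape
    diff = particles[:, None, :] - particles[None, :, :]   # pairwise X_i - X_j
    sq = np.sum(diff**2, axis=-1)                           # squared distances
    kernel = np.exp(-sq / (2.0 * eps**2)) / ((2.0 * np.pi * eps**2) ** (d / 2))
    # gradient of the mollified density at X_i: (1/n) sum_j grad phi_eps(X_i - X_j)
    grad_density = -np.sum(kernel[:, :, None] * diff, axis=1) / (n * eps**2)
    return particles - step * grad_density                  # move down the density gradient

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))                 # initial guess: 200 samples in 2D
for _ in range(1000):
    X = langevin_step(X, step=1e-2, rng=rng)      # stochastic particle method

Y = rng.standard_normal((200, 2))
for _ in range(1000):
    Y = deterministic_rbf_step(Y, step=1e-2, eps=0.3)  # deterministic particle method
```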
