Description

Gradient descent algorithms and their noisy variants, such as Langevin dynamics or multi-pass stochastic gradient descent, are at the center of attention in machine learning. Yet their behavior remains perplexing, in particular in the high-dimensional nonconvex setting. In this talk, I will present several high-dimensional and (mostly) nonconvex statistical learning problems in which the performance of gradient-based algorithms can be analyzed down to a constant. The common point of these settings is that the data come from a probabilistic generative model, leading to problems for which, in the high-dimensional limit, statistical physics provides exact closed-form solutions for the performance of the gradient-based algorithms. The covered settings include the spiked mixed matrix-tensor model, the perceptron, and phase retrieval.
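To fix ideas, here is a minimal sketch of the kind of noisy gradient-based algorithm the abstract refers to: Langevin dynamics applied to a toy phase-retrieval loss with Gaussian data. All parameter choices (dimension, number of measurements, step size, temperature) are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 100, 800                        # dimension and number of measurements (illustrative)
x_star = rng.normal(size=d)
x_star /= np.linalg.norm(x_star)       # planted signal on the unit sphere
A = rng.normal(size=(n, d))            # Gaussian sensing vectors
y = (A @ x_star) ** 2                  # phaseless (squared) measurements

def grad(x):
    """Gradient of the quartic loss L(x) = (1/4n) * sum_i ((a_i . x)^2 - y_i)^2."""
    z = A @ x
    return A.T @ ((z ** 2 - y) * z) / n

x = rng.normal(size=d) / np.sqrt(d)    # random initialization
lr, temp = 0.01, 1e-4                  # step size and Langevin temperature (assumed)

for t in range(5000):
    # Langevin update: gradient step plus Gaussian noise with variance 2 * lr * temp
    x = x - lr * grad(x) + np.sqrt(2 * lr * temp) * rng.normal(size=d)

# Overlap |<x, x*>| / ||x|| measures recovery of the planted signal (up to sign)
overlap = abs(x @ x_star) / np.linalg.norm(x)
print(f"overlap with planted signal: {overlap:.3f}")
```

The quantity tracked at the end, the overlap with the planted signal, is the type of order parameter whose high-dimensional evolution the statistical-physics analysis mentioned in the abstract characterizes exactly.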

If you require accommodation for communication, please contact our Access Coordinator at simonsevents [at] berkeley.edu with as much advance notice as possible.
