Abstract
While there has been incredible progress in convex and nonconvex minimization, many problems in machine learning require efficient algorithms for min-max optimization. However, unlike minimization, where algorithms can be shown to converge to some local minimum, min-max optimization has no notion of local equilibrium that is guaranteed to exist for general nonconvex-nonconcave functions. We will present new notions of local equilibria that are guaranteed to exist, efficient algorithms to compute them, and implications for GANs.
Based on the following joint work with Oren Mangoubi and Sushant Sachdeva:
https://arxiv.org/abs/2006.12363 and https://arxiv.org/abs/2006.12376
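As a minimal illustration (not taken from the papers above) of why min-max optimization needs its own equilibrium notions and algorithms, the sketch below runs simultaneous gradient descent-ascent on the toy objective f(x, y) = x * y. Even though (0, 0) is a global saddle point of this objective, the iterates spiral outward and diverge; the step size and starting point are arbitrary choices for the demonstration.

```python
import math

# Toy objective f(x, y) = x * y with gradient (df/dx, df/dy) = (y, x).
def grad_f(x, y):
    return y, x

x, y = 0.5, 0.5          # start near the saddle point at the origin
eta = 0.1                # step size (illustrative choice)

for t in range(101):
    gx, gy = grad_f(x, y)
    # Simultaneous update: descend in x, ascend in y.
    x, y = x - eta * gx, y + eta * gy
    if t % 20 == 0:
        print(f"t={t:3d}  distance from saddle = {math.hypot(x, y):.4f}")

# The printed distance grows by a factor of sqrt(1 + eta^2) per step,
# so the iterates diverge from the saddle instead of converging to it.
```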