Many recent machine learning approaches have moved from an optimization perspective to an "equilibration" perspective, in which a good model is framed as the equilibrium of a game rather than the minimizer of an objective function. Examples include generative adversarial networks, adversarial robustness, and fairness in machine learning. While recent years have seen great progress in non-convex optimization, with celebrated methods such as Stochastic Gradient Descent, Adagrad, and Adam driving much of the progress in deep learning, our ability to solve min-max optimization problems and to find equilibria in smooth games remains rather poor, especially when payoffs are non-convex-concave or noisy. The workshop will bring together practitioners from these fields, as well as researchers working on the foundations of stochastic optimization and algorithmic game theory, to discuss the practical challenges.
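The difficulty is easy to see even in the simplest smooth game. The following sketch (illustrative only, not material from the workshop) runs simultaneous gradient descent-ascent on the bilinear objective f(x, y) = x·y, whose unique equilibrium is (0, 0); the iterates spiral outward rather than converging, whereas plain gradient descent on a minimization problem with the same smoothness would converge.

```python
# Simultaneous gradient descent-ascent on f(x, y) = x * y.
# The minimizing player updates x, the maximizing player updates y.
# Each step multiplies the distance to the equilibrium (0, 0) by
# sqrt(1 + eta^2), so the dynamics diverge for any step size eta > 0.

def descent_ascent(x, y, eta=0.1, steps=100):
    for _ in range(steps):
        gx, gy = y, x                       # grad_x f = y, grad_y f = x
        x, y = x - eta * gx, y + eta * gy   # descent in x, ascent in y
    return x, y

x0, y0 = 1.0, 1.0
x, y = descent_ascent(x0, y0)
r0 = (x0**2 + y0**2) ** 0.5
r = (x**2 + y**2) ** 0.5
print(r0, r)  # r > r0: the iterates move away from the equilibrium
```

Remedies such as averaging, extragradient, or optimistic updates fix this particular example, but no comparably robust toolbox exists for the non-convex-concave or noisy settings the workshop targets.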
Further details about this workshop will be posted in due course. Enquiries may be sent to the organizers at workshop-games1 [at] lists.simons.berkeley.edu.
Registration is required to attend this workshop. Space may be limited, so you are advised to register early. The link to the registration form will appear on this page approximately 10 weeks before the workshop. To submit your name for consideration, please register and await confirmation of your acceptance before booking your travel.