Events
Spring 2022

Learning & Games Reading Group: Equilibrium Computation and Machine Learning

Tuesday, Apr. 5, 2022, 11:00 am – 12:30 pm PDT


Speaker: 

Yujia Jin (Stanford) and Lu Jiang (Google)

Location: 

Calvin Lab Room 116

Title: A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions

Abstract: In this talk, I will discuss a few recent advances in stochastic first-order methods for minimizing $\max_{i\in[N]}f_i(x)$ for convex, Lipschitz $f_1,\cdots,f_N$. This is a natural convex game-type objective prevalent in optimization and robust learning. We provide an algorithm with $\widetilde{O}(N\epsilon^{-2/3}+\epsilon^{-2})$ queries to first-order oracles for finding an $\epsilon$-approximate solution. This improves upon the previous best bound of $O(N\epsilon^{-2})$ queries due to subgradient descent, and matches a recent lower bound up to poly-logarithmic factors. The method builds upon the accelerated proximal-point framework with ball constraints and the multi-level Monte Carlo technique for efficient stochastic convex optimization, which may be of independent interest.
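For intuition, the $O(N\epsilon^{-2})$ baseline mentioned above is plain subgradient descent on the max function: each iteration evaluates all $N$ functions to find an active one ($N$ queries) and steps along its subgradient, which is a valid subgradient of the max. A minimal sketch, using an illustrative one-dimensional example with $f_i(x) = |x - a_i|$ (all names and data here are made up, not from the papers):

```python
import numpy as np

# Toy instance of min_x max_i f_i(x) with f_i(x) = |x - a_i|,
# which is convex and 1-Lipschitz for each i.
rng = np.random.default_rng(0)
a = rng.uniform(-1.0, 1.0, size=10)   # N = 10 anchor points

def max_loss(x):
    return np.max(np.abs(x - a))

def subgradient(x):
    # A subgradient of any function attaining the max is a
    # subgradient of the max itself; finding it costs N queries.
    i = np.argmax(np.abs(x - a))
    return np.sign(x - a[i])

x, best = 0.9, np.inf
for t in range(1, 2001):
    best = min(best, max_loss(x))          # track the best iterate
    x -= (0.5 / np.sqrt(t)) * subgradient(x)  # standard 1/sqrt(t) step size

# For this f, the exact minimizer is the midpoint of the range of a.
x_star = (a.min() + a.max()) / 2
gap = best - max_loss(x_star)
```

Reaching a gap of $\epsilon$ takes on the order of $\epsilon^{-2}$ iterations at $N$ queries each, hence the $O(N\epsilon^{-2})$ total that the talk's method improves upon.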

This talk will cover a subset of the results in our recent papers https://arxiv.org/abs/2105.01778 and https://arxiv.org/abs/2106.09481, based on joint work with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford.

Bio: Yujia Jin is a fourth-year PhD student in the Department of Management Science and Engineering at Stanford, in the Operations Research group, under the supervision of Aaron Sidford. She is broadly interested in optimization problems, sometimes at the intersection with machine learning theory and graph applications. Prior to coming to Stanford, she received her Bachelor's degree in Applied Math at Fudan University, where she was fortunate to work with Prof. Zhongzhi Zhang. From 2016 to 2018, she also worked at the Research Institute for Interdisciplinary Sciences (RIIS) at SHUFE, where she was fortunate to be advised by Prof. Dongdong Ge.

===========================================

Title: Robust image synthesis models with GANs and transformers

Abstract: Deep image synthesis as a field has seen much progress in recent years, including Generative Adversarial Networks (GANs) and vision transformers (ViTs). This talk will discuss our recent work on learning robust image synthesis models. Specifically, we will talk about

  1. learning GAN models under limited training data (LeCAM) [1],
  2. incorporating ViTs into GAN modeling and improving training stability (ViTGAN) [2], and
  3. learning a generative transformer model using mask modeling (MaskGIT) [3] and non-autoregressive decoding.
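The non-autoregressive decoding in item 3 can be sketched as an iterative mask-and-refill loop: start from an all-masked sequence, predict every token in parallel, keep the most confident predictions, and re-mask the rest under a decreasing mask schedule, so the whole sequence is produced in a handful of parallel steps rather than one token at a time. A minimal sketch of that loop, with a toy stand-in for the trained transformer (the model, data, and schedule here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

MASK = -1
rng = np.random.default_rng(0)
target = rng.integers(0, 8, size=16)   # toy "correct" token ids

def toy_model(tokens):
    """Stand-in for a trained masked transformer: returns per-position
    token predictions and confidence scores. It trivially 'knows' the
    target, purely to illustrate the decoding loop."""
    preds = target.copy()
    conf = rng.uniform(0.0, 1.0, size=tokens.shape)
    return preds, conf

def iterative_decode(length=16, steps=4):
    tokens = np.full(length, MASK)
    for s in range(steps):
        preds, conf = toy_model(tokens)
        conf = np.where(tokens == MASK, conf, np.inf)  # never re-mask fixed tokens
        # Cosine-style schedule: how many positions stay masked after this step.
        keep_masked = int(np.floor(length * np.cos(np.pi / 2 * (s + 1) / steps)))
        # Fill masked positions with predictions, then re-mask the
        # keep_masked least-confident ones for the next refinement step.
        tokens = np.where(tokens == MASK, preds, tokens)
        tokens[np.argsort(conf)[:keep_masked]] = MASK
    return tokens

out = iterative_decode()
```

At the final step the schedule reaches zero, so every position is committed; here the 16 tokens are decoded in only 4 parallel model calls.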

In addition, we will discuss how image synthesis models can be used to improve robust classification on ImageNet [4].

[1] Regularizing Generative Adversarial Networks under Limited Data, CVPR 2021

[2] ViTGAN: Training GANs with Vision Transformers, ICLR 2022

[3] MaskGIT: Masked Generative Image Transformer, CVPR 2022

[4] Discrete Representations Strengthen Vision Transformer Robustness, ICLR 2022

Bio: Lu Jiang is a staff research scientist at Google Research and an adjunct faculty member at the Carnegie Mellon University Language Technologies Institute. Lu's primary interests lie in the interdisciplinary field of multimedia, machine learning, and computer vision, specifically including robust deep learning, content creation, and video understanding. He was a core contributor to the IARPA Aladdin project and helped create the YouTube-8M dataset and AutoML at Google. He has served as an area chair for ACM Multimedia and AVSS, a senior program committee member for AAAI, and an AI panelist for NSF SBIR/STTR.