Abstract

Current machine learning (ML) systems are remarkably brittle, raising serious concerns about their deployment in safety-critical applications like self-driving cars and predictive healthcare. In such applications, models may encounter test distributions that differ wildly from their training distributions. Trustworthy ML thus requires strong robustness guarantees from learning, including robustness to worst-case distribution shifts. Such robustness raises several computational and statistical challenges beyond ‘standard’ machine learning. In this talk, I will present two formal settings of worst-case distribution shifts, motivated by adversarial attacks on test inputs and by the presence of spurious correlations such as image backgrounds. Empirical observations demonstrate (i) an arms race between attacks and existing heuristic defenses, necessitating provable guarantees much as in cryptography; (ii) the increased sample complexity of robust learning; and (iii) a resurgence of the need for regularization in robust learning. We capture each of these observations in simple theoretical models that nevertheless yield principled and scalable approaches to overcoming these hurdles in robust learning, particularly via the use of unlabeled data.
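For reference, the first setting (adversarial attacks on test inputs) is commonly formalized as minimizing a worst-case risk over small per-input perturbations; the abstract does not spell this out, so the following is the standard formulation from this literature rather than the speaker's exact notation:

\[
\mathcal{R}_{\mathrm{rob}}(f) \;=\; \mathbb{E}_{(x,y)\sim \mathcal{D}}\Big[\, \max_{\|\delta\|_p \le \epsilon} \ell\big(f(x+\delta),\, y\big) \Big],
\qquad
\mathcal{R}(f) \;=\; \mathbb{E}_{(x,y)\sim \mathcal{D}}\big[\, \ell\big(f(x),\, y\big) \big],
\]

where the inner maximization over an \(\epsilon\)-ball distinguishes robust risk \(\mathcal{R}_{\mathrm{rob}}\) from standard risk \(\mathcal{R}\), and is the source of the additional computational and statistical difficulty discussed in the talk.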

Video Recording