Abstract

The widespread application of machine learning in practice necessitates the design of robust learning algorithms that can withstand unpredictable and even adversarial environments. The holy grail of robust algorithm design is to provide performance guarantees that do not diminish in the presence of all-powerful adversaries who know the algorithm and adapt to its decisions. However, for most fundamental problems in machine learning and optimization, such guarantees are impossible. This indicates that new models are required to provide rigorous guidance for the design of robust algorithms. This talk goes beyond worst-case analysis and presents such models and perspectives for fundamental problems in machine learning and optimization. I will present general-purpose techniques that lead to strong robustness guarantees in practical domains and across a wide range of applications, such as online learning, differential privacy, discrepancy theory, and learning-augmented algorithm design.