Abstract

Robust optimization is a paradigm for guaranteeing that solutions to optimization problems will not fail under small perturbations to the problem data. As such, it offers natural connections to generalization and statistical theory for estimation problems, and the field of distributionally robust optimization has begun to make some of these connections rigorous by building optimization procedures that directly model perturbations to the distribution underlying the data. In this talk, I will give an overview of some of the history behind robust optimization, bringing us up to modern machine learning via connections with different types of robustness. These approaches have the potential to certify levels of performance within machine-learned systems, though many current approaches remain somewhat conservative for practical use. I will also highlight a few open directions for research.
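As a concrete point of reference (a standard formulation, not drawn from the talk itself), distributionally robust optimization replaces the empirical risk with a worst case over distributions near the empirical distribution of the data; here \(\hat{P}_n\) is the empirical distribution, \(D\) is a divergence or distance between distributions, \(\rho\) is the radius of the perturbation set, and \(\ell(\theta; X)\) is the loss of decision \(\theta\) on data point \(X\):

\[
\min_{\theta} \; \sup_{Q \,:\, D(Q, \hat{P}_n) \le \rho} \; \mathbb{E}_{X \sim Q}\big[\ell(\theta; X)\big].
\]

The choice of \(D\) (for example, an f-divergence or a Wasserstein distance) and the radius \(\rho\) govern how conservative the resulting solution is.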
