Abstract

We study distributionally robust formulations of stochastic optimization and empirical risk minimization problems. By considering a worst-case notion of risk over non-parametric uncertainty sets around the empirical distribution, including Wasserstein balls and f-divergence balls, we provide certificates of generalization and robustness for optimization procedures. This yields principled approaches for constructing confidence regions for stochastic optimization problems, and lets us regularize optimization problems by their variance, enabling a computationally efficient trade-off between estimation and approximation error. We also develop efficient solution methods and show applications to convex and non-convex optimization.
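As a rough illustration of the variance-regularization idea, the worst-case risk over a chi-square divergence ball around the empirical distribution is approximately the empirical mean loss plus a variance penalty that shrinks at rate 1/sqrt(n). The sketch below is illustrative only; the function name, the radius parameter `rho`, and the specific approximation used are assumptions, not the speakers' exact formulation.

```python
import numpy as np

def variance_regularized_risk(losses, rho):
    """Empirical risk plus a variance penalty.

    Approximates the worst-case expected loss over a
    chi-square divergence ball (radius rho/n) around the
    empirical distribution, via the expansion
        mean + sqrt(2 * rho * Var / n).
    All names and the radius convention here are illustrative.
    """
    n = len(losses)
    mean = losses.mean()
    var = losses.var(ddof=1)  # unbiased sample variance
    return mean + np.sqrt(2.0 * rho * var / n)

# Toy per-example losses: the robust objective upweights
# spread-out losses relative to the plain empirical mean.
losses = np.array([0.2, 0.5, 0.1, 0.9, 0.4])
print(variance_regularized_risk(losses, rho=1.0))
```

The penalty term makes the bias-variance (approximation-estimation) trade-off explicit: minimizing this objective prefers solutions with slightly higher mean loss but lower loss variance.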

Video Recording