Abstract

Min-max optimization is emerging as a key framework for analyzing problems of robustness to strategically and adversarially generated data. In this talk, the focus will be on a class of convex-concave min-max problems possessing a finite-sum structure, and we will discuss a zeroth-order variant of Optimistic Gradient Descent-Ascent (OGDA) that exploits random reshuffling to obtain fast convergence. The algorithm enjoys the same convergence rate as zeroth-order algorithms achieve for convex minimization problems. This particular game structure arises in applications wherein a decision-maker seeks to minimize its empirical risk given that the data distribution over which it optimizes depends on its action, whether due to adversarial or otherwise strategic behavior. Much of the decision-dependent optimization literature assumes that a model for the adversarial/strategic behavior is known; in practice, this assumption is unrealistic. Without precise knowledge of the decision-dependence, gradient information is not readily available. Moreover, to build in robustness to model misspecification, we formulate the learning problem as a distributionally robust decision-dependent risk minimization problem. Leveraging a clever transformation, we show that it is possible to obtain an equivalent finite-dimensional convex-concave min-max problem. We will also demonstrate through illustrative examples, including strategic classification and prediction, how OGDA with random reshuffling outperforms existing methods for such problems. The talk will conclude with open questions and a discussion of interesting directions for future work in this area.
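
To make the algorithmic idea concrete, below is a minimal sketch of zeroth-order OGDA with random reshuffling in Python. It is illustrative rather than the speaker's exact method: the two-point sphere-smoothing gradient estimator, the function names (`zo_grad`, `zo_ogda_rr`), the step size, and the toy bilinear finite-sum game are all assumptions made for exposition.

```python
import numpy as np

def zo_grad(f, x, delta=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate of f at x (sphere smoothing)."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)  # random direction on the unit sphere
    d = x.size
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

def zo_ogda_rr(fs, x, y, eta=1e-2, epochs=50, seed=0):
    """Zeroth-order OGDA with random reshuffling over finite-sum components.

    fs: list of component payoffs f_i(x, y); objective is (1/n) * sum_i f_i.
    Minimizes over x, maximizes over y, using only function evaluations.
    """
    rng = np.random.default_rng(seed)
    gx_prev, gy_prev = np.zeros_like(x), np.zeros_like(y)
    for _ in range(epochs):
        for i in rng.permutation(len(fs)):  # fresh permutation each epoch
            f = fs[i]
            gx = zo_grad(lambda xv: f(xv, y), x, rng=rng)
            gy = zo_grad(lambda yv: f(x, yv), y, rng=rng)
            # optimistic step: extrapolated estimate 2*current - previous
            x = x - eta * (2 * gx - gx_prev)
            y = y + eta * (2 * gy - gy_prev)
            gx_prev, gy_prev = gx, gy
    return x, y

# Toy bilinear game: min_x max_y (1/n) * sum_i x^T A_i y, saddle point at (0, 0).
n, d = 8, 5
rng = np.random.default_rng(1)
As = [rng.standard_normal((d, d)) for _ in range(n)]
fs = [lambda x, y, A=A: x @ A @ y for A in As]
x_star, y_star = zo_ogda_rr(fs, rng.standard_normal(d), rng.standard_normal(d))
print(np.linalg.norm(x_star), np.linalg.norm(y_star))  # both should be near 0
```

The optimistic step uses the extrapolated estimate 2g_t - g_{t-1} in place of g_t, which is what lets OGDA converge on bilinear games where plain gradient descent-ascent cycles or diverges; random reshuffling replaces with-replacement sampling of the finite-sum components with a fresh permutation each epoch.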
