Abstract

We will consider the following question: can we optimize decisions based on models learned from data and still achieve desirable outcomes? We will discuss this question in a framework we call optimization from samples: given samples of a function's values (the model), the goal is to (approximately) optimize the function (i.e., make a good decision). On the negative side, we show that there are classes of functions, heavily used in applications, that are both (approximately) optimizable and statistically learnable, yet cannot be optimized within any reasonable approximation guarantee from samples. On the positive side, we show natural conditions under which optimization from samples is achievable. This is joint work with Eric Balkanski and Aviad Rubinstein.
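To make the setting concrete, here is a minimal toy sketch (not the authors' construction) of optimization from samples: a coverage function is observed only through sampled (set, value) pairs, and the decision maker must pick a good set using those samples alone, without further queries. The ground set, covering sets, sample distribution, and the best-observed-sample strategy are all illustrative assumptions.

```python
import itertools
import random

# Toy universe: each element of the ground set covers some points.
# These covering sets are made up for illustration.
UNIVERSE = set(range(10))
COVERS = {
    0: {0, 1, 2}, 1: {2, 3}, 2: {4, 5, 6},
    3: {6, 7}, 4: {8, 9}, 5: {1, 9},
}

def coverage(S):
    """Value of a set S: how many universe points it covers."""
    covered = set()
    for e in S:
        covered |= COVERS[e]
    return len(covered)

def optimize_from_samples(samples, k):
    """Return the best feasible set (|S| <= k) seen among the samples.

    The simplest sample-based strategy: never query the function,
    only reuse the observed (set, value) pairs.
    """
    feasible = [(S, v) for S, v in samples if len(S) <= k]
    return max(feasible, key=lambda sv: sv[1])[0] if feasible else frozenset()

random.seed(0)
k = 2
# Samples: uniformly random size-k sets together with their true values.
samples = []
for _ in range(20):
    S = frozenset(random.sample(sorted(COVERS), k))
    samples.append((S, coverage(S)))

best_sampled = optimize_from_samples(samples, k)
true_opt = max(coverage(set(S)) for S in itertools.combinations(COVERS, k))
print(coverage(best_sampled), true_opt)
```

The gap between the best sampled set and the true optimum is exactly what the negative results quantify: for some function classes no sample-based strategy, however clever, can close it to within a reasonable factor.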

Video Recording