Abstract
Robustness to changes in data is one of the main challenges faced by sequential machine learning and decision-making algorithms. Yet most of the efficient, highly optimized algorithms deployed today were designed to work well on fixed data sets, and they ultimately fail when data evolves in unpredictable or adversarial ways. Rather than designing robust algorithms from scratch, in this talk I will show how we can tap directly into existing non-robust optimization algorithms to design sequential learning algorithms that are robust to adversaries.
More formally, we will consider the design of online no-regret algorithms that are computationally efficient, assuming access to an offline Empirical Risk Minimization (ERM) oracle. We will see general approaches and principles for designing such algorithms. I will introduce a general-purpose oracle-efficient algorithm that is no-regret whenever the adaptive adversary is not fully worst-case (in the style of Smoothed Analysis). I will then contrast these results with the fully worst-case setting, where oracle-efficient algorithms do not exist in full generality but do exist for a large class of economic settings, such as online auction design.
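For concreteness, here is a standard formalization of this setting (the notation is added here for illustration and is not part of the talk itself): over $T$ rounds, the learner picks a hypothesis $x_t$ from a class $\mathcal{X}$, an adversary picks a loss function $\ell_t$, and the learner's regret is

    \mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \ell_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} \ell_t(x).

The algorithm is no-regret if $\mathrm{Regret}_T = o(T)$, and it is oracle-efficient if each round runs in polynomial time given access to an offline ERM oracle that, for any finite sequence of losses, returns $\arg\min_{x \in \mathcal{X}} \sum_{t} \ell_t(x)$.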