Description

Title: Adapting to Failure of the IID Assumption
Abstract: A fundamental assumption in statistical learning is that data are independent and identically distributed (IID), yet this assumption is (a) often unverifiable and (b) intuitively false in many real-world settings. As we have seen this semester, there is an extensive body of work studying the adversarial setting, where no assumption is made that tomorrow's learning task will resemble today's. However, a completely adversarial world is often an overly pessimistic setting to study, and so a natural question arises: can we design methods that perform as well as possible whether the world is closer to IID or to adversarial, without requiring this knowledge in advance? In this talk, I will discuss my work addressing this question in the setting of sequential prediction with expert advice. We define a continuous spectrum of relaxations of the IID assumption for prediction problems with sequential data, with IID data at one extreme and adversarial mechanisms at the other, and we develop prediction methods that adapt to the level of failure of the IID assumption. We quantify the difficulty of prediction at every point along this spectrum, demonstrate that prevailing methods do not adapt to it, and present new methods that are adaptively minimax optimal. More broadly, this work shows that it is possible to develop methods that are both adaptive and robust: they realize the benefits of the IID assumption when it holds, without ever compromising performance when the IID assumption fails, and without having to know in advance the degree to which it fails.

This talk is based on the following two research papers:

  1. https://arxiv.org/abs/2007.06552
  2. https://proceedings.neurips.cc/paper/2021/hash/dcd2f3f312b6705fb06f4f9f1b55b55c-Abstract.html
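For readers unfamiliar with the setting, the following is an illustrative sketch of the classical Hedge (exponential-weights) forecaster for prediction with expert advice, which attains the adversarial minimax regret rate of order sqrt(T log N). This is background material only, not the adaptive methods from the papers above; the learning-rate choice and loss range in [0, 1] are standard textbook assumptions.

```python
import numpy as np

def hedge(losses, eta):
    """Exponential-weights (Hedge) forecaster for prediction with expert advice.

    losses: (T, N) array; losses[t, i] is the loss of expert i at round t, in [0, 1].
    eta: learning rate; eta = sqrt(8 * ln(N) / T) yields regret at most
         sqrt((T / 2) * ln(N)) against any (even adversarial) loss sequence.
    Returns the learner's cumulative expected loss and its regret to the best expert.
    """
    T, N = losses.shape
    log_w = np.zeros(N)                # log-weights, for numerical stability
    learner_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                   # current distribution over experts
        learner_loss += p @ losses[t]  # expected loss of the randomized forecaster
        log_w -= eta * losses[t]       # multiplicative-weights update
    regret = learner_loss - losses.sum(axis=0).min()
    return learner_loss, regret
```

The talk's point of departure is that a fixed worst-case learning rate like this one is tuned for the adversarial extreme: it is safe everywhere on the IID-to-adversarial spectrum but does not exploit benign (closer-to-IID) data, which is the gap the adaptively minimax-optimal methods address.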