Abstract

A key challenge in modern statistics is to ensure statistically valid inferences across diverse target populations from a fixed source of training data. Statistical techniques that guarantee this type of adaptability not only make statistical conclusions more robust to sampling bias, but can also extend the benefits of evidence-based decision-making to communities that lack the resources to collect high-quality data or perform computationally intensive estimation on their own. In this talk, we describe a surprising technical connection between this statistical inference problem and multicalibration, a technique developed in the context of algorithmic fairness. Exploiting this connection, we derive a single-source estimator whose inferences are *universally adaptable* to any downstream target population. The estimator's performance is comparable to that of propensity score reweighting, a widely used technique that explicitly models the underlying source-target shift, *for every target*.
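
To make the comparison concrete, below is a minimal, self-contained Python sketch (not from the talk): it contrasts a propensity-score-reweighted estimate of a target mean with the estimate obtained by averaging a source-trained predictor over target covariates, which is the role a multicalibrated predictor plays in the universally adaptable estimator. The data-generating setup, the closed-form density ratio, and the simple linear predictor standing in for the multicalibrated predictor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the abstract): source and target populations
# differ in the distribution of one covariate x; the outcome y depends on x.
n_source, n_target = 10_000, 10_000
x_source = rng.normal(0.0, 1.0, n_source)
x_target = rng.normal(0.5, 1.0, n_target)            # shifted target population
y_source = 2.0 * x_source + rng.normal(0.0, 1.0, n_source)

# Goal: estimate the target mean outcome E_T[y] using only source outcomes.

# (1) Propensity score reweighting: explicitly model the source-target shift
#     via the density ratio w(x) = p_T(x) / p_S(x), known in closed form here.
def density_ratio(x):
    return np.exp(-0.5 * ((x - 0.5) ** 2 - x ** 2))   # N(0.5, 1) / N(0, 1)

w = density_ratio(x_source)
psw_estimate = np.sum(w * y_source) / np.sum(w)

# (2) Universally adaptable estimate (illustrative stand-in): fit a predictor f
#     on the source and average it over target covariates. The talk's result is
#     that when f is *multicalibrated* over a class covering such density
#     ratios, this single estimate competes with (1) for every such target.
coef = np.polyfit(x_source, y_source, deg=1)          # simple surrogate for f
f = np.poly1d(coef)
adaptable_estimate = f(x_target).mean()

print(f"true target mean (approx): {2.0 * 0.5:.3f}")
print(f"propensity reweighting   : {psw_estimate:.3f}")
print(f"predictor avg. on target : {adaptable_estimate:.3f}")
```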

We will discuss universal adaptability for prediction tasks and its extension to treatment effect estimation. Finally, we will speculate on possible connections between multicalibration and causality.

Mostly based on joint work with Michael Kim, Christoph Kern, Shafi Goldwasser and Frauke Kreuter.
