Abstract

A key challenge in modern statistics is to ensure statistically valid inferences across diverse target populations from a fixed source of training data. Statistical techniques that guarantee this type of adaptability not only make statistical conclusions more robust to sampling bias, but can also extend the benefits of evidence-based decision-making to communities that lack the resources to collect high-quality data or run computationally intensive estimation procedures on their own. In this talk, we describe a surprising technical connection between this statistical inference problem and multi-calibration, a technique developed in the context of algorithmic fairness.

Concretely, our approach derives a correspondence between the fairness goal *to protect subpopulations from miscalibrated predictions* and the statistical goal *to ensure unbiased estimates on downstream target populations*. Exploiting this connection, we derive a single-source estimator whose inferences are *universally adaptable* to any downstream target population. For every target, the estimator's performance is comparable to that of propensity score reweighting, a widely used technique that explicitly models the underlying source-target shift.
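To make the contrast concrete, here is a minimal, hypothetical Python sketch of the two estimation routes on synthetic data under covariate shift. The data-generating process and all names are illustrative assumptions, not from the talk; an ordinary least-squares fit stands in for the multicalibrated predictor, which the actual construction would obtain via a multi-calibration algorithm.

```python
# Illustrative sketch only: synthetic data, assumed setup.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
beta = np.array([1.0, -0.5])  # true outcome coefficients (assumed)

# Fixed source data; a shifted downstream target population.
X_source = rng.normal(0.0, 1.0, size=(5000, 2))
y_source = X_source @ beta + rng.normal(0.0, 0.5, 5000)
X_target = rng.normal(0.5, 1.0, size=(5000, 2))

# Route 1: propensity score reweighting, refit for each target.
# Model P(sample is from target | covariates) and weight source
# outcomes by the implied density ratio p / (1 - p).
X_all = np.vstack([X_source, X_target])
s_all = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
p = LogisticRegression().fit(X_all, s_all).predict_proba(X_source)[:, 1]
ipw_estimate = np.average(y_source, weights=p / (1.0 - p))

# Route 2: fit a single predictor on the source once; for any target,
# average its predictions (no target-specific shift model needed).
f = LinearRegression().fit(X_source, y_source)
plugin_estimate = f.predict(X_target).mean()

print(f"propensity reweighting: {ipw_estimate:.3f}")
print(f"single-source plug-in:  {plugin_estimate:.3f}")
print(f"true target mean:       {(X_target @ beta).mean():.3f}")
```

Route 1 must be rerun whenever the target changes; the point of universal adaptability is that a single, suitably calibrated predictor makes Route 2 competitive with Route 1 for every target at once.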

Joint work with Shafi Goldwasser, Christoph Kern, Frauke Kreuter, and Omer Reingold.