Abstract

Algorithms make predictions about people constantly.  The spread of such prediction systems has raised concerns that algorithms may discriminate unfairly, especially against individuals from marginalized groups.  We give an overview of a notion of algorithmic fairness called Multicalibration (Hébert-Johnson, K., Reingold, Rothblum '18), which formalizes the goals of fair prediction through the lens of complexity theory.  Multicalibration requires that algorithmic predictions be well-calibrated, not simply overall, but simultaneously over a rich collection of subpopulations.  This ``multi-group'' approach strengthens the guarantees of group fairness definitions without incurring the statistical and computational costs associated with individual-level protections.
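
For concreteness, here is one common formulation, stated informally (the exact quantifiers and error parameterization vary across the papers): a predictor $\tilde{p} : \mathcal{X} \to [0,1]$ is $\alpha$-multicalibrated with respect to a collection of subpopulations $\mathcal{C}$ if, for every $S \in \mathcal{C}$ and every (suitably discretized) prediction value $v$,
$$\Big|\, \mathbb{E}\big[\, y - \tilde{p}(x) \;\big|\; \tilde{p}(x) = v,\; x \in S \,\big] \Big| \;\le\; \alpha,$$
that is, among the members of each subpopulation who receive prediction $v$, the average true outcome is within $\alpha$ of $v$.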

In this tutorial, we begin with multicalibration, discussing its fairness properties and how to learn a multicalibrated predictor.  Then, we turn our attention to two more recent, related investigations.  Specifically, we highlight Omniprediction (Gopalan, Kalai, Reingold, Sharan, Wieder '22), which establishes the (implicit) loss-minimization guarantees of multicalibration, and Outcome Indistinguishability (Dwork, K., Reingold, Rothblum, Yona '21), which characterizes multicalibration through the lens of computational indistinguishability.  We show tight connections among these notions, demonstrating multicalibration's versatility as a modern learning paradigm.
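
Informally, and again modulo parameterization: a predictor $\tilde{p}$ is an $(\mathcal{L}, \mathcal{C})$-omnipredictor if, for every loss $\ell \in \mathcal{L}$, a simple post-processing $k_\ell$ of its predictions competes with every hypothesis in the class $\mathcal{C}$,
$$\mathbb{E}\big[\, \ell\big(y,\, k_\ell(\tilde{p}(x))\big) \,\big] \;\le\; \min_{c \in \mathcal{C}} \mathbb{E}\big[\, \ell\big(y,\, c(x)\big) \,\big] + \varepsilon;$$
and $\tilde{p}$ satisfies outcome indistinguishability with respect to a class of distinguishers $\mathcal{A}$ if no $A \in \mathcal{A}$ can distinguish modeled outcomes $\tilde{y} \sim \mathrm{Ber}(\tilde{p}(x))$ from real outcomes $y$,
$$\Big|\, \Pr\big[ A(x, \tilde{y}, \tilde{p}(x)) = 1 \big] - \Pr\big[ A(x, y, \tilde{p}(x)) = 1 \big] \,\Big| \;\le\; \varepsilon.$$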
