Abstract

From education to lending, consequential decisions in society increasingly rely on data-driven algorithms. Yet the long-term impact of algorithmic decision making remains poorly understood, and ensuring equitable benefits poses serious challenges in both theory and practice. While algorithmic fairness has received much attention, fairness criteria have significant limitations as tools for promoting equitable benefits. In this talk, we review various fairness desiderata in machine learning and when they may be in conflict. We then introduce the notion of delayed impact: the welfare impact of decision-making algorithms on populations after decision outcomes are observed, motivated, for example, by the change in average credit scores after a new loan-approval algorithm is applied. We demonstrate that several statistical criteria for fair machine learning, if applied as constraints on decision making, can harm the welfare of a disadvantaged population. We close by considering future directions for fairness in machine learning that take a holistic and interdisciplinary approach.
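The phenomenon the abstract describes can be made concrete with a toy one-step lending simulation. This is a hedged sketch, not the speaker's model: the score bins, repayment probabilities, score changes, and lender payoffs below are all invented numbers chosen to make the effect visible. A lender approves the top-scoring fraction of each group; under a demographic-parity constraint both groups must share one approval rate, which pushes approvals in the disadvantaged group down into score bins where defaults are likely and expected score change is negative.

```python
# Toy illustration of delayed impact (made-up parameters, not the talk's model).
SCORES = [3, 2, 1]                      # credit-score bins, best first
P_REPAY = {1: 0.3, 2: 0.7, 3: 0.9}     # repayment probability per bin
GAIN, LOSS = 10.0, 15.0                # score change on repayment / default
U_REPAY, U_DEFAULT = 1.0, -0.9         # lender profit per repayment / default

DIST = {                               # population share per score bin
    "A": {1: 0.1, 2: 0.3, 3: 0.6},     # advantaged group
    "B": {1: 0.6, 2: 0.3, 3: 0.1},     # disadvantaged group
}

def score_change(s):
    """Expected credit-score change for an approved applicant in bin s."""
    p = P_REPAY[s]
    return GAIN * p - LOSS * (1 - p)

def lender_utility(s):
    """Lender's expected profit from approving an applicant in bin s."""
    p = P_REPAY[s]
    return U_REPAY * p + U_DEFAULT * (1 - p)

def approve_top(dist, rate):
    """Approve the highest-scoring `rate` fraction of a group."""
    approved, budget = {}, rate
    for s in SCORES:
        take = min(dist[s], budget)
        approved[s] = take
        budget -= take
    return approved

def group_outcomes(dist, rate):
    """(lender profit, mean score change) when approving the top `rate`."""
    approved = approve_top(dist, rate)
    profit = sum(a * lender_utility(s) for s, a in approved.items())
    impact = sum(a * score_change(s) for s, a in approved.items())
    return profit, impact

rates = [r / 100 for r in range(101)]

# Demographic parity: one approval rate shared by both groups,
# chosen to maximize total lender profit.
parity_rate = max(
    rates, key=lambda r: sum(group_outcomes(DIST[g], r)[0] for g in DIST))

# Unconstrained profit maximization: each group gets its own best rate.
best_rate = {
    g: max(rates, key=lambda r, g=g: group_outcomes(DIST[g], r)[0])
    for g in DIST}

for g in DIST:
    _, unconstrained = group_outcomes(DIST[g], best_rate[g])
    _, parity = group_outcomes(DIST[g], parity_rate)
    print(f"group {g}: impact {unconstrained:+.2f} unconstrained, "
          f"{parity:+.2f} under parity")
```

With these numbers the disadvantaged group's mean score change is positive under unconstrained lending but negative under the parity constraint, while the advantaged group is unaffected: the fairness constraint actively harms the group it was meant to protect, which is the qualitative point of the abstract.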

Attachment

Video Recording