Abstract

As machine learning algorithms are increasingly deployed for consequential decision-making (e.g., loan approvals, college admissions, probation decisions), individuals are strategically changing the data they feed to these algorithms in an effort to obtain better decisions for themselves. If the deployed algorithms do not take these incentives into account, they risk producing decisions that are incompatible with the policy's original goal. In this talk, I will give an overview of my work on Incentive-Aware Machine Learning for Decision Making, which studies the effects of strategic behavior both on institutions and on society as a whole, and proposes ways to make machine learning algorithms robust to strategic individuals. I will also look at the problem through a societal lens and discuss the tension that arises between making decision-making algorithms fully transparent and making them incentive-aware.

Video Recording