Abstract
Data-driven algorithms reach increasingly wide and deep into our lives, but the methods used to design and govern these algorithms are outdated. In this talk, we discuss two works: one on algorithm design and one on algorithm governance. The first work asks whether common assumptions used to design recommendation algorithms---such as those behind Yelp, Netflix, Facebook, and Grammarly---hold up in practice. In particular, recommendation platforms generally assume that users have fixed preferences and report them truthfully. In reality, however, users can adapt and strategize, and failing to acknowledge this agency can hurt both the user and the platform. In our work, we provide a game-theoretic perspective on recommendation and study the role of *trust* between a user and their platform. The second work studies exceptions in data-driven decision-making. Exceptions to a rule are decision subjects for whom the rule is unfit. Because averages are fundamental to machine learning, data-driven exceptions are inevitable. The problem is that data-driven exceptions arise non-intuitively, making it difficult to identify and protect individuals who, through no fault of their own, fall through the cracks. Our work lays out a framework for legally protecting individuals subject to high-risk, data-driven decisions.