The Fairness Cluster will bring together a variety of perspectives on defining and achieving fairness goals in automated decision-making systems. Such systems are commonly used for binary classification tasks — predicting recidivism, creditworthiness, hirability, etc., of individuals. Individual fairness notions demand that 'similar individuals' be treated similarly by the classification system. Group fairness notions seek to achieve some measure of statistical parity for protected groups vis-à-vis the general population. To overcome shortcomings in these definitions, intermediate notions of fairness such as multi-calibration and multi-metric fairness have been defined to protect all sufficiently large, computable groups. Much work needs to be done in fine-tuning and applying these definitions to new scenarios.
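To make the group-fairness notion concrete, here is a minimal, hypothetical sketch of checking statistical parity for a binary classifier: the rate of positive predictions should be (approximately) equal across groups. The data and function names are invented for illustration and are not part of the program description.

```python
def positive_rate(predictions, groups, group):
    """Fraction of individuals in `group` receiving a positive prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def statistical_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across all groups.

    A gap of 0 means exact statistical parity; larger values indicate
    that some group receives positive outcomes at a higher rate.
    """
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: binary predictions for individuals in groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_gap(preds, groups))  # group A rate 0.75, group B rate 0.25: gap 0.5
```

Approximate notions of group fairness typically require this gap to be below some tolerance rather than exactly zero.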
While we have a good understanding of fairness for a single binary classifier, real-world systems involve multiple classifiers acting on individuals in parallel (e.g., college admissions decisions and ad targeting) or in a pipeline (college admission followed by employment). Work on developing appropriate notions of fairness in these settings is in its infancy and will be advanced by this program.
In all these settings, we seek not only to design fair(er) decision procedures, but also to understand computational and informational limitations that prevent us from doing so. Such negative results tell us what assumptions about the model need to change in order to achieve fairness, and drive us to define approximate notions of fairness that can be achieved.
We will also view fairness through an economic lens, understanding the causes for rational agents to be unfair and the costs of incentivizing such agents to behave fairly.
Long-term visitors to the cluster will primarily be theoretical computer scientists who have been working on such questions. The cluster will include two workshops. The first will bring together scholars from the humanities, social sciences, law, and medicine to discuss phenomena of interest to their fields from the point of view of fairness, providing theoretical computer scientists with a rich source of important problems to think about. The second will be a more typical technical workshop, with presentations on fairness results by long-term visitors and participants invited specifically for the workshop.
To subscribe to the announcements email list for this program, send an email to sympa [at] lists [dot] simons [dot] berkeley [dot] edu with the body "subscribe fairness2019announcements@lists.simons.berkeley.edu".
Long-Term Participants (including Organizers):
Visiting Graduate Students and Postdocs:
Those interested in participating in this program should send an email to the organizers at fairness2019 [at] lists [dot] simons [dot] berkeley [dot] edu.