The Fairness Cluster will bring together a variety of perspectives on defining and achieving fairness goals in automated decision-making systems. Such systems are commonly used for binary classification tasks: predicting the recidivism, creditworthiness, hirability, etc., of individuals. Individual fairness notions demand that "similar individuals" be treated similarly by the classification system. Group fairness notions seek to achieve some measure of statistical parity for protected groups vis-à-vis the general population. To overcome shortcomings in these definitions, intermediate notions of fairness such as multi-calibration and multi-metric fairness have been defined to protect all sufficiently large, computable groups. Much work remains to be done in fine-tuning these definitions and applying them to new scenarios.
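As a rough illustration of the two basic notions above, the following sketch checks statistical parity for a group and a Lipschitz-style individual-fairness condition. All function names, the similarity measure, and the tolerance `eps` are hypothetical choices for this example, not definitions from the program.

```python
# Illustrative sketch of two fairness checks for a binary classifier.
# The names, data encoding, and tolerance are assumptions for this example.

def statistical_parity_gap(preds, group):
    """Difference in positive-prediction rates between a protected group
    (group[i] is True) and everyone else; 0.0 means exact parity."""
    pos_in = [p for p, g in zip(preds, group) if g]
    pos_out = [p for p, g in zip(preds, group) if not g]
    return abs(sum(pos_in) / len(pos_in) - sum(pos_out) / len(pos_out))

def individually_fair(scores, similarity, eps=0.1):
    """Individual fairness as a Lipschitz condition: individuals with
    similarity near 1 must receive scores within eps of each other."""
    n = len(scores)
    return all(
        abs(scores[i] - scores[j]) <= eps + (1 - similarity(i, j))
        for i in range(n) for j in range(n)
    )
```

For instance, predictions `[1, 0, 1, 0]` with group membership `[True, True, False, False]` give a parity gap of 0.0 (both groups have a 50% positive rate), while `[1, 1, 0, 0]` gives the maximal gap of 1.0.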
While we have a good understanding of fairness for one binary classifier, real-world systems involve multiple classifiers classifying individuals in parallel (college admissions, ads shown), or in a pipeline (college admission followed by employment). Work on developing appropriate notions of fairness in these settings is in its infancy and will be further developed by this program.
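A minimal sketch of the pipeline setting, under the assumption that an individual receives the final positive outcome only if every stage in sequence accepts them (as in admission followed by employment). The function name and data are hypothetical:

```python
# Hypothetical two-stage decision pipeline: each stage is a classifier,
# and only individuals accepted by every stage receive the final outcome.

def pipeline_positive_rate(stages, individuals):
    """Fraction of individuals accepted by every classifier in sequence."""
    accepted = [x for x in individuals if all(stage(x) for stage in stages)]
    return len(accepted) / len(individuals)
```

The point of such compositions is that fairness must be assessed on the end-to-end acceptance rates: guarantees that hold for each stage in isolation do not automatically carry over to the pipeline as a whole.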
In all these settings, we seek not only to design fair(er) decision procedures, but also to understand computational and informational limitations that prevent us from doing so. Such negative results tell us what assumptions about the model need to change in order to achieve fairness, and drive us to define approximate notions of fairness that can be achieved.
We will also view fairness through an economic lens, understanding the causes for rational agents to be unfair and the costs of incentivizing such agents to behave fairly.
Long-term visitors to the cluster will primarily be theoretical computer scientists who have been working on such questions. The cluster will include two workshops. The first will bring together scholars from the humanities, social sciences, law, and medicine to discuss phenomena of interest to their fields from the point of view of fairness. This will provide theoretical computer scientists with a rich source of important problems to think about. The second workshop will be more typical, with presentations of technical results on fairness by long-term visitors and people invited just for this workshop.
To subscribe to the announcements email list for this program, send an email to sympa [at] lists [dot] simons [dot] berkeley [dot] edu with the message body "subscribe fairness2019announcements@lists.simons.berkeley.edu".
Organizers: Cynthia Dwork (Harvard University), Sampath Kannan (University of Pennsylvania), Jamie Morgenstern (University of Pennsylvania)
List of participants (tentative list, including organizers):
Cynthia Dwork (Harvard University & Microsoft Research), Sorelle Friedler (Haverford College), Shafi Goldwasser (Simons Institute), Swati Gupta (Georgia Tech), Sampath Kannan (University of Pennsylvania), Aleksandra Korolova (University of Southern California), Katrina Ligett (Hebrew University of Jerusalem), Jamie Morgenstern (University of Pennsylvania), Toni Pitassi (University of Toronto), Omer Reingold (Stanford University), Guy Rothblum (Weizmann Institute), Nati Srebro Bartom (Toyota Technological Institute at Chicago), Rakesh Vohra (University of Pennsylvania), Steven Wu (University of Minnesota Twin Cities), Richard Zemel (University of Toronto), James Zou (Stanford University)
Visiting Graduate Students and Postdocs:
Yahav Bechavod (Hebrew University), Boriana Gjura (Harvard University), Cyrus Hettle (Georgia Institute of Technology), Lily Hu (Harvard University), Christina Ilvento (Harvard University), Christopher Jung (University of Pennsylvania), Michael Kim (Stanford University), Neil Lutz (University of Pennsylvania), Charlie Marx (Haverford College), Yonadav Shavit (Harvard University), Yuanyuan (Chloe) Yang (Georgia Institute of Technology), Gal Yona (Weizmann Institute)
Those interested in participating in this program should send an email to the organizers at fairness2019 [at] lists [dot] simons [dot] berkeley [dot] edu.