![Summer Cluster in Interpretable Machine Learning logo](/sites/default/files/styles/workshop_banner_sm_1x/public/2023-03/Interpretable%20Machine%20Learning-hi-res.jpg?h=6dcb57f1&itok=aGUi-aVS)
Note: The event time listed is set to Pacific Time.
In recent years, the algorithmic fairness literature has produced a proliferation of papers proposing technical definitions of algorithmic bias and methods to mitigate it. Whether these bias mitigation methods are permissible from a legal perspective is a complex but increasingly pressing question at a time when there are growing concerns that algorithmic decision-making may exacerbate societal inequities. In particular, there is a tension around the use of protected class variables: most algorithmic bias mitigation techniques utilize these variables or proxies for them, yet anti-discrimination doctrine has a strong preference for decisions that are blind to them. This talk will discuss the extent to which technical approaches to algorithmic bias are compatible with U.S. anti-discrimination law and will recommend a path toward greater compatibility by providing causal interpretations.
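To make the tension concrete, here is a minimal illustrative sketch (not taken from the talk): a common group-fairness metric, the demographic parity difference, compares positive-decision rates across groups, so it cannot even be computed without access to the protected attribute. All data and names below are hypothetical.

```python
# Illustrative sketch: measuring one common notion of algorithmic bias
# requires the protected attribute -- a decision process that is "blind"
# to group membership cannot audit itself for this kind of disparity.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between two groups.

    predictions: iterable of 0/1 model decisions
    groups: iterable of group labels (exactly two distinct values)
    """
    rate = {}
    for g in set(groups):
        members = [p for p, gi in zip(predictions, groups) if gi == g]
        rate[g] = sum(members) / len(members)
    a, b = rate.values()
    return abs(a - b)

# Hypothetical toy data: 1 = favorable decision, groups "a" and "b".
preds = [1, 1, 0, 1, 0, 0, 0, 1]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.5 (0.75 vs 0.25)
```

Mitigation techniques that equalize such rates must likewise consult the group labels (or proxies for them) during training or post-processing, which is precisely where the doctrinal preference for blindness creates friction.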