Artificially intelligent systems extrapolate from historical training data. While the training process is robust to “noisy” data, systematically biased data will inexorably lead to biased systems. The emerging field of algorithmic fairness seeks interventions to blunt the downstream effects of data bias. Initial work has focused on classification and prediction algorithms.
This cross-cutting workshop will examine the sources and nature of racial bias in a range of settings, such as genomics, medicine, credit scoring, bail and parole decisions, and automated surveillance. We will survey the state-of-the-art algorithmic fairness literature and lay a more comprehensive intellectual foundation for advancing the field.
All events take place in the Calvin Lab auditorium.
Further details about this workshop will be posted in due course. Enquiries may be sent to simonsevents [at] berkeley [dot] edu (subject: FAIR19-1 Web Inquiry).
Registration is required to attend this workshop. Space may be limited, and you are advised to register early. A link to the registration form will appear on this page approximately 10 weeks before the workshop. To submit your name for consideration, please register and await confirmation of your acceptance before booking travel.