Algorithms and Race | Polylogues

In this time of reckoning with systemic injustice and racism, we are sharing an interview from our archive on the topic of algorithms and race.

In this episode of Simons Institute Polylogues, Deirdre Mulligan (UC Berkeley) interviewed legal scholar Patricia Williams (Columbia Law School) and computer scientist Cynthia Dwork (Harvard University), organizers of Wrong at the Root, a workshop on racial bias in algorithms held at the Simons Institute in June 2019.

The workshop examined the sources and nature of racial bias in a range of settings, such as genomics, medicine, credit systems, bail and probation calculations, and automated surveillance. 

Artificially intelligent systems extrapolate from historical training data. Systematically biased data will inevitably lead to biased systems. The emerging field of algorithmic fairness seeks interventions to blunt the downstream effects of data bias. Initial work has focused on classification and prediction algorithms. This crosscutting workshop surveyed state-of-the-art algorithmic literature and laid a more comprehensive intellectual foundation for advancing algorithmic fairness.
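As a minimal, purely illustrative sketch of how biased training data propagates into a trained model (our own synthetic example, not material from the episode or workshop; all variable names and numbers are hypothetical), consider historical labels that were assigned with a group-dependent penalty. A standard classifier fit to those labels reproduces the same disparity in its own decisions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical synthetic data: a protected attribute, a legitimate feature,
# and historical labels assigned with a group-dependent penalty.
group = rng.integers(0, 2, n)        # 0 = majority group, 1 = protected group
skill = rng.normal(0, 1, n)          # legitimate signal
historical_label = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, historical_label)
pred = model.predict(X)

# The trained model reproduces the disparity baked into its training labels.
for g in (0, 1):
    print(f"group {g}: positive-decision rate = {pred[group == g].mean():.2f}")
```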

The workshop was structured around seven panel discussions on the following topics: genomics, medicine, epidemiology, and data interpretation; science vs. pseudoscience; policies, politics, publics; carcerality; governance; causality; and individuals vs. groups and the value of rich data. 

Wrong at the Root was a component of the Simons Institute’s Summer 2019 research cluster on algorithmic fairness. The Fairness cluster brought together a variety of perspectives on defining and achieving fairness goals in automated decision-making systems that predict individuals’ recidivism, creditworthiness, hireability, and the like.

Machine learning and data analysis have enjoyed tremendous success in a broad range of domains. These advances hold the promise of great benefits to individuals, organizations, and society as a whole. Undeniably, algorithms are informing decisions that affect all aspects of life, from news article recommendations to criminal sentencing decisions to health care diagnostics. This progress, however, raises (and is impeded by) a host of concerns regarding the societal impact of computation. 

The Fairness cluster addressed the fundamental question of algorithmic fairness: do algorithms discriminate, or do they make more equitable decisions than humans? Algorithms can propagate, and possibly even amplify, existing biases unless measures are taken to prevent this. Addressing discrimination by algorithms (as well as other societal concerns) is not only mandated by law and ethics but also essential to maintaining public trust in the computation-driven revolution we are experiencing.

The multidisciplinary study of fairness is not new: philosophers, legal experts, economists, statisticians, social scientists, and others have been concerned with fairness for generations. Nevertheless, the scale of decision-making in the age of big data, as well as the computational complexities of algorithmic decision-making, implies that computer scientists must take an active part in this research endeavor. Indeed, computer scientists are rising to the challenge, as manifested by an explosion of research in recent years. Many of the approaches discussed in the cluster, while influenced by other disciplines, rely on insights from computer science. 

Models and definitions for algorithmic fairness have been the topic of much research in recent years and were central to the cluster’s discussions. For example, researchers in the cluster addressed notions of fairness pertaining to large populations in comparison with individual notions, as well as promising definitions that lie in between. Individual fairness notions demand that similar individuals be treated similarly by the classification system. Group fairness notions seek to achieve some measure of statistical parity for protected groups vis-à-vis the general population. To overcome shortcomings in these definitions, intermediate notions such as multicalibration and multimetric fairness have been defined to protect all sufficiently large, computable groups. Much work remains in fine-tuning these definitions and applying them to new scenarios. The cluster also viewed fairness through an economic lens, examining why rational agents may behave unfairly and what it costs to incentivize them to behave fairly.
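To give a rough sense of how these definitions translate into measurable quantities, the sketch below (our own simplification, not code from the cluster) computes a statistical parity gap between two groups and a per-group calibration gap for a binary classifier's scores. Multicalibration strengthens the latter by requiring it for all sufficiently large, efficiently identifiable subgroups, not just those explicitly listed; individual fairness would additionally require a task-specific similarity metric between individuals.

```python
import numpy as np

def statistical_parity_difference(pred, group):
    """Group fairness: gap in positive-decision rates between groups 0 and 1."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def calibration_gap_by_group(scores, labels, group, bins=10):
    """Per-group calibration: within each score bin, compare the mean predicted
    score with the observed positive rate, separately for every group."""
    edges = np.linspace(0, 1, bins + 1)
    gaps = {}
    for g in np.unique(group):
        mask_g = group == g
        bin_gaps = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = mask_g & (scores >= lo) & (scores < hi)
            if in_bin.sum() > 0:
                bin_gaps.append(abs(scores[in_bin].mean() - labels[in_bin].mean()))
        gaps[g] = max(bin_gaps) if bin_gaps else 0.0
    return gaps
```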
