Description

As AI systems are integrated into high-stakes social domains such as transportation and healthcare, researchers now examine how to design and operate them in a safe and ethical manner. However, the harms these systems cause to stakeholders in complex social contexts, and how to address them, remain contested. In this talk, we examine the normative indeterminacy in debates about the safety and ethical behavior of AI systems. We show that dealing with indeterminacy across diverse stakeholders cannot be captured by mathematical uncertainty alone; it instead requires acknowledging the politics of development as well as the context of deployment. Drawing on two case studies, we formulate normative indeterminacy in terms of sociotechnical challenges, captured in four key dilemmas arising in the problematization, featurization, optimization, and integration stages of AI system development. The resulting framework of Hard Choices in Artificial Intelligence (HCAI) empowers developers to navigate these dilemmas by 1) cultivating distinct forms of sociotechnical judgment that correspond to each stage; and 2) securing mechanisms for dissent that ensure safety issues are exhaustively addressed by providing stakeholders with continuous channels to advocate for their concerns.

If you require accommodation for communication, please contact our Access Coordinator at simonsevents [at] berkeley.edu with as much advance notice as possible.

