Sina Fazelpour (Carnegie Mellon University)
Note: The event time is listed in Pacific Time.
The increasing use of predictive algorithms in consequential decision-making has made it imperative to carefully evaluate their impact and to structure appropriate regulatory responses. However, the standard approach to evaluating AI-assisted decisions has critical shortcomings, as it fails to account for the functional role of the algorithm as a tool for decision support and the dynamics of deployment. We argue that reliable evaluation of AI-assisted decisions requires broadening the unit of analysis from the algorithmic tool in abstraction to the socio-technological system in which the tool is embedded. Adopting this broader perspective, we propose four guidelines for evaluating AI decision support tools in a way that better accounts for the nature of the deployment of these systems, offering an agenda for a multi-disciplinary research programme.
To register for this event and receive the Zoom link, please email the organizers Shai Ben-David (bendavid.shai [at] gmail.com) or Ruth Urner (ruth.urner [at] gmail.com) with the subject line "Inquiry to register for Interpretable Machine Learning event June 29, 2020".