Abstract

In many applications of societal concern, algorithmic outputs are not decisions but predictive recommendations provided to humans who ultimately make the decision. However, current technical and ethical analyses of such algorithmic systems tend to treat them as autonomous entities. In this talk, I draw on work in group dynamics and decision-making to show how viewing algorithmic systems as part of human-AI teams can change our understanding of key characteristics of these systems—with a focus on accuracy and interpretability—and of any potential trade-off between them. I will discuss how this change of perspective (i) can guide the development of functional and behavioral measures for evaluating the success of interpretability methods and (ii) can challenge existing ethical and policy proposals about the relative value of interpretability.