Abstract

Judgments from humans and one or more artificial agents are increasingly combined to make decisions, with the expectation that the team will outperform any individual agent. However, simple approaches to providing human decision-makers with AI support, and to evaluating team performance, can produce apparent failures even when the agents are thought to possess complementary information. I'll discuss measurement frameworks we've developed that apply statistical decision theory and information economics to questions at the human-agent interface, including how to evaluate whether a decision-maker relies appropriately on model predictions, when a human or AI agent could better exploit available contextual information, and how to evaluate (and design) explanations in principled ways.
