About

This cluster will convene an interdisciplinary group of scholars to develop firm theoretical and philosophical foundations for addressing major open issues concerning the interpretability of machine learning-based models. Program participants include experts in theoretical computer science, machine learning, statistics, causal inference, and fairness, as well as members of the current research community on interpretability. We aim to address the following fundamental questions about how best to use machine learning for real-life tasks:

  1. What notions of interpretability might yield models that are more amenable to monitoring by regulators?
  2. How can the quality and usefulness of an interpretation in a given context (such as for a particular human audience or a particular domain problem) be evaluated, both empirically and theoretically?
  3. For which of the desiderata that interpretability purports to address must we sacrifice predictive accuracy?
  4. Are there any feasibly measurable properties of neural networks that can yield significant insights into their input-output functionality? More generally, are there sound theoretical principles under which today’s deep learning tools can be leveraged to confer insights beyond their predictive accuracy?
  5. What role, if any, do various interpretation or explanation techniques have to offer in the discourse on algorithmic fairness and discrimination? Are there any inherent trade-offs between notions of interpretability and fairness?

The cluster will consider a variety of perspectives on defining interpretability and on developing tools for achieving these goals in automated decision-making systems.

Organizers

Ruth Urner (Max Planck Institute for Intelligent Systems, Tuebingen)

Long-Term Participants (including Organizers)

Ruth Urner (Max Planck Institute for Intelligent Systems, Tuebingen; Google Research Fellow)
Nati Srebro (Toyota Technological Institute at Chicago)

Visiting Graduate Students and Postdocs