Playlist: 16 videos

Interpretable Machine Learning in Natural and Social Sciences

0:45:30
David Danks (University of California, San Diego)
https://simons.berkeley.edu/talks/goals-and-interpretable-variables-neuroscience

Modern cognitive neuroscience often requires us to identify causal "objects" (perhaps spatial aggregates, perhaps more complex dynamic objects) that can function in our neuroscientific theories. Moreover, we often hope or require that these "objects" be neuroscientifically understandable (or plausible). Of course, the brain does not come neatly segmented or packaged into appropriate aggregates or objects; rather, these objects are themselves the product of scientific work, and which objects we get depends on the goals that we have. I will argue that different goals map onto different learning criteria, which then map onto different extant methods in cognitive neuroscience. The philosophical and technical challenge is that these different methods can yield incompatible outputs, particularly if we require interpretability, and so we seem to be led towards a problematic pluralism. I will conclude by considering several ways to try to avoid problematic inconsistencies and conflicts between our theories.
0:30:55
Sina Fazelpour (Northeastern University)
https://simons.berkeley.edu/talks/tbd-427

In many applications of societal concern, algorithmic outputs are not decisions but only predictive recommendations provided to humans who ultimately make the decision. However, current technical and ethical analyses of such algorithmic systems tend to treat them as autonomous entities. In this talk, I draw on work in group dynamics and decision-making to show how viewing algorithmic systems as part of human-AI teams can change our understanding of key characteristics of these systems (with a focus on accuracy and interpretability) and of any potential trade-off between them. I will discuss how this change of perspective can (i) guide the development of functional and behavioral measures for evaluating the success of interpretability methods and (ii) challenge existing ethical and policy proposals about the relative value of interpretability.
1:06:25
David Danks (University of California, San Diego) and Sina Fazelpour (Northeastern University)
https://simons.berkeley.edu/talks/panel-societal-dimensions-explanation
0:38:40
Alex D'Amour (Google Brain)
https://simons.berkeley.edu/talks/conceptual-challenges-connecting-interpretability-and-causality

There has been a strong intuition in the machine learning community that interpretability and causality ought to be closely connected. However, the community has not arrived at a consensus about how to formalize this connection. In this talk, I will raise questions about conceptual and technical ambiguities that I think make the connection hard to specify. The goal of the talk is to raise points for discussion, expressed in causal formalism, rather than to provide answers.
0:38:32
Joe Halpern (Cornell University)
https://simons.berkeley.edu/talks/explanation-abridged-survey

I consider a definition of (causal) explanation that is a variant of one Judea Pearl and I gave. The definition is based on the notion of actual cause. Essentially, an explanation is a fact that is not known for certain but, if found to be true, would constitute an actual cause of the fact to be explained, regardless of the agent's initial uncertainty. I show that the definition handles well a number of problematic examples from the literature, and discuss various notions of partial explanation.
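The notion of actual cause that this definition of explanation rests on can be made concrete with a small amount of code. The sketch below is my illustration, not code from the talk: it brute-forces the "modified" Halpern-Pearl conditions for a single candidate cause in the standard rock-throwing preemption example, and the structural equations, variable names, and helper functions are all assumptions made for the example.

```python
# Illustrative sketch only (not code from the talk): a brute-force check of the
# "modified" Halpern-Pearl conditions for a single candidate cause X = x.
# Model: the standard preemption example, where Suzy and Billy both throw rocks
# at a bottle and Suzy's rock arrives first. All names below are assumptions.
from itertools import chain, combinations

# Structural equations in topological order; each maps already-computed values
# to the variable's value.
EQUATIONS = {
    "ST": lambda v: 1,                        # Suzy throws
    "BT": lambda v: 1,                        # Billy throws
    "SH": lambda v: v["ST"],                  # Suzy's rock hits
    "BH": lambda v: v["BT"] * (1 - v["SH"]),  # Billy's rock hits only if Suzy's missed
    "BS": lambda v: max(v["SH"], v["BH"]),    # bottle shatters
}

def evaluate(interventions=None):
    """Solve the model, overriding equations with any interventions."""
    interventions = interventions or {}
    values = {}
    for var, eq in EQUATIONS.items():
        values[var] = interventions.get(var, eq(values))
    return values

def is_actual_cause(x, x_val, phi):
    """Modified HP check for the single conjunct X = x_val and outcome phi."""
    actual = evaluate()
    # AC1: X = x_val and phi both hold in the actual world.
    if actual[x] != x_val or not phi(actual):
        return False
    # AC2(m): some set W, frozen at its *actual* values, together with a change
    # to X, makes phi false. (AC3, minimality, is trivial for a single variable.)
    others = [v for v in EQUATIONS if v != x]
    for w_set in chain.from_iterable(combinations(others, r) for r in range(len(others) + 1)):
        for x_alt in (0, 1):
            if x_alt == x_val:
                continue
            forced = {w: actual[w] for w in w_set}
            forced[x] = x_alt
            if not phi(evaluate(forced)):
                return True  # w_set is a witness contingency
    return False

shatters = lambda v: v["BS"] == 1
print(is_actual_cause("ST", 1, shatters))  # True: freezing BH at 0 is a witness
print(is_actual_cause("BT", 1, shatters))  # False: Billy's preempted throw is not a cause
```

In this toy model Suzy's throw comes out as an actual cause of the shattering because freezing Billy's hit at its actual value of 0 provides the required witness, while Billy's preempted throw does not; this is the kind of contingency reasoning the definition of explanation inherits.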
0:49:21
Alex D'Amour (Google Brain) and Joe Halpern (Cornell University)
https://simons.berkeley.edu/talks/panel-causality
0:30:46
Michele Ceriotti (Swiss Federal Institute of Technology in Lausanne) [REMOTE]
https://simons.berkeley.edu/talks/interpretability-atomic-scale-machine-learning-0

I will provide a brief overview of some of the established frameworks used to apply machine-learning techniques to the atomistic modeling of matter, and in particular to the construction of surrogate models for quantum mechanical calculations. I will focus on the construction of physics-aware descriptors of the atomic structure, based on symmetrized correlations of the atom density, and on how they facilitate the interpretation of the regression and classification models built on them.
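For a concrete picture of the idea, the sketch below (my illustration, not the frameworks discussed in the talk) builds the simplest kind of symmetrized atom-density correlation, a Gaussian-smeared pair-distance histogram that is invariant to rotations, translations, and permutations of identical atoms, and fits a linear ridge-regression surrogate to a toy pairwise energy standing in for quantum mechanical data; the grid, smearing width, and toy target are all assumed.

```python
# Illustrative sketch only (not the frameworks from the talk): a 2-body symmetrized
# correlation of the atom density, i.e. a Gaussian-smeared pair-distance histogram,
# used as input to a linear ridge-regression surrogate for a toy pairwise energy.
# Grid, smearing width, cluster sizes, and the toy target are all assumptions.
import numpy as np

def radial_descriptor(positions, r_grid=np.linspace(0.5, 5.0, 24), sigma=0.3):
    """Rotation-, translation-, and permutation-invariant descriptor of one structure."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    dists = dists[np.triu_indices(len(positions), k=1)]   # unique interatomic pairs
    # Sum of Gaussians centred on each pair distance, evaluated on a radial grid.
    feats = np.exp(-((r_grid[None, :] - dists[:, None]) ** 2) / (2 * sigma ** 2)).sum(axis=0)
    return feats / len(positions)                          # size-extensive scaling

def toy_energy(positions):
    """Lennard-Jones-like pair energy standing in for a quantum mechanical target."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    d = np.clip(d[np.triu_indices(len(positions), k=1)], 0.8, None)  # avoid overlaps
    return np.sum((1.0 / d) ** 12 - 2.0 * (1.0 / d) ** 6)

# Toy data set: random 8-atom clusters.
rng = np.random.default_rng(0)
structures = [rng.uniform(0.0, 4.0, size=(8, 3)) for _ in range(200)]
X = np.array([radial_descriptor(s) for s in structures])
y = np.array([toy_energy(s) for s in structures])

# Closed-form ridge regression on the descriptors.
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
print("train RMSE:", np.sqrt(np.mean((X @ w - y) ** 2)))
```

Because each fitted weight multiplies a single radial channel, the surrogate can be read directly as an effective pair potential, which is the kind of interpretability a physics-aware construction is meant to provide.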
0:31:25
David Limmer (University of California, Berkeley)
https://simons.berkeley.edu/talks/tbd-433
0:56:52
Michele Ceriotti (Swiss Federal Institute of Technology in Lausanne) [REMOTE] and David Limmer (University of California, Berkeley)
https://simons.berkeley.edu/talks/panel-interpretability-physical-sciences
0:38:01
Anshul Kundaje (Stanford University) [REMOTE]
https://simons.berkeley.edu/talks/interpreting-deep-learning-models-functional-genomics-data-decode-regulatory-sequence-syntax

The human genome sequence contains the fundamental code that defines the identity and function of all the cell types and tissues in the human body. Genes are functional sequence units that encode proteins, but they account for only about 2% of the 3-billion-base human genome sequence. What does the rest of the genome encode? How is gene activity controlled in each cell type? Where do the gene-control units lie in the genome, and what is their sequence code? How do variants and mutations in the genome sequence affect cellular function and disease? Regulatory instructions for controlling gene activity are encoded in the DNA sequence of millions of cell-type-specific regulatory DNA elements in the form of functional sequence syntax. This regulatory code has remained largely elusive despite exciting developments in experimental techniques for profiling the molecular properties of regulatory DNA. To address this challenge, we have developed high-performance neural networks that learn de novo representations of regulatory DNA sequence to map genome-wide molecular profiles of protein-DNA interactions and biochemical activity at single-base resolution across thousands of cellular contexts, while accounting for experimental biases. We have developed methods to interpret DNA sequences through the lens of these models and to extract local and global predictive syntactic patterns, revealing many causal insights into the regulatory code. Our models also serve as in silico oracles that predict the effects of natural and disease-associated genetic variation, i.e., how differences in DNA sequence across healthy and diseased individuals are likely to affect the molecular mechanisms associated with common and rare diseases. Our predictive models thus serve as an interpretable lens for genomic discovery.
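One interpretation strategy of the kind mentioned above, using a trained sequence-to-activity model as an in-silico oracle for variant effects, is easy to sketch. In the toy example below (my illustration, not the group's actual models), a position-weight-matrix scan stands in for the neural network, and every possible single-base substitution is scored by the change in predicted activity (in-silico mutagenesis); the motif, sequence, and scoring details are assumptions.

```python
# Illustrative sketch only: in-silico mutagenesis with a toy stand-in for a trained
# sequence model. The "model" is a crude log-odds scan for a CACGTG (E-box-like)
# motif; the motif weights, example sequence, and scoring are all assumptions.
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """L x 4 one-hot encoding of a DNA sequence."""
    return np.array([[1.0 if b == base else 0.0 for base in BASES] for b in seq])

# Toy stand-in for a trained model: best-matching window score for the motif,
# treated as the predicted regulatory activity of the sequence.
MOTIF = one_hot("CACGTG") * 2.0 - 0.5   # +1.5 for a match, -0.5 for a mismatch

def predict(seq):
    x = one_hot(seq)
    k = MOTIF.shape[0]
    scores = [np.sum(x[i:i + k] * MOTIF) for i in range(len(seq) - k + 1)]
    return max(scores)

def in_silico_mutagenesis(seq):
    """Effect of every single-base substitution: predicted activity minus reference."""
    ref = predict(seq)
    effects = np.zeros((len(seq), len(BASES)))
    for i, ref_base in enumerate(seq):
        for j, alt in enumerate(BASES):
            if alt == ref_base:
                continue
            mutant = seq[:i] + alt + seq[i + 1:]
            effects[i, j] = predict(mutant) - ref
    return effects

seq = "TTGACACGTGTTCA"          # contains one copy of the toy motif
effects = in_silico_mutagenesis(seq)
# Positions where substitutions most reduce the prediction mark the putative syntax:
print(np.round(effects.min(axis=1), 2))
```

Substitutions that sharply reduce the prediction cluster at the embedded motif, which is how this kind of readout surfaces candidate regulatory syntax and, with a real model, candidate variant effects.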