Playlist: 16 videos

Interpretable Machine Learning in Natural and Social Sciences

0:44:46
Jack Gallant (University of California, Berkeley)
https://simons.berkeley.edu/talks/tbd-435
Interpretable Machine Learning in Natural and Social Sciences

The mammalian brain is an extremely complicated, dynamical deep network. Systems, cognitive and computational neuroscientists seek to understand how information is represented throughout this network, and how these representations are modulated by attention and learning. Machine learning provides many tools useful for analyzing brain data recorded in neuroimaging, neurophysiology and optical imaging experiments. For example, deep neural networks trained to perform complex tasks can be used as a source of features for data analysis, or they can be trained directly to model complex data sets. Although artificial deep networks can produce complex models that accurately predict brain responses under complex conditions, the resulting models are notoriously difficult to interpret. This limits the utility of deep networks for neuroscience, where interpretation is often prized over absolute prediction accuracy. In this talk I will review two approaches that can be used to maximize interpretability of artificial deep networks and other machine learning tools when applied to brain data. The first approach is to use deep networks as a source of features for regression-based modeling. The second is to use deep learning infrastructure to construct sophisticated computational models of brain data. Both these approaches provide a means to produce high-dimensional quantitative models of brain data recorded under complex naturalistic conditions, while maximizing interpretability.
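As a rough illustration of the first approach mentioned above (deep networks as a source of features for regression-based modeling), the sketch below extracts activations from an off-the-shelf vision network and fits a ridge regression encoding model to predict measured responses. The backbone, preprocessing, and penalty grid are illustrative assumptions, not the pipeline used in the talk.

```python
# Minimal sketch (not from the talk): a feature-based encoding model in which
# activations from a pretrained vision network serve as regressors for brain
# responses. All names, shapes, and the ridge penalty grid are illustrative.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import RidgeCV

# 1) Use an off-the-shelf deep network purely as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def dnn_features(images):
    """images: list of PIL images shown during the experiment."""
    with torch.no_grad():
        batch = torch.stack([preprocess(im) for im in images])
        return extractor(batch).flatten(1).numpy()   # (n_stimuli, n_features)

# 2) Fit a regularized linear mapping from features to measured responses.
#    X: (n_stimuli, n_features) DNN features; Y: (n_stimuli, n_voxels) responses.
def fit_encoding_model(X, Y, alphas=np.logspace(0, 5, 11)):
    model = RidgeCV(alphas=alphas)    # one shared penalty, for brevity
    model.fit(X, Y)
    return model                      # model.coef_ holds per-voxel feature weights
```

The interpretability payoff in this style of analysis is that the fitted object is a set of per-feature regression weights for each voxel, which can be examined directly, rather than the deep network itself.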
0:42:40
Debbie Marks (Harvard University), Anshul Kundaje (Stanford University) [REMOTE], and Jack Gallant (University of California, Berkeley)
https://simons.berkeley.edu/talks/panel-interpretability-biological-sciences
Interpretable Machine Learning in Natural and Social Sciences
0:33:40
Alice Xiang (Sony AI) [REMOTE]
https://simons.berkeley.edu/talks/tbd-437
Interpretable Machine Learning in Natural and Social Sciences

Interpretability is a key component of many dimensions of building more trustworthy ML systems. In this talk, I will focus on a couple of intersections between interpretability and algorithmic fairness. First, I will discuss some of the promises and challenges of causality for diagnosing sources of algorithmic bias. In particular, defining the nature and timing of interventions on immutable characteristics is highly important for appropriate causal inference but can create challenges in practice given data limitations. Second, I will discuss the strategy of collecting more diverse datasets to alleviate biases in computer vision models. Defining and measuring diversity of human appearance remains a significant challenge, especially given privacy concerns around sensitive attribute labels. To address this, I will present a method for learning interpretable dimensions of human diversity from unlabeled datasets.
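The abstract does not spell out the method for learning interpretable dimensions from unlabeled data, so the sketch below is a generic stand-in: PCA over embeddings from any pretrained encoder, plus a crude entropy score for how evenly a dataset covers each learned dimension. All names and design choices here are assumptions, not the method presented in the talk.

```python
# Generic stand-in (not the method from the talk): derive unsupervised
# "diversity dimensions" from unlabeled person-image embeddings with PCA,
# then score a dataset by how evenly it covers each dimension.
import numpy as np
from sklearn.decomposition import PCA

def diversity_dimensions(embeddings, n_dims=8):
    """embeddings: (n_images, d) features from any pretrained encoder."""
    pca = PCA(n_components=n_dims)
    scores = pca.fit_transform(embeddings)   # per-image coordinates on each dimension
    return pca, scores

def coverage_entropy(scores, bins=10):
    """Higher entropy along a dimension ~ more even coverage of that dimension."""
    entropies = []
    for dim in scores.T:
        hist, _ = np.histogram(dim, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropies.append(-(p * np.log(p)).sum())
    return np.array(entropies)               # one entropy value per dimension
```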
0:34:06
Rebecca Wexler (Berkeley Center for Law & Technology) [REMOTE]
https://simons.berkeley.edu/talks/legal-barriers-interpretable-machine-learning
Interpretable Machine Learning in Natural and Social Sciences

Making machine learning systems interpretable can require access to information and resources. Whether that means access to data, to models, to executable programs, to research licenses, to validation studies, or more, various legal doctrines can sometimes get in the way. This talk will explain how intellectual property laws, privacy laws, and contract laws can block the access needed to implement interpretable machine learning, and will suggest avenues for reform to minimize these legal barriers.
0:32:15
Aleksandra Korolova (University of Southern California) [REMOTE]
https://simons.berkeley.edu/talks/panel-interpretability-physical-sciences
Interpretable Machine Learning in Natural and Social Sciences

Relevance estimators are algorithms used by social media platforms to determine what content is shown to users and its presentation order. These algorithms aim to personalize the platform's experience for users, increasing engagement and, therefore, platform revenue. However, there are concerns that relevance estimation and personalization algorithms are opaque and can produce outcomes that are harmful to individuals or society. Legislation has been proposed in both the U.S. and the E.U. that would mandate auditing of social media algorithms by external researchers. But auditing at scale risks disclosing users' private data and platforms' proprietary algorithms, and thus far there has been no concrete technical proposal that can provide such auditing. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
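One generic pattern that platform-supported auditing could follow (purely illustrative, not the proposal in the talk) is an audit endpoint that evaluates an auditor's aggregate query inside the platform and returns only a noised statistic, so that raw user data and the proprietary relevance estimator never leave the platform. The query form, `relevance_estimator`, and the noise scale below are all assumptions.

```python
# Illustrative platform-side audit endpoint (a generic sketch, not the talk's design):
# the auditor supplies an aggregate predicate; the platform evaluates it on its own
# data and returns a noised count, keeping users' records and the model internal.
import random

def platform_audit_query(relevance_estimator, user_records, content_items,
                         predicate, epsilon=1.0):
    """Noised count of (user, item) pairs whose relevance score satisfies the
    auditor's predicate, e.g. 'score > 0.9 for a given item'."""
    exact = 0
    for user in user_records:                 # data stays inside the platform
        for item in content_items:
            score = relevance_estimator(user, item)
            if predicate(user, item, score):
                exact += 1
    # Laplace-style noise (difference of two exponentials) gives a rough
    # privacy cushion for individual users; calibrating it correctly to the
    # query's sensitivity is the hard part and is omitted here.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return exact + noise
```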
1:06:16
Alice Xiang (Sony AI), Aleksandra Korolova (University of Southern California), and Rebecca Wexler (Berkeley Center for Law & Technology) [REMOTE]
https://simons.berkeley.edu/talks/panel-interpretability-law
Interpretable Machine Learning in Natural and Social Sciences