Description

 

What does it mean to “understand” a phenomenon, a domain, or a situation?

Machine learning systems have long been labeled “opaque,” “black boxes,” or plain “dumb” for not “understanding” the purpose or the implications of their predictions. But what does it take to qualify as an “understander,” and what computational capacities are needed to meet these requirements?

In this talk, Judea Pearl will propose a formal definition of “understanding” as the capacity to answer questions of three types: predictions, actions, and imagination. He will describe a computational model, a language, and a calculus that facilitate reasoning at these three levels, and demonstrate how features normally associated with understanding follow from this model. These include generating explanations, generalizing across domains, integrating data from several sources, recovering from missing data, and more. Pearl will conclude by describing future horizons, including automated scientific exploration, personalized decision-making, and social intelligence.
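As a rough illustration of these three levels (a sketch drawn from Pearl’s published “ladder of causation,” not from the talk itself), the three types of questions can be written as three kinds of formal queries, where X stands for a candidate cause and Y for an outcome:

Prediction (association): P(y | x), the probability of Y = y given that X = x was observed.
Action (intervention): P(y | do(x)), the probability of Y = y if X is made to take the value x.
Imagination (counterfactual): P(y_x | x′, y′), the probability that Y would have been y had X been set to x, given that X = x′ and Y = y′ were actually observed.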

Judea Pearl is Chancellor’s Professor of computer science and statistics at UCLA, where he directs the Cognitive Systems Laboratory and conducts research in artificial intelligence, human cognition, and philosophy of science.

He has authored three fundamental books, Heuristics (1984), Probabilistic Reasoning in Intelligent Systems (1988), and Causality (2000, 2009), which won the London School of Economics’ Lakatos Award for 2001. More recently, he co-authored Causal Inference in Statistics (2016, with M. Glymour and N. Jewell) and The Book of Why (2018, with Dana Mackenzie), which brings causal analysis to a general audience.

Pearl is a member of the National Academy of Sciences and the National Academy of Engineering and is a fellow of the Cognitive Science Society, the Royal Statistical Society, and the Association for the Advancement of Artificial Intelligence. In 2011, he won the Technion’s Harvey Prize and the ACM’s A.M. Turing Award “for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.”

In 2022 he won a BBVA Foundation Frontiers of Knowledge Award for “laying the foundations of modern artificial intelligence, so computer systems can process uncertainty and relate causes to effects.”

 

If you require accommodation for communication, please contact our Access Coordinator at simonsevents [at] berkeley.edu with as much advance notice as possible.

This event will be held in person and virtually. If you plan to participate fully on Zoom, please click [here].

Please read on for important logistical information if you plan to register to attend the event in person at Calvin Lab.

Proof of Vaccination
Given current public health directives from state, local, and university authorities, all participants in Simons Institute events must be prepared to show proof of full vaccination: a vaccination card (or a photo of the card) along with a valid photo ID, or a green or blue Campus Access Badge in the UC Berkeley Mobile app (additional details regarding proof of vaccination can be found here).

Masks
Masks are strongly encouraged for all participants. The latest masking requirements on campus can be found here.

Refreshments
Light refreshments will be provided before the lecture. Due to current health conditions, refreshments will be set up just outside the building; signs will direct you. Please note that no food or drink is allowed in the auditorium. Thank you for helping us keep the auditorium clean.

Please note: the Simons Institute regularly captures photos and video of activity around the Institute for use in videos, publications, and promotional materials. 
