In many areas of machine learning, theory and practice have diverged dramatically; there is very little theory to guide our understanding of much of modern AI. Theory is useful in guiding practice when it can help explain or predict empirical performance, but even in the increasingly rare settings where we have substantial theory to accompany practice, the theory often makes pessimistic predictions despite demonstrated empirical success.
Some of what’s likely called for is revolutionary new theory. In this talk, however, Katrina Ligett will explore a more conservative idea: that we sometimes approach the theory of learning in a way that leaves money on the table. In particular, she will suggest that there is room for reformulating theoretical questions in machine learning in new ways that can help close the gaps between theory and practice, and that these reformulations can suggest new mathematical challenges that are interesting in their own right.
Machine learning theory often considers worst-case computations on worst-case data distributions, but empirically we may care more about the performance of a particular computation on a particular dataset that may enjoy favorable structure. One approach to leaving less money on the table involves taking advantage of the properties of our algorithms that lead to their success; another approach considers instance-specific (as opposed to worst-case) analysis. Ligett will illustrate these ideas with a few examples from her own work on generalization and on privacy in machine learning, in the hopes of surfacing perspectives that may help us all chip away at the gaps between theory and practice.
Katrina Ligett is a full professor in the School of Computer Science and Engineering at Hebrew University, where she is also the director of the interdisciplinary Federmann Center for the Study of Rationality. Before joining Hebrew University, she was a faculty member in computer science and economics at Caltech. Ligett’s primary research interests are in data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory. She received her PhD in computer science from Carnegie Mellon University in 2009 and did her postdoc at Cornell University. She is a recipient of an NSF CAREER award, a Microsoft Research Faculty Fellowship, and an ERC grant.
Refreshments will be served at 3 p.m., before the event.
The Richard M. Karp Distinguished Lectures were created in Fall 2019 to celebrate the role of Simons Institute Founding Director Dick Karp in establishing the field of theoretical computer science, formulating its central problems, and contributing stunning results in the areas of computational complexity and algorithms. Formerly known as the Simons Institute Open Lectures, the series features visionary leaders in the field of theoretical computer science and is geared toward a broad scientific audience.
The lecture recording URL will be emailed to registered participants. This URL can be used for immediate access to the livestream and recorded lecture. Lecture recordings will be publicly available on SimonsTV about five days following each presentation unless otherwise noted.
The Simons Institute regularly captures photos and video of activity around the Institute for use in publications and promotional materials.
If you require special accommodation, please contact our access coordinator at simonsevents@berkeley.edu with as much advance notice as possible.