Deep Learning Theory Symposium

Program: Machine Learning Research Pod
Location: Calvin Lab auditorium and Zoom
Date: Monday, Dec. 6 – Tuesday, Dec. 7, 2021
All talk times are in Pacific Time.
Monday, Dec. 6, 2021

9–9:10 a.m.
Opening Remarks

9:10–10:10 a.m.
Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds, and Benign Overfitting
Frederic Koehler (Simons Institute)

10:10–11:10 a.m.
Adaptive Wavelet Distillation from DNNs and Dictionary Learning
Bin Yu (UC Berkeley)

11:10–11:30 a.m.
Break

11:30–11:45 a.m.
Graphon Neural Networks and the Transferability of Graph Neural Networks
Luiz Chamon (UC Berkeley)

11:45 a.m.–12 p.m.
Self-Training Converts Weak Learners to Strong Learners in Mixture Models
Spencer Frei (UC Berkeley)

12–1 p.m.
Lunch

1–1:15 p.m.
Non-Parametric Convergence Rates for Plain Vanilla Stochastic Gradient Descent
Raphaël Berthier

1:15–1:30 p.m.
Sharp Matrix Concentration
March Boedihardjo (University of California, Irvine)

1:30–3 p.m.
Break

3–4 p.m.
Reception
Tuesday, Dec. 7, 2021

9–10 a.m.
Learning Staircases
Emmanuel Abbe (École polytechnique fédérale de Lausanne), Enric Boix (MIT), Theodor Misiakiewicz (Stanford University)

10–11 a.m.
Exact Asymptotics and Universality for Gradient Flows and Empirical Risk Minimizers
Andrea Montanari (Stanford University)

11–11:20 a.m.
Break

11:20–11:35 a.m.
The Spectrum of Nonlinear Random Matrices for Ultra-Wide Neural Networks
Yizhe Zhu (University of California, Irvine)

11:35–11:50 a.m.
Universality of Neural Networks
Dan Mikulincer (MIT)

11:50 a.m.–1 p.m.
Lunch

1–1:15 p.m.
Is Overfitting Actually Benign? On the Consistency of Interpolating Methods
Preetum Nakkiran (UCSD)

1:15–1:30 p.m.
On the Cryptographic Hardness of Learning One-Hidden Layer Neural Networks
Ilias Zadik (MIT)

1:30–1:50 p.m.
Break

1:50–2:50 p.m.
Representation Costs of Linear Neural Networks: Analysis and Design
Mina Karzand (University of California, Davis)