# Abstracts

### Tuesday, May 28th, 2019

9:20 am – 9:30 am

No abstract available.

9:30 am – 10:50 am
Speaker: Peter Bartlett (UC Berkeley) and Sasha Rakhlin (Massachusetts Institute of Technology)

We review tools useful for the analysis of the generalization performance of deep neural networks on classification and regression problems.  We review uniform convergence properties, which show how this performance depends on notions of complexity, such as Rademacher averages, covering numbers, and combinatorial dimensions, and how these quantities can be bounded for neural networks.  We also review the analysis of the performance of nonparametric estimation methods such as nearest-neighbor rules and kernel smoothing.  Deep networks raise some novel challenges, since they have been observed to perform well even with a perfect fit to the training data.  We review some recent efforts to understand the performance of interpolating prediction rules, and highlight the questions raised for deep learning.
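One of the complexity notions named above, the Rademacher average, can be estimated numerically for a small finite function class. The sketch below is not from the tutorial; it is a minimal Monte Carlo illustration, with the function class represented simply as an array of precomputed ±1 predictions on a fixed sample.

```python
import numpy as np

def empirical_rademacher(preds, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher average.

    preds: (k, n) array -- each row holds one function's predictions
           on the n fixed data points.
    Returns an estimate of E_sigma[ max_f (1/n) sum_i sigma_i f(x_i) ].
    """
    rng = np.random.default_rng(seed)
    k, n = preds.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # random Rademacher signs
        total += np.max(preds @ sigma) / n       # best sign-correlation in the class
    return total / n_draws

# Tiny example: 5 random sign "predictors" evaluated on 50 points.
rng = np.random.default_rng(1)
preds = np.sign(rng.standard_normal((5, 50)))
r = empirical_rademacher(preds)
```

For a fixed finite class the estimate concentrates around roughly sqrt(2 log k / n), which is the flavor of bound the tutorial's uniform convergence results build on.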

11:10 am – 12:30 pm
Speaker: Matus Telgarsky (University of Illinois, Urbana-Champaign)

No abstract available.

2:00 pm – 3:20 pm
Speaker: Peter Bartlett (UC Berkeley) and Sasha Rakhlin (Massachusetts Institute of Technology)

We review tools useful for the analysis of the generalization performance of deep neural networks on classification and regression problems.  We review uniform convergence properties, which show how this performance depends on notions of complexity, such as Rademacher averages, covering numbers, and combinatorial dimensions, and how these quantities can be bounded for neural networks.  We also review the analysis of the performance of nonparametric estimation methods such as nearest-neighbor rules and kernel smoothing.  Deep networks raise some novel challenges, since they have been observed to perform well even with a perfect fit to the training data.  We review some recent efforts to understand the performance of interpolating prediction rules, and highlight the questions raised for deep learning.

3:40 pm – 5:00 pm

No abstract available.

### Wednesday, May 29th, 2019

9:30 am – 10:50 am
Speaker: Peter Bartlett (UC Berkeley) and Sasha Rakhlin (Massachusetts Institute of Technology)

We review tools useful for the analysis of the generalization performance of deep neural networks on classification and regression problems.  We review uniform convergence properties, which show how this performance depends on notions of complexity, such as Rademacher averages, covering numbers, and combinatorial dimensions, and how these quantities can be bounded for neural networks.  We also review the analysis of the performance of nonparametric estimation methods such as nearest-neighbor rules and kernel smoothing.  Deep networks raise some novel challenges, since they have been observed to perform well even with a perfect fit to the training data.  We review some recent efforts to understand the performance of interpolating prediction rules, and highlight the questions raised for deep learning.

11:10 am – 12:30 pm
Speaker: Jason Lee (University of Southern California)

We survey recent developments in the optimization and learning of deep neural networks. The three focus topics are: 1) geometric results for the optimization of neural networks, 2) overparametrized neural networks in the kernel regime (the Neural Tangent Kernel), with its implications and limitations, and 3) potential strategies to prove that SGD improves on kernel predictors.
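The kernel regime mentioned in topic 2 refers to the Neural Tangent Kernel: near initialization, a wide network behaves like a kernel method whose kernel is the inner product of parameter gradients. The sketch below is not from the talk; it computes the empirical NTK of a small two-layer ReLU network with analytic numpy gradients (the function names are illustrative).

```python
import numpy as np

def two_layer_relu(params, x):
    """f(x) = (1/sqrt(m)) * a^T relu(W x), scalar output."""
    W, a = params
    m = a.shape[0]
    return a @ np.maximum(W @ x, 0.0) / np.sqrt(m)

def grad_params(params, x):
    """Analytic gradient of f with respect to (W, a), flattened."""
    W, a = params
    m = a.shape[0]
    h = W @ x                                              # pre-activations, shape (m,)
    dW = (a * (h > 0))[:, None] * x[None, :] / np.sqrt(m)  # d f / d W, shape (m, d)
    da = np.maximum(h, 0.0) / np.sqrt(m)                   # d f / d a, shape (m,)
    return np.concatenate([dW.ravel(), da])

def empirical_ntk(params, xs):
    """K[i, j] = <grad f(x_i), grad f(x_j)> at the current parameters."""
    grads = np.stack([grad_params(params, x) for x in xs])
    return grads @ grads.T

rng = np.random.default_rng(0)
m, d = 512, 3                       # wide hidden layer, small input dimension
params = (rng.standard_normal((m, d)), rng.standard_normal(m))
xs = [rng.standard_normal(d) for _ in range(4)]
K = empirical_ntk(params, xs)       # 4x4 Gram matrix, symmetric PSD
```

In the kernel regime this Gram matrix stays nearly constant during training, which is what makes the kernel analysis tractable and also what topic 3 seeks to go beyond.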

2:00 pm – 3:20 pm
Speaker: Peter Bartlett (UC Berkeley) and Sasha Rakhlin (Massachusetts Institute of Technology)

We review tools useful for the analysis of the generalization performance of deep neural networks on classification and regression problems.  We review uniform convergence properties, which show how this performance depends on notions of complexity, such as Rademacher averages, covering numbers, and combinatorial dimensions, and how these quantities can be bounded for neural networks.  We also review the analysis of the performance of nonparametric estimation methods such as nearest-neighbor rules and kernel smoothing.  Deep networks raise some novel challenges, since they have been observed to perform well even with a perfect fit to the training data.  We review some recent efforts to understand the performance of interpolating prediction rules, and highlight the questions raised for deep learning.

3:40 pm – 5:00 pm
Speaker: Sébastien Bubeck (Microsoft Research)

Modern machine learning models (i.e., neural networks) are incredibly sensitive to small perturbations of their input. This creates a potentially critical security breach in many deep learning applications (object detection, ranking systems, etc.). In this talk I will cover some of what we know and what we don't know about this phenomenon of "adversarial examples". I will focus on three topics: (i) generalization (do you need more data than for standard ML?), (ii) inevitability of adversarial examples (is this problem unsolvable?), and (iii) certification techniques (how do you provably and efficiently guarantee robustness?).
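The sensitivity to small perturbations is easy to demonstrate even for a linear classifier, where the worst-case L-infinity perturbation has a closed form (stepping against the sign of the margin gradient, as in the fast gradient sign method). The sketch below is not from the talk; the numbers are an illustrative toy example.

```python
import numpy as np

def perturb_against_margin(x, w, b, y, eps):
    """Worst-case L-inf perturbation of size eps for a linear classifier
    sign(w.x + b): step against the gradient of the margin y * (w.x + b)."""
    grad = y * w                     # gradient of the margin with respect to x
    return x - eps * np.sign(grad)   # move each coordinate to shrink the margin

# A correctly classified point flipped by a small perturbation.
w = np.array([1.0, -2.0]); b = 0.0
x = np.array([0.3, -0.1]); y = 1
margin = y * (w @ x + b)             # 0.5 > 0: correctly classified
x_adv = perturb_against_margin(x, w, b, y, eps=0.2)
margin_adv = y * (w @ x_adv + b)     # negative: the label flips
```

Each coordinate moves by only 0.2, yet the prediction flips; for deep networks the same gradient-sign idea (applied to the loss) produces the adversarial examples the talk discusses.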

### Thursday, May 30th, 2019

9:30 am – 10:50 am
Speaker: Jason Lee (University of Southern California)

We survey recent developments in the optimization and learning of deep neural networks. The three focus topics are: 1) geometric results for the optimization of neural networks, 2) overparametrized neural networks in the kernel regime (the Neural Tangent Kernel), with its implications and limitations, and 3) potential strategies to prove that SGD improves on kernel predictors.

11:10 am – 12:30 pm
Speaker: Kamalika Chaudhuri (UC San Diego)

No abstract available.

2:00 pm – 3:20 pm
Speaker: Nati Srebro (Toyota Technological Institute at Chicago)

No abstract available.

3:40 pm – 5:00 pm
Speaker: Nati Srebro (Toyota Technological Institute at Chicago)

No abstract available.

### Friday, May 31st, 2019

9:30 am – 10:50 am
Speaker: Elchanan Mossel (Massachusetts Institute of Technology)

No abstract available.

11:10 am – 12:30 pm
Speaker: Kamalika Chaudhuri (UC San Diego)

No abstract available.