Abstract

What concept classes can neural networks provably learn in the distribution-free PAC learning setting? Recently, a sequence of works has related the training dynamics of over-parametrized neural networks to kernel methods, and in particular to the neural tangent kernel. These results show that neural networks can at least PAC-learn the concept classes learnable by kernel methods. In this talk, I will discuss recent progress that goes beyond this approach, showing that over-parametrized neural networks provably PAC-learn certain concept classes with provably better generalization than any kernel method.
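As background for the kernel connection above: the neural tangent kernel of a network f(x; θ) is the Gram matrix of parameter gradients, NTK(x, x') = ∇_θ f(x; θ) · ∇_θ f(x'; θ). The sketch below (not from the talk; a toy illustration under the assumption of a one-hidden-layer ReLU network with only the hidden weights trained and fixed ±1 output weights) computes the empirical NTK at random initialization:

```python
import numpy as np

def empirical_ntk(X, m=20000, seed=0):
    """Empirical NTK of f(x) = (1/sqrt(m)) * sum_r a_r * relu(w_r . x),
    with gradients taken with respect to the hidden weights W only.

    X : (n, d) array of inputs.  Returns the (n, n) kernel matrix.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((m, d))          # hidden weights w_r ~ N(0, I)
    a = rng.choice([-1.0, 1.0], size=m)      # fixed output weights a_r

    pre = X @ W.T                            # (n, m) pre-activations w_r . x_i
    act = (pre > 0).astype(float)            # relu'(pre), i.e. 1[w_r . x_i > 0]

    # grad_{w_r} f(x_i) = (1/sqrt(m)) * a_r * 1[w_r . x_i > 0] * x_i, so
    # NTK[i, j] = (1/m) * sum_r a_r^2 * 1[..i..] * 1[..j..] * (x_i . x_j)
    S = (act * a) @ (act * a).T / m          # activation-overlap factor
    return S * (X @ X.T)
```

As the width m grows, this matrix concentrates around the infinite-width NTK; training the over-parametrized network in the "lazy" regime then behaves like kernel regression with this kernel, which is the baseline the results in the talk improve upon.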
