Deep learning is the engine powering many of the recent successes of artificial intelligence. These advances stem from a research effort spanning academia and industry; this effort is not limited only to computer science, statistics, and optimization, but also involves neuroscience, physics, and essentially all of the sciences. Despite this intense research activity, however, a satisfactory understanding of deep learning methodology – and, more importantly, its failure modes – continues to elude us. As deep learning enters sensitive domains, such as autonomous driving and healthcare, the need for building this kind of understanding becomes even more pressing.
The goal of this program is to address this need by aligning and focusing theoretical and applied researchers on the common purpose of building empirically-relevant theoretical foundations of deep learning. Specifically, the intention is to identify and make progress on challenges that, on one hand, are key to guiding the real-world use of deep learning and, on the other hand, can be approached using theoretical methodology.
The program will focus on the following four themes:
- Optimization: how and why can deep models be fit to observed (training) data?
- Generalization: why do these trained models work well on similar but unobserved (test) data?
- Robustness: how can we analyze and improve the performance of these models when applied outside their intended conditions?
- Generative methods: how can deep learning be used to model probability distributions?
An integral feature of the program will be bridging activities that aim to strengthen the connections between academia and industry. In particular, in addition to workshops and other weekly events, the program will host weekly bridging days that bring together local Bay Area industry researchers and regular program participants.
To subscribe to the announcements email list for this program, send an email to sympa [at] lists [dot] simons [dot] berkeley [dot] edu with the body "subscribe dl2019announcements@lists.simons.berkeley.edu".
Organizers:
Samy Bengio (Google; technical advisor), Aleksander Mądry (Massachusetts Institute of Technology), Elchanan Mossel (Massachusetts Institute of Technology), Matus Telgarsky (University of Illinois at Urbana-Champaign)
List of participants (tentative list, including organizers):
Peter Bartlett (UC Berkeley), Misha Belkin (Ohio State University), Shai Ben-David (University of Waterloo), Emma Brunskill (Stanford University), Amit Daniely (Hebrew University & Google), Costis Daskalakis (MIT), Alex Dimakis (University of Texas at Austin), Laurent El Ghaoui (UC Berkeley), Suriya Gunasekar (Toyota Technological Institute at Chicago), Daniel Hsu (Columbia University), Varun Jog (UW-Madison), Adam Klivans (University of Texas at Austin), Jason Lee (University of Southern California), Po-Ling Loh (UW-Madison), Tengyu Ma (Stanford University), Elchanan Mossel (Massachusetts Institute of Technology), Daniel Roy (University of Toronto), Mahdi Soltanolkotabi (University of Southern California), Dawn Song (UC Berkeley), Daniel Soudry (Technion - Israel Institute of Technology), Nati Srebro (Toyota Technological Institute at Chicago), Matus Telgarsky (University of Illinois at Urbana-Champaign), Nisheeth Vishnoi (EPFL/Yale), Rachel Ward (University of Texas at Austin)
List of weekly visitors:
Yasaman Bahri (Google Brain), Samy Bengio (Google), Paul Christiano (OpenAI), Inderjit Dhillon (Amazon), Vitaly Feldman (Google Brain), T.S. Jayram (IBM Almaden), Tomer Koren (Google Brain), Ming-Yu Liu (NVIDIA), Phil Long (Google Brain), Nimrod Megiddo (IBM Almaden), Ofer Meshi (Google), Ilya Mironov (Google Brain), Hossein Mobahi (Google), Rina Panigrahy (Google Brain), Maithra Raghu (Google Brain), Ali Rahimi (Google), Sam Schoenholz (Google Brain), Hanie Sedghi (Google Brain), Yoram Singer (Google Brain), Jascha Sohl-Dickstein (Google Brain), Kunal Talwar (Google Brain)
Those interested in participating in this program should send an email to the organizers at dl2019 [at] lists [dot] simons [dot] berkeley [dot] edu.