Programs
Fall 2020

Theory of Reinforcement Learning

Aug. 19–Dec. 18, 2020

Because of the uncertainty caused by COVID-19, it is still unclear if this program will take place in-person or online only. This page will be updated as soon as we have more information.

The program aims to advance the theoretical foundations of reinforcement learning (RL) and foster new collaborations between researchers across RL and computer science.

Recent years have seen a surge of interest in reinforcement learning, fueled by exciting new applications of RL techniques to various problems in artificial intelligence, robotics, and the natural sciences. Many of these advances were made possible by a combination of large-scale computation, innovative use of flexible neural network architectures and training methods, and new and classical reinforcement learning algorithms. However, we still lack a solid understanding of when, why, and to what extent these algorithms work.

Reinforcement learning's core issues, such as the efficiency of exploration and the tradeoff between the scale and the difficulty of learning and planning, have received concerted study over the last few decades across many disciplines and communities, including computer science, numerical analysis, artificial intelligence, control theory, operations research, and statistics. The result is a solid body of work that has formulated and resolved some of the core problems; yet the most pressing problems, concerning how to design highly scalable algorithms, remain open.

This program aims to reunite researchers across the disciplines that have played a role in developing the theory of reinforcement learning. It will review past developments and identify promising directions of research, with an emphasis on addressing existing open problems, ranging from the design of efficient, scalable algorithms for exploration to the control of learning and planning. It also aims to deepen the understanding of model-free vs. model-based learning and control, and to advance the design of efficient methods that exploit structure and adapt to easier environments.

To subscribe to the announcements email list for this program, send an email to sympa [at] lists.simons.berkeley.edu with the body "subscribe rl2020announcements@lists.simons.berkeley.edu".

Organizers:

Csaba Szepesvari (University of Alberta, Google DeepMind; chair), Emma Brunskill (Stanford University), Sébastien Bubeck (Microsoft Research; Visiting Scientist and Workshop Organizer), Alan Malek (DeepMind), Sean Meyn (University of Florida), Ambuj Tewari (University of Michigan), Mengdi Wang (Princeton University)

Long-Term Participants (including Organizers):

Vivek Shripad Borkar (Indian Institute of Technology Bombay), Emma Brunskill (Stanford University), Sébastien Bubeck (Microsoft Research; Visiting Scientist and Workshop Organizer), Rene Carmona (Princeton University), Marco Dalai (University of Brescia), Anupam Gupta (Carnegie Mellon University), Niao He (University of Illinois at Urbana-Champaign), Chi Jin (Princeton University), Mihailo Jovanovic (University of Southern California), Wouter Koolen (Centrum Wiskunde & Informatica), Akshay Krishnamurthy (Microsoft Research; Visiting Scientist), Jason Lee (Princeton University), Lihong Li (Google Brain; Visiting Scientist), Tengyu Ma (Stanford University), Sean Meyn (University of Florida), Eric Moulines (Ecole Polytechnique), Seffi Naor (Technion - Israel Institute of Technology), Angelia Nedich (Arizona State University), Gergely Neu (UPF), Marek Petrik (University of New Hampshire), Balaraman Ravindran (IIT Madras), Daniel Russo (Columbia University), Barna Saha (UC Berkeley), Bruno Scherrer (INRIA), Dale Schuurmans (University of Alberta), Aaron Sidford (Stanford University), Csaba Szepesvari (University of Alberta, Google DeepMind; chair), Ambuj Tewari (University of Michigan), Claire Tomlin (UC Berkeley), Mathukumalli Vidyasagar (IIT Hyderabad), Stefan Wager (Stanford Graduate School of Business), Mengdi Wang (Princeton University), Huizhen Yu (University of Alberta)

Research Fellows:

Jalaj Bhandari (Columbia University), Lin Chen (Yale University), Vidya Muthukumar (UC Berkeley; Google Research Fellow), Mohamad Kazem Shirani Faradonbeh (University of Florida), Zhaoran Wang (Northwestern University), Lin Yang (University of California, Los Angeles; Facebook/Novi Research Fellow), Zhuoran Yang (Princeton University; VMware Research Fellow), Christina Yu (Cornell University)

Visiting Graduate Students and Postdocs:

Shantanu Prasad Burnwal (IIT Hyderabad), Dylan Foster (Massachusetts Institute of Technology (MIT)), Germano Gabbianelli (Universitat Pompeu Fabra), Michael Konobeev (University of Alberta), Yao Liu (Stanford University), Aditya Modi (University of Michigan, Ann Arbor), Ruosong Wang (Carnegie Mellon University), Sergey Samsonov (National Research University Higher School of Economics), Roshan Shariff (University of Alberta), Sean Sinclair (Cornell University), Chen-Yu Wei (University of Southern California), Andrea Zanette (Stanford University)

Workshops

Aug. 31–Sep. 4, 2020

Organizers:

Csaba Szepesvari (University of Alberta, Google DeepMind; chair), Emma Brunskill (Stanford University), Sébastien Bubeck (Microsoft Research), Alan Malek (DeepMind), Sean Meyn (University of Florida), Ambuj Tewari (University of Michigan), Mengdi Wang (Princeton University)
Sep. 28–Oct. 2, 2020

Organizers:

Lihong Li (Google Brain; chair), Marc G. Bellemare (Google Brain)
Oct. 26–Oct. 30, 2020

Organizers:

Shipra Agrawal (Columbia University; chair), Sébastien Bubeck (Microsoft Research), Alan Malek (DeepMind)
Nov. 30–Dec. 4, 2020

Organizers:

Mengdi Wang (Princeton University; chair), Emma Brunskill (Stanford University), Sean Meyn (University of Florida)

Those interested in participating in this program should send an email to the organizers at rl2020 [at] lists.simons.berkeley.edu.