Because of the uncertainty caused by COVID-19, it is still unclear if this program will take place in-person or online only. This page will be updated as soon as we have more information.
The program aims to advance the theoretical foundations of reinforcement learning (RL) and foster new collaborations between researchers across RL and computer science.
Recent years have seen a surge of interest in reinforcement learning, fueled by exciting new applications of RL techniques to various problems in artificial intelligence, robotics, and the natural sciences. Many of these advances were made possible by a combination of large-scale computation, innovative use of flexible neural network architectures and training methods, and both new and classical reinforcement learning algorithms. However, we lack a solid understanding of when, why, and to what extent these algorithms work.
Reinforcement learning's core issues, such as the efficiency of exploration and the tradeoff between the scale of a problem and the difficulty of learning and planning in it, have received concerted study over the last few decades from many disciplines and communities, including computer science, numerical analysis, artificial intelligence, control theory, operations research, and statistics. The result is a solid body of work that has formalized and resolved some of the core problems; yet the most pressing ones, concerning the design of highly scalable algorithms, remain open.
This program aims to reunite researchers across the disciplines that have contributed to the theory of reinforcement learning. It will review past developments and identify promising directions of research, with an emphasis on addressing existing open problems, ranging from the design of efficient, scalable algorithms for exploration to the use of learning in planning and control. It also aims to deepen the understanding of model-free vs. model-based learning and control, and of the design of efficient methods that exploit structure and adapt to easier environments.
To subscribe to the announcements email list for this program, send an email to sympa [at] lists.simons.berkeley.edu with the body "subscribe rl2020announcements@lists.simons.berkeley.edu".
Organizers:
Csaba Szepesvári (DeepMind & University of Alberta; chair), Emma Brunskill (Stanford University), Sébastien Bubeck (MSR), Alan Malek (MIT), Sean Meyn (University of Florida), Ambuj Tewari (University of Michigan), and Mengdi Wang (Princeton)
List of Participants (initial tentative list, including organizers):
Peter Bartlett (UC Berkeley), Sébastien Bubeck (MSR), Rene Carmona (Princeton), Mohammad Ghavamzadeh (FAIR), Peter Glynn (Stanford), Anupam Gupta (CMU), Rahul Jain (USC), Mihailo Jovanovic (USC), Wouter Koolen (CWI), Lihong Li (Google Brain), Tengyu Ma (Stanford), Alan Malek (MIT), Shie Mannor (Technion), Yishay Mansour (Tel Aviv University), Eric Moulines (Ecole Polytechnique), Angelia Nedich (ASU), Gergely Neu (UPF), Marek Petrik (U New Hampshire), Balaraman Ravindran (IIT Madras), Bruno Scherrer (INRIA), Dale Schuurmans (Google/University of Alberta), Devavrat Shah (MIT), Aaron Sidford (Stanford), Csaba Szepesvári (DeepMind/U of Alberta), Eva Tardos (Cornell), Ambuj Tewari (U Michigan), Mathukumalli Vidyasagar (IIT Hyderabad), Mengdi Wang (Princeton), Huizhen Yu (University of Alberta)
Research Fellows:
Jalaj Bhandari (Columbia University), Lin Chen (Yale University), Mohamad Kazem Shirani Faradonbeh (University of Florida), Vidya Muthukumar (UC Berkeley), Zhaoran Wang (Northwestern University), Zhuoran Yang (Princeton University), Lin Yang (UCLA), Christina Yu (Cornell)
Visiting Graduate Students and Postdocs:
Shantanu Burnwal (IIT Hyderabad), Dylan Foster (MIT), Germano Gabbianelli (UPF), Yao Liu (Stanford), Aditya Modi (University of Michigan), Sean Sinclair (Cornell), Ruosong Wang (CMU), Chen-Yu Wei (USC), Andrea Zanette (Stanford)
Those interested in participating in this program should send an email to the organizers at rl2020 [at] lists.simons.berkeley.edu.