Organizers: Lihong Li (Google Brain; chair), Marc Bellemare (Google Brain)
The success of deep neural networks in modeling complicated functions has recently been leveraged by the reinforcement learning community, resulting in algorithms that can learn in environments previously thought to be much too large. Successful applications span domains from robotics to health care. However, this success is not well understood from a theoretical perspective: which modeling choices are necessary for good performance, and how does the flexibility of deep neural networks help learning? This workshop will connect practitioners with theoreticians, with the goal of understanding the most impactful modeling decisions and the properties of deep neural networks that make them so successful. Specifically, we will study the approximation abilities of deep neural networks in the context of reinforcement learning.
All events take place in the Calvin Lab auditorium.
Further details about this workshop will be posted in due course. Enquiries may be sent to the organizers at workshop-rl1 [at] lists.simons.berkeley.edu.