Doina Precup (McGill University & MILA / DeepMind)
The combination of reinforcement learning with deep learning is a promising approach to tackle important sequential decision-making problems that are currently intractable. One obstacle to overcome is the amount of data needed by learning systems of this type. Fortunately, complex decision problems can be naturally decomposed into multiple tasks that unfold in sequence or in parallel. By associating each task with a reward function, this problem decomposition can be seamlessly accommodated within the standard reinforcement learning formalism. In this talk, I will discuss generalizations of two fundamental operations in reinforcement learning, policy improvement and policy evaluation, which allow one to leverage the solution of some tasks to speed up the solution of other tasks. If the reward function of a task can be well approximated as a linear combination of the reward functions of tasks previously solved, we can reduce a reinforcement learning problem to a simpler linear regression. When this is not the case, the agent can still exploit the task solutions by using them to interact with and learn about the environment. Both strategies considerably reduce the amount of data needed to solve a reinforcement learning problem. Joint work with Andre Barreto, Shaobo Hou, Diana Borsa and David Silver.
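The two ideas in the abstract — recovering a new task's reward weights by linear regression, then acting greedily over the value functions of previously solved tasks — can be sketched numerically. This is an illustrative toy, not code from the talk: the features `phi`, the successor features `psi`, and all dimensions are hypothetical stand-ins, and the "previously solved tasks" are represented only by randomly generated successor-feature tables.

```python
import numpy as np

# Hypothetical setup: 3 previously solved tasks, feature dimension 4,
# 5 actions available at the state under consideration. All numbers
# here are illustrative, not from the talk.
rng = np.random.default_rng(0)
n_samples, d, n_tasks, n_actions = 50, 4, 3, 5

phi = rng.normal(size=(n_samples, d))      # observed transition features
w_true = np.array([1.0, -0.5, 0.0, 2.0])   # unknown weights of the new task
rewards = phi @ w_true                     # rewards observed on the new task

# Step 1: if the new reward is (approximately) linear in the features,
# recovering its weights is an ordinary least-squares problem.
w, *_ = np.linalg.lstsq(phi, rewards, rcond=None)

# Step 2: generalized policy improvement at one state. Each stored
# policy i contributes successor features psi_i(s, a); its value on the
# new task is the dot product with w, and the agent acts greedily over
# the best of all stored policies.
psi = rng.normal(size=(n_tasks, n_actions, d))  # psi_i(s, a), illustrative
q = psi @ w                                     # Q_i(s, a) = psi_i(s, a) . w
best_action = int(np.argmax(q.max(axis=0)))     # max over policies, then actions
```

Because the toy rewards are exactly linear in the features, the regression recovers the weights exactly; with noisy rewards the same least-squares step gives the best linear approximation, which is the regime the abstract refers to.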