Abstract

So-called safe learning algorithms are typically viewed as inefficient and useful only in domains where safety is a primary concern. However, in this talk I will argue that all RL researchers should care about safe learning methods, as they have the potential to help solve some of the largest open problems in RL. Specifically, I will discuss connections between safe learning methods and efficient exploration, learning stability, and human usability, making the case that safe learning methods may play a large role in the future of RL.