Abstract
So-called safe learning algorithms are typically viewed as inefficient and useful only when safety is a primary concern in a domain. In this talk, however, I will argue that all RL researchers should care about safe learning methods, as they have the potential to help solve some of the largest open problems in RL. Specifically, I will discuss connections between safe learning methods and efficient exploration, learning stability, and human usability, making the case that safe learning methods may play a large role in the future of RL.