Abstract
Reinforcement Learning (RL) problems with continuous state and action spaces are among the most challenging in RL. Recently, deep reinforcement learning methods have proven quite effective for certain RL problems with very large or continuous state and action spaces, but such methods require extensive hyper-parameter tuning and huge amounts of data, and they come with no performance guarantees. We note that such methods are mostly trained 'offline' on experience replay buffers. In this talk, I will describe a series of simple reinforcement learning schemes for various settings. Our premise is that we have access to a generative model that can provide simulated samples of the next state. I will introduce RANDPOL (RANDomized function approximation for POLicy iteration), an empirical actor-critic algorithm based on randomized neural networks, which successfully solves a challenging robotic problem with continuous state and action spaces. We also provide theoretical performance guarantees for the algorithm: specifically, it allows for arbitrarily good approximation with high probability for any problem. I will also touch upon the probabilistic contraction analysis framework for iterative stochastic algorithms that underpins the theoretical analysis. This talk is based on joint work with Hiteshi Sharma (Microsoft).
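To make the core idea concrete, here is a minimal toy sketch of randomized function approximation with a generative model. This is not the RANDPOL algorithm itself (which is an actor-critic scheme for continuous actions); it is a simplified fitted value iteration on a hypothetical 1-D control problem, where the "randomized network" has hidden weights sampled once and frozen, and only the linear output layer is trained. All dynamics, rewards, and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomized network: hidden-layer weights W, b are sampled once and
# frozen; only the output weights theta are ever trained.
def random_features(x, W, b):
    return np.tanh(x @ W + b)

# Hypothetical generative model: deterministic 1-D dynamics
# s' = 0.9*s + a, with reward r(s) = -s^2 (purely illustrative).
def gen_model(s, a):
    return 0.9 * s + a, -(s ** 2)

n_feat, gamma = 64, 0.95
W = rng.normal(size=(1, n_feat))
b = rng.normal(size=n_feat)
states = np.linspace(-1.0, 1.0, 201)[:, None]   # sampled states
actions = np.linspace(-0.5, 0.5, 9)             # discretized actions
theta = np.zeros(n_feat)

Phi = random_features(states, W, b)
for _ in range(50):
    # Empirical Bellman backup: max over actions, using the
    # generative model to sample the next state for each (s, a).
    targets = np.full(len(states), -np.inf)
    for a in actions:
        s_next, r = gen_model(states[:, 0], a)
        v_next = random_features(s_next[:, None], W, b) @ theta
        targets = np.maximum(targets, r + gamma * v_next)
    # Train only the output layer: ridge regression onto the targets.
    theta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(n_feat),
                            Phi.T @ targets)

v = Phi @ theta  # fitted value function; highest near s = 0
```

Because the hidden layer is random and fixed, each iteration reduces to a linear least-squares fit, which is what makes this style of approximation amenable to the kind of probabilistic error analysis mentioned in the talk.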