Description

Speaker: Christina Yu (Cornell University)

Title: Adaptive Discretization For Reinforcement Learning
Abstract: We introduce the technique of adaptive discretization to design efficient model-free and model-based episodic reinforcement learning algorithms in large (potentially continuous) state-action spaces. We provide worst-case regret bounds for our algorithms that are competitive with those of state-of-the-art algorithms. Our algorithms have lower storage and computational requirements because they maintain a more efficient partition of the state and action spaces. We illustrate this via experiments on several canonical control problems, which show that our algorithms empirically outperform fixed discretization, converging faster while using less memory.

This is joint work with Sean Sinclair, Tianyu Wang, Gauri Jain, and Siddhartha Banerjee.
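The core idea in the abstract, maintaining a partition of the state-action space that refines itself where the learner actually visits, can be sketched in a few lines. The following is a minimal illustrative toy, not the speakers' algorithm: the cell structure, the splitting rule (split a region once its visit count exceeds the inverse-squared diameter), and the Q-value update are all assumptions made for illustration, on a toy one-dimensional state and action space normalized to [0, 1].

```python
import random


class Cell:
    """A region of the (state, action) square [0,1]^2 tracked by the partition."""

    def __init__(self, lo_s, hi_s, lo_a, hi_a):
        self.bounds = (lo_s, hi_s, lo_a, hi_a)
        self.count = 0        # number of samples observed in this region
        self.q = 0.0          # running Q-value estimate for this region
        self.children = []    # populated once the cell is split

    def diameter(self):
        lo_s, hi_s, lo_a, hi_a = self.bounds
        return max(hi_s - lo_s, hi_a - lo_a)

    def contains(self, s, a):
        lo_s, hi_s, lo_a, hi_a = self.bounds
        return lo_s <= s <= hi_s and lo_a <= a <= hi_a


class AdaptivePartition:
    """Tree-structured partition that refines itself where samples accumulate."""

    def __init__(self):
        self.root = Cell(0.0, 1.0, 0.0, 1.0)

    def leaf(self, s, a):
        """Descend the tree to the active (leaf) cell covering (s, a)."""
        cell = self.root
        while cell.children:
            cell = next(c for c in cell.children if c.contains(s, a))
        return cell

    def update(self, s, a, reward, lr=0.5):
        cell = self.leaf(s, a)
        cell.count += 1
        cell.q += lr * (reward - cell.q)
        # Assumed splitting rule: refine a leaf once it has been visited
        # "enough" for its size, so resolution concentrates where the
        # algorithm actually samples (coarse elsewhere -> lower memory).
        if cell.count >= (1.0 / cell.diameter()) ** 2:
            lo_s, hi_s, lo_a, hi_a = cell.bounds
            ms, ma = (lo_s + hi_s) / 2, (lo_a + hi_a) / 2
            cell.children = [
                Cell(ls, hs, la, ha)
                for ls, hs in ((lo_s, ms), (ms, hi_s))
                for la, ha in ((lo_a, ma), (ma, hi_a))
            ]
            for child in cell.children:
                child.q = cell.q  # children inherit the parent's estimate
```

Feeding samples concentrated in one corner of the space yields small cells there and a single coarse cell over the unvisited remainder, which is the storage saving the abstract refers to.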
