Symmetries play a unifying role in physics and many other sciences. In deep learning, symmetries have been incorporated into neural networks through the concept of equivariance. One of the major benefits is that equivariance reduces the number of parameters through parameter sharing, and as such allows learning from less data. In this talk I will ask the question: can equivariance also help in RL? Besides the obvious idea of using equivariant value functions, we explore the idea of deep equivariant policies. We make a connection between equivariance and MDP homomorphisms, and generalize to distributed multi-agent settings.
Joint work with Elise van der Pol (main contributor), Herke van Hoof and Frans Oliehoek.