Abstract

A policy is said to be robust if it maximizes the reward while considering a bad, or even adversarial, model. We consider two scenarios in which the agent attempts to perform an action a, and (i) with probability p, an alternative adversarial action a' is taken instead, or (ii) in the case of a continuous action space, an adversary adds a perturbation to the selected action. We show that our criteria are related to common forms of uncertainty in robotics domains, such as the occurrence of abrupt forces, and we suggest algorithms for the tabular case. Building on these algorithms, we generalize our approach to deep reinforcement learning (DRL) and show that not only does it produce robust policies, but it also improves performance in the absence of perturbations. This generalization indicates that action robustness can be thought of as implicit regularization in RL problems.
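To make the two perturbation scenarios concrete, below is a minimal sketch of how each could be realized as a Gym-style action wrapper. This is an illustrative assumption, not the paper's reference implementation: the class names and the `adversary` and `perturbation` callables are hypothetical, and the wrappers assume a `gymnasium` environment with a Box action space.

```python
# Minimal sketch of the two action-perturbation models (illustrative only).
import numpy as np
import gymnasium as gym


class ProbabilisticActionWrapper(gym.ActionWrapper):
    """Scenario (i): with probability p, an alternative adversarial
    action a' is executed instead of the agent's chosen action a."""

    def __init__(self, env, p, adversary):
        super().__init__(env)
        self.p = p                  # probability of adversarial takeover
        self.adversary = adversary  # callable: action -> adversarial action

    def action(self, action):
        if np.random.rand() < self.p:
            return self.adversary(action)
        return action


class NoisyActionWrapper(gym.ActionWrapper):
    """Scenario (ii): for a continuous action space, the adversary adds
    a perturbation to the selected action."""

    def __init__(self, env, perturbation):
        super().__init__(env)
        self.perturbation = perturbation  # callable: action -> perturbation

    def action(self, action):
        perturbed = action + self.perturbation(action)
        # Keep the perturbed action inside the valid action box.
        return np.clip(perturbed, self.action_space.low, self.action_space.high)
```

A policy trained against either wrapper optimizes the robust criterion for that scenario; training with the perturbation disabled recovers the standard (non-robust) objective for comparison.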
