Abstract

An important challenge in robotics is designing control algorithms for human-robot interaction. One approach is to model the human and robot as players in a dynamic game, and then search for Nash equilibria of that game. We study the problem of ensuring safety in this setting. Our approach makes minimal assumptions about the human's objective, requiring only that the human always acts to avoid unsafe states. This assumption implies that the human always considers taking certain backup actions to avoid an accident. We design an algorithm that uses abstract interpretation to overapproximate the human's reachable set under these backup actions, and then plans so that the robot avoids this reachable set. We prove that, under our assumption, as long as the human plays an optimal strategy, our approach guarantees safety. We evaluate our approach in a user study in which humans interact with a toy driving simulator, demonstrating that our controller ensures safety while allowing the robot to undertake aggressive maneuvers.
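The following is a minimal sketch, not the paper's implementation, of the safety check the abstract describes: interval arithmetic (a simple abstract domain) overapproximates the states the human could reach while executing hypothetical backup (braking) actions, and the robot's planned positions are checked against that set. The 1-D dynamics, bounds, and parameter names are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Interval:
    """A closed interval [lo, hi], used as the abstract state."""
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def intersects(self, other: "Interval") -> bool:
        return self.lo <= other.hi and other.lo <= self.hi


def human_backup_reach(pos: Interval, vel: Interval, decel: float, dt: float, steps: int):
    """Overapproximate the human's positions over `steps` time steps, assuming the
    backup actions are braking at any rate up to `decel` (no acceleration)."""
    reach = []
    for _ in range(steps):
        # Velocity under any backup action lies between full braking and no braking.
        vel = Interval(max(0.0, vel.lo - decel * dt), vel.hi)
        # Position interval grows by the velocity interval (interval addition).
        pos = pos + Interval(vel.lo * dt, vel.hi * dt)
        reach.append(pos)
    return reach


def robot_plan_is_safe(robot_positions, human_reach, margin: float) -> bool:
    """Declare the robot's plan safe if, at every step, its position stays at least
    `margin` away from the human's overapproximated reachable interval."""
    for p, h in zip(robot_positions, human_reach):
        if Interval(p - margin, p + margin).intersects(h):
            return False
    return True


if __name__ == "__main__":
    human = human_backup_reach(Interval(0.0, 1.0), Interval(8.0, 10.0),
                               decel=6.0, dt=0.1, steps=10)
    plan = [20.0 - 0.5 * k for k in range(10)]  # robot approaching the human
    print(robot_plan_is_safe(plan, human, margin=2.0))
```

In this sketch the robot would reject any candidate plan for which `robot_plan_is_safe` returns False and fall back to a conservative maneuver; the paper's method operates on the full dynamic game rather than this 1-D illustration.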
