Abstract

Human-in-the-loop machine learning (HIL-ML) is a widely adopted paradigm for instilling human knowledge in autonomous agents. Many design choices influence the efficiency and effectiveness of such interactive learning processes, particularly the interaction type through which the human teacher provides feedback. While different interaction types (demonstrations, preferences, etc.) have been proposed and evaluated in the HIL-ML literature, there has been little discussion of how these types compare or how they should be selected to best address a particular learning problem. In this talk, I will introduce an organizing principle for interactive machine learning that provides a way to analyze the effects of interaction types on human performance and training data. I will also identify open problems in understanding the effects of interaction types.