Abstract

Structured prediction is the task of jointly predicting correlated outputs; the core computational challenge is doing so in time that is not exponential in the number of outputs, without making assumptions about the correlation structure (such as linear-chain assumptions). I will describe recent work we have done developing novel algorithms, whose core goal is efficiency, based on an imitation-learning view of the structured prediction problem, and I will provide a unified analysis of a large family of such approaches. I will conclude with recent work connecting these ideas to sequential models in deep learning, such as recurrent neural networks, and with modifications of these approaches for a bandit feedback setting.
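As a rough illustration of the imitation-learning view, the sketch below trains a sequence labeler with a DAgger-style learning-to-search loop: the learned policy rolls in to generate states, an oracle (here, the true labels) supplies the best action at each visited state, and the aggregated state-action pairs reduce structured prediction to ordinary classification. The toy data, the feature function, and the use of scikit-learn's LogisticRegression are illustrative assumptions, not details from the talk.

```python
# Minimal DAgger-style learning-to-search sketch for sequence labeling.
# Illustrative assumptions: toy integer observations, a hand-rolled
# feature function, and scikit-learn's LogisticRegression as the learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(x, t, prev_label, n_labels):
    """State features: current token, position, one-hot of previous action."""
    f = np.zeros(2 + n_labels)
    f[0] = x[t]
    f[1] = t
    f[2 + prev_label] = 1.0
    return f

def train_learning_to_search(sequences, labels, n_labels, n_iters=5):
    """Roll in with the current policy, label each visited state with the
    oracle's best action (the true label), aggregate, and retrain.
    Prediction is a left-to-right sweep, so cost is linear in sequence
    length rather than exponential in the number of outputs."""
    clf = None
    X_agg, y_agg = [], []
    for _ in range(n_iters):
        for x, y in zip(sequences, labels):
            prev = 0
            for t in range(len(x)):
                s = features(x, t, prev, n_labels)
                X_agg.append(s)
                y_agg.append(y[t])  # oracle action at this state
                # Roll in with the oracle on the first pass, then the policy.
                prev = y[t] if clf is None else int(clf.predict([s])[0])
        clf = LogisticRegression(max_iter=500).fit(np.array(X_agg), y_agg)
    return clf

# Toy usage: two short sequences where the label tracks the observation.
seqs = [np.array([0, 1, 1, 0]), np.array([1, 0, 1, 1])]
labs = [np.array([0, 1, 1, 0]), np.array([1, 0, 1, 1])]
policy = train_learning_to_search(seqs, labs, n_labels=2)
```

Aggregating states visited under the learned policy, rather than only under the oracle, is what lets this family of methods handle the compounding-error problem that a purely supervised reduction suffers from.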
