Abstract

Today's dominant approach to modeling complex tasks involving human language calls for training neural networks on massive datasets. While this agenda has been undeniably successful, we may not have the luxury of annotated data for every task or domain of interest. Reducing our dependence on labeled examples may require us to rethink how we supervise models.

In this talk, I will describe recent work where we use knowledge to inform neural networks without introducing additional parameters. Declarative rules stated in logic can be systematically compiled into computation graphs that augment the structure of neural models, and also into regularizers that can exploit both labeled and unlabeled examples. I will present experiments on text understanding tasks showing that such declaratively constrained neural networks are not only more accurate, but also more consistent in their predictions across examples.
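As a rough illustration of the second idea (a minimal sketch, not the talk's exact formulation): a logical rule such as A(x) → B(x) can be relaxed into a differentiable penalty on the model's output probabilities, here using the Łukasiewicz relaxation of implication. Because the penalty needs no gold labels, it can also be computed on unlabeled examples. All variable names and the weighting scheme below are hypothetical.

```python
import torch

# Soft-logic relaxation of the rule  A(x) -> B(x).
# Under the Lukasiewicz relaxation, the truth value of (a -> b) for
# probabilities a, b in [0, 1] is min(1, 1 - a + b), so the penalty
# for violating the rule is 1 minus that value: max(0, a - b).
def implication_penalty(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Differentiable penalty: zero when P(B) >= P(A), positive otherwise."""
    return torch.clamp(p_a - p_b, min=0.0)

# Hypothetical usage inside a training step. The rule term requires no
# labels, so it can be added for unlabeled examples as well.
logits_a = torch.randn(8, requires_grad=True)  # model scores for predicate A
logits_b = torch.randn(8, requires_grad=True)  # model scores for predicate B

task_loss = torch.tensor(0.0)  # stand-in for a supervised loss on labeled data
rule_loss = implication_penalty(torch.sigmoid(logits_a),
                                torch.sigmoid(logits_b)).mean()

lambda_rule = 0.1  # hypothetical weight balancing supervision and the rule
loss = task_loss + lambda_rule * rule_loss
loss.backward()  # gradients flow through the compiled rule
```

Because the compiled rule is just another differentiable term in the loss, it adds no new trainable parameters to the model; it only reshapes the objective that the existing parameters are trained against.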
