Abstract

We present DL2, a system for training and querying neural networks with logical constraints. Using DL2, one can declaratively specify domain knowledge to be enforced during training, or pose queries on the model with the goal of finding inputs that satisfy a set of constraints. DL2 works by translating logical constraints into a differentiable loss with desirable mathematical properties, which is then minimized with standard gradient-based methods. As an application, we show how DL2 can be employed to enforce individual fairness constraints in neural networks.
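The core translation can be illustrated with a minimal sketch. The rules below follow the general scheme described in the abstract (comparisons become non-negative penalty terms that are zero exactly when the constraint holds, conjunction becomes a sum, disjunction a product); the function names and the specific example constraint are illustrative, not DL2's actual API.

```python
# Sketch: translating logical constraints into a differentiable loss.
# Each translation is non-negative and equals zero iff the constraint holds.

def le(a, b):
    """Loss for the constraint a <= b."""
    return max(a - b, 0.0)

def eq(a, b):
    """Loss for the constraint a == b (both a <= b and b <= a)."""
    return abs(a - b)

def and_(*losses):
    """Conjunction: zero only if every conjunct's loss is zero."""
    return sum(losses)

def or_(*losses):
    """Disjunction: zero if any disjunct's loss is zero."""
    prod = 1.0
    for loss in losses:
        prod *= loss
    return prod

# Example constraint: (x <= 5) and (x == 3 or x >= 4)
def constraint_loss(x):
    return and_(le(x, 5.0), or_(eq(x, 3.0), le(4.0, x)))

print(constraint_loss(3.0))  # 0.0 -> constraint satisfied
print(constraint_loss(6.0))  # positive -> constraint violated
```

In the full system, such a loss is composed over network outputs and minimized by gradient descent, either as an extra training objective or to search for inputs satisfying a query.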

Bio: Marc Fischer is a PhD student in the Computer Science department at ETH Zurich, supervised by Prof. Martin Vechev. His main area of interest is the reliability of neural networks. In particular, he studies properties such as their robustness and interpretability.

Video Recording