Abstract

We explore the problem of step-wise explaining how to solve constraint satisfaction problems, with a use case on logic grid puzzles. More specifically, we study the problem of explaining the inference steps that one can take during propagation, in a way that is easy to interpret for a person. Thereby, we aim to give the constraint solver explainable agency, which can help build trust in the solver: users can understand, and even learn from, the explanations. The main challenge is finding a sequence of simple explanations, each of which should be as cognitively easy as possible for a human to verify and understand.

The talk consists of two main parts.

In the first part, I will guide the audience through our recently published work (ECAI 2020, and a paper accepted at AIJ), giving the motivation, intuitions, and examples of step-wise explanations, as well as introducing a notion of _nested_ explanations (which further break up a single step of an explanation sequence) and a basic algorithm to compute explanations.
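To make the notion of an explanation step concrete, here is a minimal, purely illustrative sketch (with hypothetical toy variables and constraints, not the algorithm from the papers): a step combines a set of already-derived facts E and a small set of constraints S to justify a newly derived fact N, where E and S together entail N.

```python
from itertools import product

# Toy propositional setting: variables and constraints as boolean functions.
# The names (a, b, c1, c2) are hypothetical, chosen only to illustrate one step.
VARS = ["a", "b"]

def c1(m): return m["a"] or m["b"]   # constraint: a or b
def c2(m): return not m["a"]         # constraint: not a
CONSTRAINTS = {"c1": c1, "c2": c2}

def entails(facts, constraints, literal):
    """Check by enumeration that the facts plus the chosen constraints entail the literal."""
    var, value = literal
    for values in product([False, True], repeat=len(VARS)):
        m = dict(zip(VARS, values))
        if all(m[v] == val for v, val in facts):             # respect the known facts E
            if all(CONSTRAINTS[c](m) for c in constraints):  # respect the constraints S
                if m[var] != value:
                    return False   # a model violates the candidate new fact
    return True

# One explanation step (E, S, N): from no prior facts E, using constraints S,
# the new fact N = "b is true" follows.
E, S, N = [], ["c1", "c2"], ("b", True)
print(entails(E, S, N))  # True: this is a valid (and small) explanation step
```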

In the second part, we take a step back and investigate how to generate not just _good_, but _optimal_ explanations, as well as how to generate them _efficiently_. We will see that this relates to the OCUS problem: the problem of finding an unsatisfiable subset that is optimal among all unsatisfiable subsets satisfying certain meta-constraints. Finally, we will see that a well-known hitting set duality can be used to efficiently find OCUSs.
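As a rough sketch of how the hitting set duality can be exploited (a brute-force toy illustration under assumed constraint and cost definitions, not the implementation from the paper): one repeatedly computes a cost-optimal hitting set of the correction subsets collected so far; if that hitting set is unsatisfiable, it is an OCUS, otherwise it is grown to a satisfiable subset whose complement becomes a new set to hit.

```python
from itertools import product, combinations

# Hypothetical toy example: over variable x, c1: x, c2: not x, c3: a costly tautology.
CONSTRAINTS = {"c1": lambda m: m["x"],
               "c2": lambda m: not m["x"],
               "c3": lambda m: m["x"] or not m["x"]}
COST = {"c1": 1, "c2": 1, "c3": 5}
VARS = ["x"]

def satisfiable(subset):
    """Brute-force check whether a subset of constraints has a model."""
    for values in product([False, True], repeat=len(VARS)):
        m = dict(zip(VARS, values))
        if all(CONSTRAINTS[c](m) for c in subset):
            return True
    return False

def optimal_hitting_set(sets_to_hit):
    """Cheapest subset of constraints intersecting every collected set (brute force)."""
    best = None
    names = list(CONSTRAINTS)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            if all(set(subset) & s for s in sets_to_hit):
                cost = sum(COST[c] for c in subset)
                if best is None or cost < best[0]:
                    best = (cost, set(subset))
    return best[1]

def ocus():
    """Implicit hitting set loop exploiting the duality between unsatisfiable
    subsets and correction subsets."""
    to_hit = []  # correction subsets collected so far
    while True:
        h = optimal_hitting_set(to_hit)
        if not satisfiable(h):
            return h              # cost-optimal unsatisfiable subset found
        # Grow h to a larger satisfiable subset; its complement is a new set to hit.
        grown = set(h)
        for c in CONSTRAINTS:
            if c not in grown and satisfiable(grown | {c}):
                grown.add(c)
        to_hit.append(set(CONSTRAINTS) - grown)

print(ocus())  # {'c1', 'c2'}: the cheapest unsatisfiable subset in this toy example
```

In practice, the brute-force subroutines above would be replaced by a dedicated solver for the optimal hitting set problem and a SAT/CP solver for the satisfiability checks.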

Video Recording