Abstract

We introduce the problem of active causal structure learning with advice. In the typical well-studied setting, the learning algorithm is given the essential graph for the observational distribution and is asked to recover the underlying causal directed acyclic graph (DAG) G* while minimizing the number of interventions made. In our setting, we are additionally given side information about G* as advice, e.g., a DAG G purported to be G*, and we ask whether the learning algorithm can benefit from the advice when it is close to correct, while still retaining worst-case guarantees even when the advice is arbitrarily bad. Our work fits within the growing body of research on algorithms with predictions. When the advice is a DAG G, we design an adaptive search algorithm to recover G* whose intervention cost is at most O(log ψ) times the cost of verifying G; here, ψ is a distance measure between G and G* that is bounded by the number of variables and is 0 when G = G*. Our approximation factor matches the state of the art for the advice-free setting.

Joint work with Davin Choo and Themistoklis Gouleakis (both program participants).