Abstract
First-order model counting (FOMC) is the task of counting the models of a first-order logic sentence over a given set of domain elements. Its weighted variant, WFOMC, generalizes FOMC by assigning weights to the models and has applications in, among other areas, statistical relational learning. More than ten years of research by various authors has led to the identification of non-trivial classes of WFOMC problems that can be solved in time polynomial in the number of domain elements. This talk covers recent developments on WFOMC and on the related problem of weighted first-order model sampling (WFOMS). I will also discuss some applications of WFOMC and WFOMS, e.g., the automated solving of problems from enumerative combinatorics and elementary probability theory. Finally, I will describe how one could use the framework of WFOMS as the basis for a declarative framework for sampling combinatorial structures beyond what is currently provided in the libraries of mainstream programming languages; this last application calls for more research on representations supporting efficient WFOMS. The talk is mostly based on the following papers: https://arxiv.org/abs/2302.02730, https://jair.org/index.php/jair/article/view/12320/26673, https://proceedings.kr.org/2021/57/kr2021-0057-van-bremen-et-al.pdf, https://ojs.aaai.org/index.php/AAAI/article/view/26449 and https://arxiv.org/abs/2302.04606.
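As a minimal illustration of what FOMC computes (a sketch of my own, not material from the talk or the cited papers), one can brute-force the model count of the sentence ∀x∀y (F(x,y) → F(y,x)), i.e., count the symmetric binary relations over a domain of n elements, which is 2^(n(n+1)/2); the efficient WFOMC algorithms discussed in the talk avoid this exponential enumeration:

```python
from itertools import product

def count_models_symmetry(n):
    """Brute-force FOMC for the sentence
    forall x forall y: F(x, y) -> F(y, x)
    over a domain {0, ..., n-1}: enumerate every binary relation F
    (every truth assignment to the n*n ground atoms F(x, y))
    and count those satisfying the sentence, i.e., the symmetric ones."""
    domain = range(n)
    atoms = [(x, y) for x in domain for y in domain]
    count = 0
    for bits in product([False, True], repeat=len(atoms)):
        F = dict(zip(atoms, bits))  # one candidate model
        if all(not F[(x, y)] or F[(y, x)] for x in domain for y in domain):
            count += 1
    return count

# Matches the closed form 2^(n(n+1)/2): 2, 8, 64 for n = 1, 2, 3.
```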