Neurosymbolic learning is an emerging paradigm that, at its core, combines the complementary worlds of classical algorithms and deep learning; in doing so, it ushers in more accurate, interpretable, and domain-aware solutions for today's most complex machine learning challenges. I will begin by reviewing the fundamentals that have defined this intersection thus far, such as algorithmic supervision, symbolic reasoning, and differentiable programming. I will then present Scallop, a general-purpose programming language that allows a wide range of modern neurosymbolic learning applications to be written and trained in a data- and compute-efficient manner. Scallop achieves these goals through three salient overarching design decisions: 1) a flexible symbolic representation based on the relational data model; 2) a declarative logic programming language that builds on Datalog; and 3) a framework for automatic and efficient differentiable reasoning based on the theory of provenance semirings. I will present case studies demonstrating how Scallop expresses algorithmic reasoning in a diverse and challenging set of AI tasks, provides a succinct interface for machine learning programmers to integrate logical domain-specific knowledge, and outperforms state-of-the-art deep neural network models in terms of accuracy and efficiency. This is joint work with PhD students Ziyang Li and Jiani Huang.
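To give a flavor of the third design decision, here is a minimal Python sketch of how a provenance semiring can make logical reasoning differentiable: each fact carries a probability (e.g. a neural network's confidence), logical "or" and "and" are replaced by smooth semiring operations, and a Datalog-style rule is evaluated over these scores. The rule, relation names, and probability semiring below are illustrative assumptions for exposition, not Scallop's actual syntax or API.

```python
# Illustrative sketch (not Scallop's API): evaluating a Datalog-style
# reachability rule under a probability semiring, so the result is a
# differentiable function of the input fact probabilities.

def prob_or(a, b):
    # Semiring "plus": probability of a disjunction of two
    # derivations, assuming independence.
    return a + b - a * b

def prob_and(a, b):
    # Semiring "times": probability of a conjunction of two facts,
    # assuming independence.
    return a * b

# Input facts edge(x, y), each tagged with a confidence score
# (in practice, predicted by a neural network).
edge = {(0, 1): 0.9, (1, 2): 0.8, (0, 2): 0.1}

# Rule: path(x, z) :- edge(x, z)  or  edge(x, y), edge(y, z).
def path(x, z):
    score = edge.get((x, z), 0.0)          # direct edge
    for (a, b), p in edge.items():         # two-hop derivations
        if a == x:
            score = prob_or(score, prob_and(p, edge.get((b, z), 0.0)))
    return score

print(path(0, 2))  # aggregates the direct and two-hop derivations
```

Because `path` is built entirely from additions and multiplications, its output is differentiable with respect to each edge probability, which is what lets gradients flow from a downstream loss back into the neural network that produced the facts; swapping in a different semiring (e.g. max-product) changes the reasoning semantics without changing the rule.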

