Abstract

In this talk, I will introduce probabilistic soft logic (PSL), a declarative probabilistic programming language for collective inference from richly structured data. In PSL, logical dependencies are expressed as a collection of weighted rules written in a declarative, Datalog-like language. These weighted rules are then interpreted as potential functions in a probabilistic factor graph that we call a hinge-loss Markov random field (HL-MRF). A unique property of HL-MRFs is that maximum a posteriori (MAP) inference is a convex optimization problem, which makes collective inference from richly structured data highly scalable. HL-MRFs unify three different approaches to convex inference: LP relaxations used by randomized approximation algorithms for MAX SAT, local consistency relaxations for probabilistic graphical models, and inference in soft logic. I will show that all three lead to the same inference objective. HL-MRFs typically have richly connected yet sparse dependency structures, and I will describe an inference algorithm that exploits this fine-grained dependency structure and is much more scalable than general-purpose convex optimization approaches.
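To make the rule-to-potential mapping concrete, here is a minimal sketch of the kind of weighted rules PSL accepts, written in its Datalog-like syntax; the predicates (Friends, Votes, Party, Endorses) and the weights are illustrative examples, not taken from the talk:

    10.0 : Friends(A, B) & Votes(A, P) -> Votes(B, P) ^2
    5.0  : Party(A, Q) & Endorses(Q, P) -> Votes(A, P) ^2

Under PSL's soft-logic semantics, each ground rule r with weight w_r becomes a hinge-loss potential measuring the rule's distance to satisfaction, and MAP inference takes the form

    \arg\min_{y \in [0,1]^n} \; \sum_r w_r \big( \max\{0, \ell_r(x, y)\} \big)^{p_r}, \qquad p_r \in \{1, 2\},

where each \ell_r is a linear function of the relaxed, continuous truth values y given evidence x (the trailing ^2 in a rule selects the squared potential, p_r = 2). Each term is a maximum of linear functions, optionally squared, so the objective is convex and MAP reduces to convex optimization over the unit hypercube.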

Video Recording