Abstract

The computational complexity of inference problems, and of average-case problems in general, displays a curious dichotomy. In some domains (notably cryptography) computational hardness seems to be the norm, and "problems are hard unless proven otherwise", while in other domains, machine learning in particular, even non-convex problems are routinely solved, leading some to proclaim that "P = NP in practice".

In this talk we will discuss the tools we have, and the questions that remain open, on the way to a coherent worldview that encapsulates both of these observations. We will focus on constraint-satisfaction problems, 2XOR and 3XOR in particular, presenting some of the ways we can argue about the (in)tractability of average-case problems. In particular, we will define the search, decision, sampling, counting, and refutation variants of average-case problems, and discuss both common algorithms for solving them and ways to argue about the limitations of such algorithms. These will include (time permitting) the statistical query model, the low-degree likelihood ratio, Markov Chain Monte Carlo, Sum of Squares, Belief Propagation, and more. Given the time restrictions, we will not be able to go into detail on any one model, but will give a bird's-eye view of how these relate to one another, and also discuss how relevant these idealized problems are to the modern "deep learning" setting.
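To make the 3XOR example concrete, here is a minimal sketch (the helper names `random_3xor` and `solve_gf2` are hypothetical, not from the talk). A 3XOR instance is a system of equations x_i ⊕ x_j ⊕ x_k = b over n Boolean variables. Because these are linear equations over GF(2), the noiseless search problem is easy: Gaussian elimination finds a satisfying assignment (or a proof of inconsistency) in polynomial time. The interesting average-case phenomena arise for the noisy and refutation variants, where this linear-algebraic shortcut no longer applies.

```python
import random

def random_3xor(n, m, planted=None, seed=0):
    """Sample m random 3XOR equations x_i ^ x_j ^ x_k = b over n variables.
    If `planted` (a 0/1 list) is given, each right-hand side b is chosen so
    the planted assignment satisfies the equation."""
    rng = random.Random(seed)
    eqs = []
    for _ in range(m):
        i, j, k = rng.sample(range(n), 3)
        b = rng.randrange(2) if planted is None else planted[i] ^ planted[j] ^ planted[k]
        eqs.append((i, j, k, b))
    return eqs

def solve_gf2(n, eqs):
    """Gaussian elimination over GF(2), with each row packed into an int
    (bit i = coefficient of x_i, bit n = right-hand side).
    Returns a satisfying assignment, or None if the system is inconsistent."""
    pivots = {}  # pivot position -> row whose lowest set variable-bit is that position
    for i, j, k, b in eqs:
        r = (1 << i) | (1 << j) | (1 << k) | (b << n)
        for p in sorted(pivots):  # reduce by existing pivots, low to high
            if (r >> p) & 1:
                r ^= pivots[p]
        lead = next((p for p in range(n) if (r >> p) & 1), None)
        if lead is None:
            if (r >> n) & 1:
                return None  # derived 0 = 1: refutation of satisfiability
        else:
            pivots[lead] = r
    # Back substitution; free variables default to 0.
    x = [0] * n
    for p in sorted(pivots, reverse=True):
        r = pivots[p]
        val = (r >> n) & 1
        for q in range(p + 1, n):
            if (r >> q) & 1:
                val ^= x[q]
        x[p] = val
    return x

# A planted (hence satisfiable) instance is solved exactly.
n, m = 50, 200
planted = [random.Random(1).randrange(2) for _ in range(n)]
eqs = random_3xor(n, m, planted=planted)
x = solve_gf2(n, eqs)
assert x is not None and all(x[i] ^ x[j] ^ x[k] == b for i, j, k, b in eqs)
```

The recovered assignment need not equal the planted one (the system can have a kernel), but it satisfies every equation. Once each equation is corrupted independently with small probability, exact elimination fails, and one is led to the search/refutation questions and the algorithmic frameworks discussed in the talk.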
