Knowledge of the zeros of a statistical mechanics partition function is mainly useful for telling us what singularities {\it do not} occur as some parameter (such as magnetic field) is varied. Reflection positivity (RP), on the other hand, can sometimes tell us what singularities do occur, especially as concerns phase transitions and long-range order. (Exceptionally, both are valid for the Ising model.) I will not say anything about zeros, but I will try to explain RP and give one example in which it works: how RP proves alignment of interacting dimers (joint work with O. Heilmann).

### Monday, March 18th, 2019

Motivated by a connection with the Lovasz Local Lemma, we design an algorithm to compute the multivariate independence polynomial of a graph in the "Shearer region". More generally, we can compute the independence polynomial in a natural complex extension of the Shearer region. A contribution of technical interest is a multivariate version of the correlation decay method.

No abstract available.

The Ising model was one of the first spin systems (or graphical models) to be studied. Over the years, it has been used as a model for several phenomena rather different from its origins in statistical mechanics. Two important questions in many such applications are the related problems of sampling from the distribution given by the model, and the (approximate) computation of the so-called "partition function" of the model. This talk will look at methods for the deterministic approximation of the partition function of the Ising model, and its connections with notions of phase transitions in statistical physics. Based on joint work with Jingcheng Liu and Alistair Sinclair.

The six-vertex model and its generalization the eight-vertex model originate and are widely studied in statistical physics, as a family of vertex models for crystal lattices with hydrogen bonds. In the language of graph theory, these models are defined over 4-regular graphs. The states of the six-vertex model are (weighted) Eulerian orientations of the underlying graph, and the states of the eight-vertex model are (weighted) even orientations in which there is an even number of arrows into (and out of) each vertex. We study the classification of the approximation complexity of the partition functions of the six- and the eight-vertex models on all 4-regular graphs. Our complexity results conform to the phase transition phenomenon from physics and show tight connections with the complexity of approximate counting perfect matchings (on not necessarily bipartite graphs)---a central open problem in this field. In particular we give a characterization for all arity 4 functions realizable by (non-planar) matchgates, using a geometric proof. In this talk, I will outline some of our findings and techniques.

### Tuesday, March 19th, 2019

In this talk we discuss the position of zeros of some polynomials of combinatorial nature. These are partition functions (Lee-Yang circle theorem) or graph-counting polynomials (Heilmann-Lieb theorem and generalizations). We base our discussion on a simple lemma (Asano-Ruelle).

In this talk we introduce and study a partition function that counts degree-constrained subgraphs of a finite graph. In particular, our partition function generalizes the matching polynomial. Surprisingly, it is also possible to count degree-constrained orientations with the same partition function. As an application, we give a short proof of a theorem of A. Schrijver concerning a lower bound on the number of Eulerian orientations of a graph with only even degrees. We will discuss some applications to matchings too. This is joint work with Márton Borbényi.
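For a small graph the matching polynomial that this partition function generalizes can be computed directly by brute force. A minimal illustrative sketch (not the talk's machinery; the graph and the enumeration strategy are my own choices for the example):

```python
from itertools import combinations

def matching_polynomial(n, edges):
    """Coefficients [m_0, m_1, ...] of the matching generating
    polynomial sum_k m_k x^k, where m_k counts k-edge matchings
    of a graph on n vertices. Brute force over edge subsets,
    fine for small graphs."""
    coeffs = [0] * (n // 2 + 1)
    for k in range(len(coeffs)):
        for sub in combinations(edges, k):
            verts = [v for e in sub for v in e]
            if len(set(verts)) == 2 * k:  # edges pairwise disjoint
                coeffs[k] += 1
    return coeffs

# 4-cycle: one empty matching, four single edges, two perfect matchings
c4 = matching_polynomial(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```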

In this talk, I will present a generalization of Gurvits' capacity. This generalization is arrived at by viewing Gurvits' capacity through the lens of entropy. Subsequently, results from the geometry of polynomials and convex optimization are combined to show that this generalized capacity function can provide a deterministic approximate counting algorithm for a large class of discrete counting problems. Based on joint works with Damian Straszak.

The permanent of a Gaussian matrix is known to be #P-hard to approximate in the worst case. It is also #P-hard to compute exactly on average. Should we expect that it remains #P-hard to approximate on average? In this work we take a first step towards resolving this question: we present a quasipolynomial-time deterministic algorithm for approximating the permanent of a typical n x n random matrix with unit variance and vanishing mean µ = 1/polyloglog(n) to within inverse polynomial multiplicative error. Alternatively, one can achieve permanent approximation for matrices with mean µ = 1/polylog(n) in sub-exponential time. The proposed algorithm significantly extends the regime of matrices for which efficient approximation of the permanent is known: unlike previous algorithms, which require a stringent correlation between the signs of the entries of the matrix, it can tolerate random ensembles in which this correlation is negligible (albeit non-zero). Important special cases include: 1. Biased Gaussian: each entry is a complex Gaussian with unit variance and mean µ. 2. Biased Bernoulli: each entry is −1 + µ with probability 1/2, and 1 + µ with probability 1/2. These results counter the common intuition that the difficulty of computing the permanent, even approximately, stems merely from our inability to treat matrices with many opposing signs. The Gaussian ensemble approaches the threshold of a conjectured hardness of computing the permanent of a zero-mean Gaussian matrix. This conjecture is one of the baseline assumptions of the BosonSampling paradigm, which has received vast attention in recent years in the context of quantum supremacy experiments. We furthermore show that the permanent of the biased Gaussian ensemble is #P-hard to compute exactly on average. To our knowledge, this is the first natural example of a counting problem that becomes easy only when average case and approximation are combined.
On a technical level, our approach stems from a recent approach taken by Barvinok, who used a Taylor series approximation of the logarithm of a certain univariate polynomial related to the permanent. Our main contribution is to introduce an average-case analysis of such related polynomials.
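Barvinok's interpolation step can be illustrated on any polynomial that is zero-free on a disk around the origin of radius larger than the evaluation point: recover the Taylor coefficients of log p from those of p and truncate. A minimal numerical sketch (only the generic approximation step, not the authors' average-case algorithm; the example polynomial is my own choice):

```python
import math

def log_coeffs(p, m):
    """First m Taylor coefficients l_1..l_m of log p(z) at z = 0,
    given polynomial coefficients p = [p_0, p_1, ...] with p_0 != 0.
    Uses k*p_k = sum_{i=1}^{k} i*l_i*p_{k-i}, from p' = p*(log p)'."""
    l = [0.0] * (m + 1)
    for k in range(1, m + 1):
        pk = p[k] if k < len(p) else 0.0
        s = sum(i * l[i] * (p[k - i] if k - i < len(p) else 0.0)
                for i in range(1, k))
        l[k] = (k * pk - s) / (k * p[0])
    return l

def barvinok_eval(p, m):
    """Approximate p(1) as p_0 * exp(truncated log series at z = 1);
    accurate when p has no zeros in a disk of radius > 1."""
    l = log_coeffs(p, m)
    return p[0] * math.exp(sum(l[1:]))

# p(z) = (1 + z/3)^4 has all zeros at z = -3, so convergence is fast
p = [1.0, 4 / 3, 2 / 3, 4 / 27, 1 / 81]
approx = barvinok_eval(p, 25)   # close to p(1) = (4/3)^4
```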

The notion of the capacity of a polynomial was introduced by Gurvits around 2005, originally to give drastically simplified proofs of the Van der Waerden lower bound for permanents of doubly stochastic matrices and Schrijver's inequality for perfect matchings of regular bipartite graphs. Since this seminal work, capacity has been utilized to bound various combinatorial quantities and to give polynomial-time algorithms to approximate such quantities. These types of results are often proven by giving bounds on how much a particular differential operator can change the capacity of a given polynomial. In this talk, we use this method to give a new proof of Csikvári's lower bound on the number of size-k matchings of a biregular bipartite graph. On the way, we will present tight capacity preservation bounds for real stability preservers, and discuss the implications of this result beyond that of counting matchings.
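Gurvits' basic inequality per(A) >= cap(p_A) * n!/n^n, with p_A(x) = prod_i (Ax)_i, can be checked numerically on small matrices. A sketch under my own illustrative choices (gradient descent in log-coordinates to compute the capacity, and a doubly stochastic example where cap = 1, recovering the van der Waerden-type bound); this is not the capacity-preservation machinery of the talk:

```python
import math
from itertools import permutations
import numpy as np

def permanent(A):
    """Brute-force permanent, for small n only."""
    n = len(A)
    return sum(math.prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def capacity(A, steps=2000, lr=0.1):
    """cap(p_A) = inf_{x>0} prod_i (Ax)_i / prod_j x_j, minimized
    over x = exp(y); the objective is convex in y."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    y = np.zeros(n)
    for _ in range(steps):
        x = np.exp(y)
        r = A @ x                                 # row values (Ax)_i
        grad = (A / r[:, None]).sum(axis=0) * x - 1.0
        y -= lr * grad
    x = np.exp(y)
    return float(np.prod(A @ x) / np.prod(x))

# Doubly stochastic example: half identity, half a cyclic shift
A = 0.5 * np.eye(4) + 0.5 * np.roll(np.eye(4), 1, axis=1)
cap = capacity(A)                        # equals 1 for doubly stochastic A
bound = cap * math.factorial(4) / 4**4   # Gurvits/van der Waerden bound
```

Here per(A) = 2 * 0.5^4 = 0.125, comfortably above the bound 24/256 = 0.09375.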

### Wednesday, March 20th, 2019

One fairly standard version of the Riemann Hypothesis (RH) is that a specific probability density on the real line has a moment generating function (Laplace transform) that, as an analytic function on the complex plane, has all its zeros pure imaginary. In statistical physics, a theorem of Lee and Yang from the 1950s provides a way to generate probability densities with that same property. How closely these two topics are related to each other is of some interest. We'll review a series of results spanning the period from the 1920s to now concerning a perturbed version of the RH, which demonstrate at least an historical relation. In the perturbed version, due to Polya, the log of the probability density is modified by a quadratic term. This gives rise to an implicitly defined real constant known as the de Bruijn-Newman constant, Lambda. The conjecture, and now theorem (Newman 1976, Rodgers and Tao 2018), that Lambda is greater than or equal to zero is complementary to the RH, which is equivalent to Lambda less than or equal to zero; the conjecture/theorem is a version of the dictum that the RH, if true, is only barely so. Until very recently, the best upper bound was a 2009 result of Ki, Kim and Lee that Lambda is strictly less than 1/2. The current upper bound (see Polymath 15) is around 0.22.
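For orientation, the standard setup behind these statements can be written down explicitly (in the usual normalization; stated here as background, not as part of the talk's new results):

```latex
\Phi(u) \;=\; \sum_{n=1}^{\infty}\bigl(2\pi^{2}n^{4}e^{9u} - 3\pi n^{2}e^{5u}\bigr)\,e^{-\pi n^{2}e^{4u}},
\qquad
H_{t}(z) \;=\; \int_{0}^{\infty} e^{tu^{2}}\,\Phi(u)\cos(zu)\,du .
```

Here H_0 is, up to normalization, the Riemann xi function, so the RH is equivalent to H_0 having only real zeros; the de Bruijn-Newman constant is Lambda = inf { t : H_t has only real zeros }, making the RH equivalent to Lambda <= 0, while Newman's conjecture (now the Rodgers-Tao theorem) is Lambda >= 0.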

We study the problem of approximating the matching polynomial on graphs with edge parameter $\gamma$, where $\gamma$ takes arbitrary values in the complex plane. When $\gamma$ is a positive real, Jerrum and Sinclair showed that the problem has an FPRAS on general graphs. For general complex values of $\gamma$, Patel and Regts, building on methods developed by Barvinok, showed that the problem has an FPTAS on graphs of maximum degree $\Delta$ as long as $\gamma$ is not a negative real number less than or equal to $-1/(4(\Delta-1))$. Our first result completes the picture for the approximability of the matching polynomial on bounded degree graphs. We show that for all $\Delta\geq 3$ and all real $\gamma$ less than $-1/(4(\Delta-1))$, the problem of approximating the value of the matching polynomial on graphs of maximum degree $\Delta$ with edge parameter $\gamma$ is \#P-hard. We then explore the extent to which the maximum degree parameter can be replaced by the connective constant. Sinclair et al. showed that, for positive real $\gamma$, it is possible to approximate the value of the matching polynomial using a correlation decay algorithm on graphs with bounded connective constant (and potentially unbounded maximum degree). We first show that this result does not extend in general in the complex plane; in particular, the problem is \#P-hard on graphs with bounded connective constant for a dense set of $\gamma$ values on the negative real axis. Nevertheless, we show that the result does extend for any complex value $\gamma$ that does not lie on the negative real axis. Our analysis accounts for complex values of $\gamma$ using geodesic distances in the complex plane in the metric defined by an appropriate density function. Joint work with Ivona Bezakova, Andreas Galanis, and Daniel Stefankovic.

I will present a framework for deterministic approximate counting in discrete structures where linear optimization is easy; these include matroids and their pairwise intersections. The key to the performance of this framework is a set of inequalities lower and upper bounding the entropy of probability distributions on these structures in terms of entropies of their projections/marginals. The lower bounds can be thought of as an analog of van der Waerden's lower bound on the permanent of doubly stochastic matrices, and are related to log-concave polynomials. The upper bounds are related to the subadditivity of entropy. Time permitting, I will also discuss improved versions of these inequalities in the important special case of bipartite perfect matchings/permanents, which has led to the current best approximation ratio of 2^{n/2}. Based on joint works with Shayan Oveis Gharan, Cynthia Vinzant, and Alireza Rezaei.

The Ising model originated in statistical physics, where it was used to model magnetic objects. A classical result of Lee and Yang says that absence of zeros of the partition function implies no phase transition, i.e., analytic dependence on the parameters. More recently, it became clear that absence of zeros also implies the existence of efficient approximation algorithms. Another classical result of Lee and Yang confines the zeros of the partition function of the Ising model to the unit circle in the complex plane. In this talk I will give a precise description of the location of these zeros for the class of bounded degree graphs. This description is obtained using algorithmic techniques and ideas from complex dynamical systems. Based on joint work with Han Peters.
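The circle theorem is easy to verify numerically on a small instance: for the ferromagnetic Ising model on a graph G with edge weight b in (0,1), the partition function in the field variable lambda is Z_G(lambda) = sum over S subset of V of b^{|cut(S)|} lambda^{|S|}, and Lee-Yang says every zero satisfies |lambda| = 1. A brute-force sketch (the triangle and the weight b = 1/2 are arbitrary choices for illustration, not examples from the talk):

```python
from itertools import combinations
import numpy as np

def ising_field_polynomial(n, edges, b):
    """Coefficients c_0..c_n of Z_G(l) = sum_S b^{cut(S)} l^{|S|},
    summing over the subsets S of vertices whose spin is +."""
    coeffs = [0.0] * (n + 1)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            s = set(S)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            coeffs[k] += b ** cut
    return coeffs

# Triangle with b = 1/2: Z = 1 + 0.75*l + 0.75*l^2 + l^3
c = ising_field_polynomial(3, [(0, 1), (1, 2), (2, 0)], 0.5)
roots = np.roots(c[::-1])   # np.roots expects highest degree first
moduli = np.abs(roots)      # all equal to 1, by Lee-Yang
```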

I will first explain the importance of the permanent of PSD matrices in Quantum Linear Optics, and place it in the more general context of the so-called quantum permanent of bipartite density matrices.

In this talk I will show how to get a c^n approximation algorithm for the permanent of PSD matrices, where c is a universal constant (prior to this result, not even a randomized poly-time algorithm with worst-case simply-exponential relative error was known), and an e^n approximation algorithm for the quantum permanent of separable, aka non-entangled, bipartite density matrices.

Our algorithm is based on a semidefinite program with a convex nonlinear objective. Our proof of its correctness is essentially a new van der Waerden-like lower bound: what is the smallest value of the permanent of projectors onto images of n × n correlation matrices?

If time permits, I will talk about how our result can “save” the recently refuted permanent-on-top conjecture, as well as its application to approximating the largest eigenvalue of certain n! × n! PSD matrices.

Based on joint work with Nima Anari, Shayan Oveis Gharan, and Amin Saberi.

### Thursday, March 21st, 2019

The cluster expansion is a fundamental tool in mathematical physics that has many fruitful connections to other domains of mathematics, e.g., the Lovasz Local Lemma. In this talk I'll introduce abstract polymer models and the cluster expansion for their partition functions, and show how the cluster expansion yields efficient deterministic counting algorithms if the polymer model satisfies certain conditions. I will give an application of this framework to the Potts and hard-core models on expander graphs at low temperature. Based on joint works with T. Helmuth, G. Regts, M. Jenssen, and P. Keevash.
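As background, the standard form of these objects (stated in the usual Kotecký-Preiss normalization; the talk's own conditions may differ in detail): an abstract polymer model has partition function and log-partition function

```latex
\Xi \;=\; \sum_{\substack{\Gamma \subseteq \mathcal{P} \\ \text{pairwise compatible}}} \;\prod_{\gamma \in \Gamma} w_{\gamma},
\qquad
\log \Xi \;=\; \sum_{\Gamma \text{ cluster}} \varphi(\Gamma) \prod_{\gamma \in \Gamma} w_{\gamma},
```

where the second sum runs over clusters (multisets of polymers whose incompatibility graph is connected) and \varphi is the Ursell function. The expansion converges, and its truncations yield the efficient algorithms in question, under conditions such as Kotecký-Preiss: there is a function a with \sum_{\gamma' \not\sim \gamma} |w_{\gamma'}|\, e^{a(\gamma')} \le a(\gamma) for every polymer \gamma.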

We obtain an FPTAS for counting the number of independent sets and colorings for almost every random d-regular bipartite graph when the degree d is sufficiently large (as a function of the number of colors q, in the case of counting colorings). This extends a recent result by Jenssen, Keevash and Perkins (SODA 2019) and answers an open question raised there.

Pirogov-Sinai theory was developed to understand discrete statistical mechanics models at low temperatures. In recent joint work with W. Perkins and G. Regts we have shown that Pirogov-Sinai theory is also useful for the development of efficient deterministic counting algorithms. I will introduce the basic ideas of the theory and the key notion of a contour model, a more sophisticated version of an abstract polymer model. Efficient algorithms can then be obtained by combining contour models with the cluster expansion tools introduced in the talk of W. Perkins. As applications of these ideas I will present efficient approximate counting algorithms for the Potts and hard-core models on Z^d at low temperatures and high densities, respectively.

I will describe an efficient algorithm for finding the ground state of an n-particle quantum system subject to a 1D local Hamiltonian with a spectral gap. This amounts to solving for the lowest eigenvector of a succinctly described linear operator on a space of exponential dimension -- the kind of challenge that certain counting problems also encounter. This is joint work with I. Arad, U. Vazirani, and T. Vidick.

Graphical models (GM) represent multivariate and generally not normalized probability distributions. Computing the normalization factor, called the partition function (PF), is the main inference challenge relevant to multiple statistical and optimization applications. The problem is of an exponential complexity with respect to the number of variables. In this work, aimed at approximating the PF, we consider Multi-Graph Models (MGMs), where binary variables and multivariable factors are associated with edges and nodes, respectively, of an undirected multi-graph. We suggest a new methodology for analysis and computations that combines the Gauge Function (GF) technique with techniques from the field of real stable polynomials. We show that the GF, representing a single term singled out in a finite-sum expression for the PF which achieves its extremum at the so-called Belief-Propagation (BP) gauge, has a natural polynomial representation in terms of gauges/variables associated with edges of the multi-graph. Moreover, the GF can be used to recover the PF through a sequence of transformations allowing appealing algebraic and graphical interpretations. Algebraically, one step in the sequence consists in the application of a differential operator over gauges associated with an edge. Graphically, the sequence is interpreted as a repetitive elimination/contraction of edges, resulting in MGMs on graphs of decreasing size (number of edges) with the same PF as the original MGM. Even though the complexity of computing factors in the sequence of derived MGMs and respective GFs grows exponentially with the number of eliminated edges, the polynomials associated with the new factors remain bi-stable if the original factors have this property. Moreover, we show that the BP estimations in the sequence do not decrease, each lower-bounding the PF. https://arxiv.org/abs/1811.04713

### Friday, March 22nd, 2019

Negative dependence models "repelling particles" in discrete probability theory and statistical mechanics. We introduce and study Lorentzian polynomials and Lorentzian distributions, which are relaxations of stable polynomials and strong Rayleigh distributions. We prove that Lorentzian distributions satisfy several negative dependence properties, and apply the results to concrete problems in matroid theory, linear algebra and statistical mechanics. This is joint work with June Huh.

I will present a fully polynomial-time approximation scheme (FPTAS) to count the number of q-colorings for k-uniform hypergraphs with maximum degree D if k >= 28 and q > 315D^{14/(k-14)}, as well as an accompanying sampling algorithm with slightly worse conditions. These are the first approximate counting and sampling algorithms in the regime q << D (for large D and k) without any additional assumptions. In these regimes, frozen colorings are present, forming an obstacle to the standard Markov chain Monte Carlo approach. Instead, our method is based on the recent work of Moitra (STOC 2017), in which the Lovasz Local Lemma is the key ingredient. Joint work with Chao Liao, Pinyan Lu, and Chihao Zhang.

In this talk I will introduce certain identities for independence polynomials that are similar to the Christoffel-Darboux identities used in the theory of orthogonal polynomials. As an application, we will see that certain conditions on the induced subgraphs of a graph G translate into results on zero-free regions of the independence polynomial of G. In particular, we give a new proof of a theorem of Chudnovsky and Seymour, which states that the independence polynomial of a claw-free graph has only real zeros.

There has been considerable success in using combinatorial techniques to rigorously identify computational complexity transitions in statistical mechanical models. Recently developed techniques have provided the ability to study the complexity of these models with complex parameters, which are closely related to probability amplitudes in quantum physical models. In this talk, we present some results on how these techniques can be applied to identify computational complexity transitions in quantum physics. Our first result is an efficient deterministic algorithm for approximating certain output probability amplitudes of a class of commuting quantum circuits when the interaction graph has bounded maximum degree d and the interactions are absolutely bounded from above by Omega(1/d) (see arXiv:1806.11282). Key to our algorithm is an approach due to Barvinok for approximating evaluations of a polynomial based on the location of the complex zeros and a technique due to Patel and Regts for efficiently computing coefficients of graph polynomials on bounded degree graphs. Our second result is a hardness proof for these amplitudes when the interactions are bounded from above by O(1/d). Specifically, we show that additive-error approximations are BQP-hard and relative-error approximations are #P-hard. These results demonstrate quantum computational complexity transitions at Theta(1/d), where additive-error approximations transition from P to BQP-hard and relative-error approximations transition from P to #P-hard. This is joint work with Michael Bremner.