# Abstracts

### Tuesday, April 30th, 2019

9:30 am – 10:15 am
Speaker: James Saunderson (Monash University)

I will discuss a way to construct, from a hyperbolic polynomial and a direction of hyperbolicity, families of nonnegative polynomials that can be optimized over via hyperbolic programming. We call a hyperbolic polynomial "SOS-hyperbolic" if all the nonnegative polynomials produced by this construction are sums of squares. Of most interest are the hyperbolic polynomials that are not SOS-hyperbolic, since these could lead to hyperbolic programming relaxations of optimization problems that are, in some sense, more powerful than semidefinite relaxations. After introducing these general ideas, this talk will mostly focus on the case of hyperbolic cubics, discussing what is known (as well as open questions) about when they are, and are not, SOS-hyperbolic. Along the way, we establish NP-hardness of deciding hyperbolicity of cubics, and give an example of an explicit hyperbolic cubic for which no power has a definite determinantal representation.
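As a minimal illustration of the definition at play here (the polynomial and direction below are standard examples, not taken from the talk): a polynomial $p$ is hyperbolic with respect to a direction $e$ if every univariate restriction $t \mapsto p(x + te)$ is real-rooted. A numerical sketch:

```python
import numpy as np

def restricted_roots(p, x, e):
    """Roots of the univariate restriction t -> p(x + t*e)."""
    t = np.poly1d([1.0, 0.0])
    return np.roots(p([t * ei + float(xi) for xi, ei in zip(x, e)]).coeffs)

# e_2, the degree-2 elementary symmetric polynomial in three variables,
# is hyperbolic with respect to e = (1, 1, 1): all restrictions are real-rooted.
e2 = lambda v: v[0] * v[1] + v[0] * v[2] + v[1] * v[2]
direction = (1.0, 1.0, 1.0)

rng = np.random.default_rng(0)
max_imag = max(
    np.max(np.abs(restricted_roots(e2, rng.normal(size=3), direction).imag))
    for _ in range(100)
)
# max_imag is numerically zero: every sampled restriction is real-rooted
```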

10:30 am – 11:15 am
Speaker: Levent Tuncel (University of Waterloo)

We will present new primal-dual algorithms for hyperbolic programming problems. Our algorithms have many desirable properties, including polynomial-time iteration complexity for computing an approximate solution (for well-posed instances). In the special case of dense SDPs (as well as for symmetric cone programming problems with self-scaled barriers), our algorithms specialize to the Nesterov–Todd algorithm. However, for certain sparse SDPs, our algorithms are new and have additional desirable properties. The talk is based on joint work with Lieven Vandenberghe.

11:45 am – 12:30 pm
Speaker: James Renegar (Cornell University)

First-order methods have dominated the literature in continuous optimization for well over a decade, a consequence of these algorithms being capable of solving far larger problems than are within the reach of second-order methods (e.g., interior point methods), the critical difference being the amount of core memory (RAM) required per iteration. However, strong assumptions must be satisfied to ensure low memory requirements for first-order methods, perhaps the most notable being that the feasible region of an optimization problem is appropriately simple (e.g., a ball, a box, or a simplex), quite unlike the feasible region of a general hyperbolic program. Nevertheless, as we discuss, a hyperbolic program can be transformed into equivalent optimization problems to which standard first-order methods can be applied, with the feasible regions of the transformed problems being affine spaces, hence most certainly "appropriately simple". Moreover, procedures for computing (sub)gradients of the transformed problems can be determined by first considering the case of semidefinite programming, and then making judicious use of the Helton-Vinnikov Theorem. "Optimal" first-order methods result for hyperbolic programming.
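The point that affine feasible regions are "appropriately simple" can be made concrete: Euclidean projection onto $\{x : Ax = b\}$ is a single linear solve, so projected (sub)gradient steps are cheap. A minimal sketch (with an illustrative objective and constraint, not those of the talk):

```python
import numpy as np

def project_affine(x, A, b):
    """Euclidean projection of x onto the affine set {y : A y = b}
    (A is assumed to have full row rank)."""
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

# Illustrative problem: minimize ||x||^2 subject to x1 + x2 + x3 = 1.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

x = project_affine(np.array([3.0, -1.0, 2.0]), A, b)
for _ in range(200):                        # projected gradient descent
    x = project_affine(x - 0.1 * (2 * x), A, b)
# converges to the minimizer (1/3, 1/3, 1/3)
```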

2:15 pm – 3:00 pm
Speaker: Jiawang Nie (UC San Diego)

This talk presents a tight semidefinite programming relaxation for solving polynomial optimization. Optimality conditions in polynomial optimization are investigated. For general polynomials, we show that Lagrange multipliers can be expressed as polynomial functions in decision variables over the set of critical points. Based on these expressions, we construct tight semidefinite programming relaxations.

3:30 pm – 4:15 pm
Speaker: Jean-Bernard Lasserre (LAAS-CNRS)

Consider the compact sub-level set $K:=\{x: g(x)\leq1\}$ of a nonnegative homogeneous polynomial $g$. We show that its Lebesgue volume ${\rm vol}(K)$ can be approximated as closely as desired by solving a sequence of generalized eigenvalue problems with respect to a pair of Hankel matrices of increasing size, whose entries are obtained in closed form. The methodology also extends to compact sets of the form $\{x: a\leq g(x)\leq b\}$ for non-homogeneous polynomials with degree $d\ll n$. It reduces the volume computation in $\mathbb{R}^n$ to a "volume" computation in $\mathbb{R}^d$ (where $d={\rm deg}(g)$) for a certain pushforward measure. Another extension, to computing volumes of finite intersections of such sub-level sets, is also briefly described.
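One classical identity behind such volume computations, for $g$ nonnegative and homogeneous of degree $d$ on $\mathbb{R}^n$, is ${\rm vol}(K) = \Gamma(1+n/d)^{-1}\int_{\mathbb{R}^n} e^{-g(x)}\,dx$. A quick numerical sanity check for the illustrative choice $g(x,y)=x^2+y^2$, where $K$ is the unit disk:

```python
import math
import numpy as np

# For g nonnegative and homogeneous of degree d on R^n:
#   vol({x : g(x) <= 1}) = Gamma(1 + n/d)^{-1} * integral of exp(-g) over R^n.
# Sanity check with g(x, y) = x^2 + y^2, whose sub-level set is the unit disk.
n, d = 2, 2
grid = np.linspace(-6.0, 6.0, 1201)
h = grid[1] - grid[0]
X, Y = np.meshgrid(grid, grid)
integral = np.sum(np.exp(-(X**2 + Y**2))) * h * h  # Riemann sum; tails negligible
vol_estimate = integral / math.gamma(1 + n / d)    # should be close to pi
```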

### Wednesday, May 1st, 2019

9:30 am – 10:15 am
Speaker: Nima Anari (Stanford University)

In this talk I will showcase some applications of hyperbolic, and more generally log-concave, polynomials to discrete optimization problems. I will present a framework for approximately finding the maximum of a discrete function on the hypercube as long as this function is associated with coefficients of a log-concave polynomial. I will also discuss finding the maximum of a mixture of function values. Some specific problems solved by this framework include Nash welfare maximization, and determinant/volume maximization subject to a matroid constraint. Based on joint works with Shayan Oveis Gharan, Kuikui Liu, Amin Saberi, Mohit Singh, and Cynthia Vinzant.

10:30 am – 11:15 am
Speaker: Cynthia Vinzant (North Carolina State University)

Complete log-concavity is a generalization of hyperbolicity for multivariate polynomials that is closely related to notions of discrete convexity, including matroids, submodular functions, and generalized permutohedra. Many of the nice properties of hyperbolic polynomials extend to this more general class. While inspired by work of Adiprasito, Huh, and Katz on combinatorial Hodge theory, these polynomials can be defined and understood in elementary terms. I will introduce this class of polynomials, the underlying real algebraic geometry, and some important examples coming from combinatorics and convex optimization. This is based on joint work with Nima Anari, Kuikui Liu, and Shayan Oveis Gharan.

11:45 am – 12:30 pm
Speaker: Gabor Pataki (University of North Carolina at Chapel Hill)

Semidefinite programs (SDPs) are some of the most useful and versatile optimization problems to emerge in the last few decades. However, their duality theory is not as clear-cut as that of linear programming: they may not attain their optimal value, and their optimal value may differ from that of their dual. Such SDPs are often difficult, or even impossible, to solve.

I will show that many of these pathologies can be understood using a surprisingly simple tool: we can transform SDPs into normal forms that make the pathology trivial to recognize. The transformations mostly use elementary row operations coming from Gaussian elimination. The normal forms have computational uses; for example, in several cases they help us recognize infeasibility. The talk relies only on basic linear algebra and some elementary convex analysis. Some parts of this work are joint with Minghui Liu and some others with Yuzixuan Zhu and Quoc Tran-Dinh.

2:15 pm – 3:00 pm
Speaker: Mareike Dressler (UC San Diego)

Various key problems from theoretical computer science can be expressed as polynomial optimization problems over the boolean hypercube H. One particularly successful way to prove complexity bounds for these types of problems is based on sums of squares (SOS) as nonnegativity certificates. We initiate optimization over H via a recent, alternative certificate called sums of nonnegative circuit polynomials (SONC). We show that key results for SOS-based certificates remain valid: First, for polynomials that are nonnegative over the n-variate boolean hypercube H with constraints of degree at most d, there exists a SONC certificate of degree at most n+d. Second, if there exists a degree d SONC certificate for nonnegativity of a polynomial over H, then there also exists a short degree d SONC certificate that includes at most n^O(d) nonnegative circuit polynomials.
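A standard example of a nonnegative circuit polynomial (chosen for illustration, not taken from the talk) is the Motzkin polynomial: its nonnegativity follows from the AM-GM inequality applied to the three outer monomials $x^4y^2$, $x^2y^4$, and $1$, yet it is not a sum of squares. A quick numerical check:

```python
import numpy as np

def motzkin(x, y):
    """The Motzkin polynomial: nonnegative (by AM-GM on x^4*y^2, x^2*y^4, 1)
    but famously not a sum of squares."""
    return x**4 * y**2 + x**2 * y**4 + 1 - 3 * x**2 * y**2

g = np.linspace(-3.0, 3.0, 601)
X, Y = np.meshgrid(g, g)
min_value = float(np.min(motzkin(X, Y)))  # ~0, attained at |x| = |y| = 1
```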

3:30 pm – 4:15 pm
Speaker: Dávid Papp (North Carolina State University)

We present a highly general interior point method for non-symmetric conic optimization and its Matlab implementation. The algorithm can solve optimization problems over a cone as long as the user can supply the gradient and Hessian of a logarithmically homogeneous self-concordant barrier for either the cone or its dual. We outline three applications: hyperbolic programming, sum-of-squares optimization without semidefinite programming, and bounding polynomials using nonnegative circuit polynomials.

### Thursday, May 2nd, 2019

9:30 am – 10:15 am
Speaker: Katharina Victoria Jochemko (KTH Royal Institute of Technology)

The number of lattice points in integer dilates of a lattice polytope is given by a polynomial --- the Ehrhart polynomial of the polytope. Ehrhart polynomials encode fundamental properties such as the volume and the dimension of the polytope and arise in various guises in many areas, for example, commutative algebra, representation theory and optimization. A central question that is wide open is to characterize Ehrhart polynomials. An important tool is the h*-polynomial which encodes the Ehrhart polynomial in a particular basis. Beck and Stapledon conjectured that the h*-polynomial of any integer dilate of a lattice polytope is real-rooted whenever the dilation factor exceeds the dimension of the polytope. In this talk I will introduce the basic concepts of Ehrhart theory, highlight current research questions and show how interlacing polynomials can be used to prove the conjecture.
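As a small concrete instance of the opening statement (an illustrative example, not from the talk): the standard triangle conv{(0,0), (1,0), (0,1)} has Ehrhart polynomial $L(t) = (t+1)(t+2)/2$, which brute-force counting confirms:

```python
from itertools import product

def ehrhart_count(t):
    """Lattice points in the t-th dilate of conv{(0,0), (1,0), (0,1)}:
    integer points with x, y >= 0 and x + y <= t."""
    return sum(1 for x, y in product(range(t + 1), repeat=2) if x + y <= t)

counts = [ehrhart_count(t) for t in range(6)]
# agrees with the Ehrhart polynomial L(t) = (t + 1)(t + 2) / 2
```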

10:30 am – 11:15 am
Speaker: Tselil Schramm (Harvard & MIT)

Conventional wisdom asserts that linear programs (LPs) perform poorly for max-cut and other constraint satisfaction problems, and that semidefinite programming and spectral methods are needed to give nontrivial bounds. In this talk, I will describe a recent result that stands in contrast to this wisdom: we show that surprisingly small LPs can give nontrivial upper bounds for constraint satisfaction problems. Even more surprisingly, the quality of our bounds depends on the spectral radius of the adjacency matrix of the associated graph. For example, in a random $\Delta$-regular $n$-vertex graph, the $\exp(c \frac{\log n}{\log \Delta})$-round Sherali--Adams LP certifies that the max cut has value at most $50.1\%$. In random graphs with $n^{1.01}$ edges, $O(1)$ rounds suffice; in random graphs with $n \cdot \log(n)$ edges, $n^{O(1/\log \log n)} = n^{o(1)}$ rounds suffice.

11:45 am – 12:30 pm
Speaker: Josephine Yu (Georgia Institute of Technology)

Positively hyperbolic varieties are analogues, in higher codimension, of hypersurfaces defined by stable polynomials. They are hyperbolic with respect to the nonnegative Grassmannian, following the notion of hyperbolicity studied by Shamovich, Vinnikov, Kummer, and Vinzant. We show that their tropicalization and Chow polytopes have nice combinatorial structures generalizing the results of Choe, Oxley, Sokal, Wagner, and Brändén on Newton polytopes and tropicalizations of stable polynomials. This is based on joint work with Felipe Rincón and Cynthia Vinzant.

2:15 pm – 3:00 pm
Speaker: Khazhgali Kozhasov (Max Planck Institute for Mathematics in the Sciences)

An inverse power of a hyperbolic polynomial admits a representation as the Laplace transform of a distribution, called the Riesz kernel. In my talk I will present formulas for Riesz kernels for two classes of hyperbolic polynomials: products of linear forms and elementary symmetric polynomials. I will also discuss applications of our results to algebraic combinatorics and algebraic statistics.

This is a joint work with Mateusz Michalek and Bernd Sturmfels.

3:30 pm – 4:15 pm
Speaker: Caroline Uhler (Massachusetts Institute of Technology)

We study probability distributions that are multivariate totally positive of order two (MTP2). Such distributions appear in various applications from ferromagnetism to phylogenetics. We first describe some of the intriguing properties of such distributions with respect to conditional independence and graphical models. In the Gaussian setting, these translate into new statements about M-matrices that might be of independent interest to algebraists. We then consider the problem of non-parametric density estimation under MTP2, an infinite-dimensional optimization problem. Solving this problem requires us to develop new results in geometric combinatorics. In particular, we introduce bimonotone subdivisions of polytopes and show that the global optimum is a piecewise linear function that induces a bimonotone subdivision. This implies that the infinite-dimensional optimization problem can be reduced to a finite-dimensional convex optimization problem. In summary, MTP2 distributions not only have broad applications for data analysis, but also lead to interesting new problems in optimization, combinatorics, and geometry.

### Friday, May 3rd, 2019

9:00 am – 9:45 am
Speaker: Mohit Singh (Georgia Institute of Technology)

Selecting a diverse subset of items from a collection is a fundamental problem in various applications, including selecting representative documents from a corpus, selecting diverse geographical locations for sensor placement, and designing maximally informative experiments. A common model assigns vectors to the items; the diversity measure is then defined by spectral functions of natural matrices associated with these vectors. In this talk, I will outline approximation algorithms that aim to optimize these objective measures under combinatorial constraints on the set of items that can be selected. I will also outline the rich connections of these problems to many well-studied problems, including counting matchings in graphs and graph sparsification, as well as to the theory of stable polynomials and positive and negative dependence in probability theory.
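One such spectral diversity measure is the determinant of the Gram matrix of the selected vectors. A simple greedy heuristic for it (an illustrative sketch on toy data, not the algorithms of the talk):

```python
import numpy as np

def greedy_max_det(vectors, k):
    """Greedily select k vectors maximizing the determinant of their
    Gram matrix (a simple heuristic; the exact problem is NP-hard)."""
    chosen, remaining = [], list(range(len(vectors)))
    for _ in range(k):
        best, best_det = None, -1.0
        for i in remaining:
            V = np.array([vectors[j] for j in chosen + [i]])
            d = float(np.linalg.det(V @ V.T))
            if d > best_det:
                best, best_det = i, d
        chosen.append(best)
        remaining.remove(best)
    return chosen

vecs = [np.array([1.0, 0.0]),
        np.array([0.9, 0.0]),   # nearly redundant with the first vector
        np.array([0.0, 1.0])]
picked = greedy_max_det(vecs, 2)   # picks the two orthogonal vectors
```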

10:00 am – 10:45 am
Speaker: Nick Ryder (UC Berkeley)

The Generalized Lax Conjecture asks whether every hyperbolicity cone is a section of a semidefinite cone of sufficiently high dimension. We prove that the space of hyperbolicity cones of hyperbolic polynomials of degree d in n variables contains exponentially many pairwise distant cones in a certain metric, and therefore that any semidefinite representation of such cones must have dimension at least exponential in n/d (even if a small approximation is allowed). The proof contains several ingredients of independent interest, including the identification of a large subspace in which the elementary symmetric polynomials lie in the relative interior of the set of hyperbolic polynomials, and quantitative versions of several basic facts about real rooted polynomials.

11:15 am – 12:00 pm
Speaker: Daniel Plaumann (TU Dortmund University)

By the Toeplitz-Hausdorff theorem in convex analysis, the numerical range of a complex square matrix is a convex compact subset of the complex plane. Kippenhahn's theorem describes the numerical range as the convex hull of an algebraic curve that is dual to a hyperbolic curve. For the joint numerical range of several matrices, the direct analogue of Kippenhahn's theorem is known to fail. We discuss the geometry behind these results and prove a generalization of Kippenhahn's theorem that holds in any dimension.

Joint work with Rainer Sinn and Stephan Weis.
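The convexity statement above can be probed computationally (an illustrative calculation, not from the talk): the support function of the numerical range $W(A)$ in direction $e^{i\theta}$ equals the largest eigenvalue of the Hermitian part of $e^{-i\theta}A$, which recovers $W(A)$ as an intersection of half-planes:

```python
import numpy as np

def support(A, theta):
    """Support function of the numerical range W(A) in direction e^{i*theta}:
    the largest eigenvalue of the Hermitian part of e^{-i*theta} * A."""
    B = np.exp(-1j * theta) * A
    return float(np.linalg.eigvalsh((B + B.conj().T) / 2)[-1])

# For a normal matrix, W(A) is the convex hull of the eigenvalues;
# here W(A) is the real segment [0, 1].
A = np.diag([0.0, 1.0]).astype(complex)
s_right = support(A, 0.0)      # max Re z over W(A): 1
s_left = support(A, np.pi)     # max of -Re z over W(A): 0
```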

12:15 pm – 1:00 pm
Speaker: Petter Brändén (KTH Royal Institute of Technology)

A polynomial is stable if it is nonzero whenever all variables have positive imaginary parts. We describe how linear preservers of stability may be characterized by their symbols, and how this characterization carries over to the space of so-called Lorentzian polynomials. We also present a quantitative theorem that describes how the hyperbolicity cone is deformed by a linear preserver of stability.

This talk is based on joint work with Julius Borcea, June Huh and Adam W. Marcus.
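The stability definition can be checked directly on small examples (illustrative, not from the talk): $z_1 + z_2$ is stable, since its imaginary part is a sum of positives, while $z_1 z_2 + 1$ is not, since $z_1 = z_2 = i$ is a zero with both imaginary parts positive:

```python
import numpy as np

# z1 + z2 is stable: if Im z1 > 0 and Im z2 > 0, then Im(z1 + z2) > 0,
# so the sum cannot vanish.  z1*z2 + 1 is NOT stable.
rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 2)) + 1j * rng.uniform(0.01, 2.0, size=(200, 2))
min_abs_sum = float(np.min(np.abs(pts[:, 0] + pts[:, 1])))  # bounded away from 0
witness = 1j * 1j + 1   # z1 = z2 = i lies in the open upper half-planes
```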