This talk aims to give a gentle, high-level introduction to Invariant Theory by describing some of its main objects, problems and results. This study of what *does not* change in certain dynamic processes interacts with many mathematical fields, including group theory, commutative algebra and algebraic geometry, and is central to modern physics. I will stress some of the many facets which interact with computational complexity and optimization. In particular, we'll see how objects, problems and results familiar in computer science appear naturally in this field. No special background is assumed.

### Monday, December 3rd, 2018

This talk will give an overview of some recent developments and concrete open problems in the GCT program.

The permanent versus determinant conjecture is a major problem in complexity theory that is equivalent to the separation of the complexity classes VP and VNP. Mulmuley and Sohoni suggested studying a strengthened version of this conjecture over the complex numbers that amounts to separating the orbit closures of the determinant and padded permanent polynomials. In that paper it was also proposed to separate these orbit closures by exhibiting occurrence obstructions, which are irreducible representations of GL_{n^2}(C) that occur in the coordinate ring of one orbit closure but not in the other. We prove that this approach is impossible. However, this does not rule out the general approach to the permanent versus determinant problem via multiplicity obstructions. (Joint work with Ikenmeyer and Panova)

This talk will be based on a joint paper with Nicolas Ressayre (https://arxiv.org/pdf/1807.03663.pdf). The paper is devoted to the factorization of multivariate polynomials into products of linear forms, a problem which has applications to differential algebra, to the resolution of systems of polynomial equations and to Waring decomposition (i.e., decomposition in sums of d-th powers of linear forms; this problem is also known as symmetric tensor decomposition). We provide three black box algorithms for this problem. Our main contribution is an algorithm motivated by the application to Waring decomposition. This algorithm reduces the corresponding factorization problem to simultaneous matrix diagonalization, a standard task in linear algebra. The algorithm relies on ideas from invariant theory, and more specifically on Lie algebras. Our second algorithm reconstructs a factorization from several bivariate projections. Our third algorithm reconstructs it from the determination of the zero set of the input polynomial, which is a union of hyperplanes.
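The reduction target, simultaneous matrix diagonalization, is a standard linear-algebra task. A minimal numpy sketch (my own illustration, not the paper's algorithm): for a family of commuting diagonalizable matrices, eigendecompose a random linear combination, which generically has simple spectrum, and reuse its eigenvector basis for every matrix in the family.

```python
import numpy as np

def simultaneous_diagonalize(mats, seed=0):
    """Jointly diagonalize commuting diagonalizable matrices by
    eigendecomposing a random linear combination and reusing its
    eigenvector basis for all matrices in the family."""
    rng = np.random.default_rng(seed)
    combo = sum(rng.standard_normal() * m for m in mats)
    _, p = np.linalg.eig(combo)          # generic combination: simple spectrum
    p_inv = np.linalg.inv(p)
    return [p_inv @ m @ p for m in mats]

# Two commuting matrices built to share the eigenbasis given by p0.
p0 = np.array([[1.0, 1.0], [0.0, 1.0]])
a = p0 @ np.diag([2.0, 5.0]) @ np.linalg.inv(p0)
b = p0 @ np.diag([-1.0, 3.0]) @ np.linalg.inv(p0)
diags = simultaneous_diagonalize([a, b])
for d in diags:
    # each conjugated matrix is diagonal up to numerical error
    assert np.allclose(d, np.diag(np.diag(d)), atol=1e-8)
```

This only works when the random combination has a simple spectrum, which holds generically for such families; degenerate spectra would need a more careful joint decomposition.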

How hard is it to give short, efficiently verifiable certificates that formulas in conjunctive normal form (CNF) are unsatisfiable, i.e., that any truth value assignment to the variables must falsify at least one clause? This question is the focus of proof complexity, and a rich body of work has developed ingenious, highly intricate methods for proving strong lower bounds for a variety of different proof systems.

In this talk, we present a simple way to prove optimal, exponential lower bounds on proof size --- without even knowing any proof complexity! --- for two well-studied proof systems, namely resolution and polynomial calculus. Given a CNF formula, it is enough to construct a bipartite graph on the clauses and variables such that (a) this graph is an expander (i.e., is very well-connected) and (b) there is a winning strategy in a simple 2-player game played on any edge. This also gives a unified view of resolution and polynomial calculus lower bounds in that the only difference between the two proof systems is which player moves first in the game.

Using this method (which relies heavily on techniques developed in [Ben-Sasson and Wigderson '01] and [Alekhnovich and Razborov '03]), we prove that the functional pigeonhole principle (FPHP) formulas are hard for polynomial calculus, answering an open question in [Razborov '02]. Building on this, we construct k-colouring instances which require polynomial calculus proofs of linear degree, and hence exponential size. This implies a linear degree lower bound for Gröbner basis algorithms, as well as for the algorithm studied in a sequence of papers [De Loera et al. 08, '09, '11, '15] based on Hilbert's Nullstellensatz proofs, thus resolving an open problem mentioned, e.g., in [De Loera et al. '09] and [Li et al. '16].

Based on joint work with Massimo Lauria and Mladen Miksa.
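Condition (a) above can be checked by brute force on toy instances. The sketch below is my own illustration (not from the talk) and uses a simplified notion of expansion: every small set of clauses must touch strictly more distinct variables than a fixed multiple of its size.

```python
from itertools import combinations

def is_boundary_expander(clauses, max_set, expansion):
    """Brute-force check that every set of at most `max_set` clauses
    touches more than `expansion * |set|` distinct variables.
    A clause is a tuple of literals (nonzero ints, sign = polarity)."""
    for r in range(1, max_set + 1):
        for subset in combinations(clauses, r):
            touched = {abs(lit) for cl in subset for lit in cl}
            if len(touched) <= expansion * r:
                return False
    return True

# A small 3-CNF whose clause sets of size <= 2 each touch many variables.
cnf = [(1, 2, 3), (-2, 4, 5), (3, -5, 6), (-1, 4, -6)]
assert is_boundary_expander(cnf, max_set=2, expansion=2)
```

Real lower-bound proofs need expansion for clause sets of linear size, which this exhaustive check cannot reach; it is only meant to make the definition concrete.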

### Tuesday, December 4th, 2018

We study the known techniques for designing Matrix Multiplication algorithms. The two main approaches are the Laser method of Strassen, and the Group theoretic approach of Cohn and Umans. We define a generalization based on zeroing outs which subsumes these two approaches, which we call the Solar method, and an even more general method based on monomial degenerations, which we call the Galactic method.

We then design a suite of techniques for proving lower bounds on the value of ω, the exponent of matrix multiplication, which can be achieved by algorithms using many tensors T and the Galactic method. Some of our techniques exploit 'local' properties of T, like finding a sub-tensor of T which is so 'weak' that T itself couldn't be used to achieve a good bound on ω, while others exploit 'global' properties, like T being a monomial degeneration of the structural tensor of a group algebra. Our main result is that there is a universal constant ℓ>2 such that a large class of tensors generalizing the Coppersmith-Winograd tensor CW_q cannot be used within the Galactic method to show a bound on ω better than ℓ, for any q. We give evidence that previous lower-bounding techniques were not strong enough to show this. We also prove a number of complementary results along the way, including that for any group G, the structural tensor of C[G] can be used to recover the best bound on ω which the Coppersmith-Winograd approach gets using CW_{|G|−2}, as long as the asymptotic rank of the structural tensor is not too large.
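For background (not a result of this talk): the common starting point of these methods is Strassen's identity, which multiplies 2×2 matrices with 7 multiplications instead of 8 and gives ω ≤ log₂7 ≈ 2.81 by recursion. A sketch of one level of the recursion on scalar entries:

```python
def strassen_2x2(a, b):
    """Multiply 2x2 matrices with Strassen's 7 products instead of 8.
    a, b: ((a11, a12), (a21, a22)); entries from any commutative ring."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

assert strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))) == ((19, 22), (43, 50))
```

Applied recursively to block matrices, the 7-multiplication identity yields an O(n^{log₂7}) algorithm; the laser and group-theoretic methods extract better exponents from more intricate tensors.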

We study the problem of deterministic factorization of sparse polynomials. We show that if f \in F[x1,…,xn] is a polynomial with s monomials, with individual degrees of its variables bounded by d, then f can be deterministically factored in time s^{O(poly(d) log n)}.

Prior to our work, the only efficient factoring algorithms known for this class of polynomials were randomized, and other than for the cases d=1 and d=2, only exponential-time deterministic factoring algorithms were known. A crucial ingredient in our proof is a quasi-polynomial sparsity bound for factors of sparse polynomials of bounded individual degree. In particular, we show that if f is an s-sparse polynomial in n variables, with individual degrees of its variables bounded by d, then the sparsity of each factor of f is bounded by s^{O(d^2 log n)}. This is the first nontrivial bound on factor sparsity for d > 2. Our sparsity bound uses techniques from convex geometry, such as the theory of Newton polytopes and an approximate version of the classical Carathéodory Theorem.

Our work addresses and partially answers a question of von zur Gathen and Kaltofen (JCSS 1985) who asked whether a quasi-polynomial bound holds for the sparsity of factors of sparse polynomials. This is joint work with Shubhangi Saraf and Ilya Volkovich.
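The Newton-polytope ingredient rests on a classical fact that is easy to check experimentally: the support of a product of polynomials lies inside the Minkowski sum of the factors' supports (Newton polytopes add under multiplication). A small sketch, with polynomials represented as exponent-vector dictionaries (the representation is my choice, not the paper's):

```python
from collections import defaultdict

def multiply(f, g):
    """Multiply polynomials given as {exponent-tuple: coefficient}."""
    h = defaultdict(int)
    for ef, cf in f.items():
        for eg, cg in g.items():
            h[tuple(a + b for a, b in zip(ef, eg))] += cf * cg
    return {e: c for e, c in h.items() if c != 0}

def minkowski_sum(supp_f, supp_g):
    """Pairwise sums of exponent vectors from the two supports."""
    return {tuple(a + b for a, b in zip(ef, eg))
            for ef in supp_f for eg in supp_g}

# f = x*y + 1 and g = x + y, with exponent vectors over (x, y).
f = {(1, 1): 1, (0, 0): 1}
g = {(1, 0): 1, (0, 1): 1}
h = multiply(f, g)
# Newt(f*g) = Newt(f) + Newt(g): the product's support sits inside
# the Minkowski sum of the factors' supports.
assert set(h) <= minkowski_sum(set(f), set(g))
```

Cancellation can make the containment strict, which is exactly why bounding factor sparsity from the product is nontrivial.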

We give a deterministic polynomial-time algorithm for the following problem: given a tuple of square matrices A_1, …, A_m, decide whether there exist invertible matrices B and C, such that every BA_iC is (skew-)symmetric. This can be cast as an instance of symbolic determinant identity testing, so our algorithm derandomizes this problem when the underlying field is large enough. The algorithm is inspired by those for the module isomorphism testing problem (Brooksbank-Luks, Ivanyos-Karpinski-Saxena), and the analysis relies crucially on the structure of *-algebras, namely algebras with an anti-automorphism of order at most 2. We also explain why this problem is interesting in light of the recent progress on the non-commutative rank problem. Based on joint work with Gábor Ivanyos, arXiv:1708.03495.

The Geometric Complexity Theory program of Mulmuley and Sohoni aims to establish lower bounds via the separation of orbit closures of different polynomials: in the case of VP vs VNP this is equivalent to separating the orbit closure (under the general linear group action) of the padded permanent from that of a polynomially sized determinant. The VP vs VNP separation could also be achieved [Nisan] by replacing the determinant polynomial by a matrix power.

The separation of these orbit closures can be attempted at the level of their GL-representations, by finding occurrence or multiplicity obstructions -- specific irreducible representations appearing with distinct multiplicities in the coordinate rings of the two orbit closures under consideration. In [Burgisser-Ikenmeyer-Panova, FOCS'16, JAMS'18] we established that occurrence obstructions (i.e. zero multiplicity for one orbit closure, but positive for the other) do not exist for the permanent versus determinants of superpolynomial size, and hence VP cannot be separated from VNP this way.

In this talk we will discuss how such multiplicities can be studied via classical quantities from combinatorial representation theory (Kronecker and plethysm coefficients) and apply them to establish explicit lower bounds for the non-occurrence of obstructions in the orbits for the permanent versus determinant model ([Ikenmeyer-Panova, FOCS'16, Adv. Math'17]) and also for the orbits of permanent versus matrix power ([Gesmundo-Ikenmeyer-Panova, Diff. Geom. & Its Appl'17]). We will also show how complexity theory can lead to new representation theoretic results and how to approach some other lower bounds problems (like matrix multiplication) with these techniques.

We present several recent results in geometric complexity theory on the relationship between orbits, their closures, fundamental invariants, occurrence obstructions, and multiplicity obstructions. This talk covers results from work with Bürgisser, with Dörfler and Panova, and with Kandasamy.

Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual Symposium on Theory of Computing (STOC), pages 653--664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich's famous barrier result (J. Comput. Syst. Sci., 55(1): 24--35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich's barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible it is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank b. The minimum quantity b for which this is possible is called the border completion rank (in analogy with the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program.
Using our hardness result above, we can prove the following barrier: we construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP \subset \exists BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class BPP is known to be a subset of the better-known complexity class MA, so BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero: there are no algebraic natural proofs of polynomial size for this variety, unless P^#P \subseteq BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496--526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case.

### Wednesday, December 5th, 2018

The proper learning problem for a circuit class C is the following: given oracle access to a function f(x), find a minimal-sized circuit T in C computing f. This problem, also known as the reconstruction problem for C, is known to be hard even for very simple classes of circuits in both the Boolean and arithmetic settings. For arithmetic circuit classes C where we have some understanding in the form of lower bounds, can we exploit this understanding to design proper learning algorithms for C? We give such proper learning algorithms for some subclasses of arithmetic circuits. However, our algorithms are not worst-case: they succeed provided that the unknown circuit T satisfies certain non-degeneracy conditions, which hold, for example, if T is not very large and the field constants in T are chosen randomly. Based on joint work with Chandan Saha.

I plan to discuss recent algorithms that beat exhaustive search for solving systems of low-degree polynomial equations over a small finite field, as well as hardness results.

Based on joint work with Brynmor Chapman, Daniel Lokshtanov, Mohan Paturi, Suguru Tamaki, and Huacheng Yu.

We show that any n-variate polynomial computable by a syntactically multilinear circuit of size poly(n) can be computed by a syntactically multilinear depth-4 circuit of size at most exp(O(\sqrt{n log n})). For degree d = omega(n/log n), this improves upon the upper bound of exp(O(\sqrt{d} log n)) obtained by Tavenas for general (non-multilinear) circuits, and is known to be asymptotically optimal in the exponent when d < n^{\epsilon} for a small enough constant \epsilon. Our upper bound matches the lower bound of exp(Omega(\sqrt{n log n})) proved by Raz and Yehudayoff, and thus cannot be improved further in the exponent.

Based on joint work with Rafael Oliveira and Ramprasad Saptharishi.

A homogeneous depth three circuit C computes a polynomial f = T_1 + T_2 + ... + T_s, where each T_i is a product of d linear forms in n variables. Given black-box access to f, can we efficiently reconstruct (i.e., properly learn) a homogeneous depth three circuit computing f? Learning homogeneous depth three circuits efficiently is stated as an open problem in a work by Klivans and Shpilka (COLT 2003). This is a highly interesting and challenging problem, as poly(n,d,s)-time learning for homogeneous depth three circuits implies sub-exponential-time learning for general arithmetic circuits. We give a randomized poly(n,d,s)-time algorithm to reconstruct non-degenerate homogeneous depth three circuits for n > (3d)^2 and s < (n/3d)^{d/3}. The algorithm works over any field F, provided char(F) = 0 or greater than ds^2, and univariate polynomial factorization over F can be done efficiently. Loosely speaking, a circuit C is non-degenerate if the dimension of the partial derivative space of f equals the sum of the dimensions of the partial derivative spaces of the terms T_1,...,T_s. In this sense, the terms are "independent" of each other in a non-degenerate circuit. A random homogeneous depth three circuit (where the coefficients of the linear forms are chosen according to the uniform distribution or any other reasonable distribution) is almost surely non-degenerate. Previous learning algorithms for homogeneous depth three circuits are either improper (with an exponential dependence on d), or work only for constant s (with a doubly exponential dependence on s). Our algorithm hinges on a decomposition of the partial derivative space of f into irreducible invariant subspaces under the action of a shifted differential operator space. The decomposition then leads to the terms of C. To our knowledge, this is the first time the shifted partial derivative operator has been used to make progress on reconstruction algorithms.
The technique may also find applications in reconstruction of other circuit classes. Based on joint work with Neeraj Kayal.
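The partial-derivative-space quantity in the non-degeneracy condition can be made concrete on toy examples. A sketch of my own (first-order partials only; the actual algorithm works with shifted differential operators, which this does not implement), with polynomials as exponent-vector dictionaries and rank computed via numpy:

```python
import numpy as np

def partials(f, n):
    """First-order partial derivatives of f, where f is given as
    {exponent-tuple: coefficient} in n variables."""
    out = []
    for i in range(n):
        d = {}
        for exps, c in f.items():
            if exps[i] > 0:
                e = list(exps)
                e[i] -= 1
                d[tuple(e)] = d.get(tuple(e), 0) + c * exps[i]
        out.append(d)
    return out

def pd_space_dim(f, n):
    """Dimension of the span of the first-order partials of f."""
    derivs = partials(f, n)
    monos = sorted({e for d in derivs for e in d})
    if not monos:
        return 0
    mat = np.array([[d.get(e, 0) for e in monos] for d in derivs], float)
    return int(np.linalg.matrix_rank(mat))

# f = (x + y)^2: both partials are multiples of x + y, so the span is 1-dim.
f = {(2, 0): 1, (1, 1): 2, (0, 2): 1}
assert pd_space_dim(f, 2) == 1
```

For a sum of terms T_1 + ... + T_s, non-degeneracy asks that the analogous dimensions add up exactly, with no interaction between the terms' derivative spaces.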

In this paper we study the complexity of constructing a hitting set for the closure of VP, the class of polynomials that can be infinitesimally approximated by polynomials computed by polynomial-sized algebraic circuits, over the real or complex numbers. Specifically, we show that there is a PSPACE algorithm that, given n, s, r in unary, outputs a set of n-tuples over the rationals of size poly(n,s,r), with poly(n,s,r) bit complexity, that hits all n-variate polynomials of degree r that are the limit of size-s algebraic circuits. Previously it was known that a random set of this size is a hitting set, but a construction that is certified to work was only known in EXPSPACE (or EXPH assuming the generalized Riemann hypothesis). As a corollary we get that a host of other algebraic problems, such as the Noether Normalization Lemma, can also be solved in PSPACE deterministically, where earlier only randomized algorithms and EXPSPACE algorithms (or EXPH assuming the generalized Riemann hypothesis) were known. The proof relies on the new notion of a robust hitting set, which is a set of inputs such that any nonzero polynomial that can be computed by a polynomial-size algebraic circuit evaluates to a not-too-small value on at least one element of the set. Proving the existence of such a robust hitting set is the main technical difficulty in the proof. Our proof uses anti-concentration results for polynomials, basic tools from algebraic geometry, and the existential theory of the reals. Joint work with Amir Shpilka.

Testing whether a set F of polynomials has an algebraic dependence is a basic problem with several applications. The polynomials are given as algebraic circuits. The algebraic independence testing question is wide open over finite fields (Dvir, Gabizon, Wigderson, FOCS'07). In this work we put the problem in AM ∩ coAM. In particular, dependence testing is unlikely to be NP-hard. Our proof method is algebro-geometric: estimating the size of the image/preimage of the polynomial map F over the finite field. A gap in this size is utilized in the AM protocols. Next, we introduce a new problem called *approximate* polynomials satisfiability (APS). We show that APS is NP-hard and, using projective algebraic-geometry ideas, we put APS in PSPACE (the prior best was EXPSPACE, via Gröbner bases). This has many unexpected applications to approximative complexity theory. In particular, it solves an open problem posed in (Mulmuley, FOCS'12, J.AMS 2017), greatly mitigating the GCT Chasm (exponentially in terms of space complexity).

### Thursday, December 6th, 2018

Recently there has been a surge of activity on “scaling” algorithms. These include matrix scaling, operator scaling, and uniform and non-uniform tensor scaling. They have applications in a number of diverse areas such as statistics, numerical linear algebra, non-commutative algebra, derandomization, invariant theory, functional analysis and combinatorial geometry. These algorithms typically belong to the alternating minimization (AM) class, and their analysis uses tools from invariant theory and representation theory.

I plan to survey some of these scaling algorithms with a focus on the algebraic nature of their analysis.
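The simplest member of this family is classical matrix scaling, where alternating minimization becomes Sinkhorn's iteration: alternately normalize the rows and columns of a positive matrix, converging toward a doubly stochastic matrix. A minimal sketch:

```python
import numpy as np

def sinkhorn(a, iters=500):
    """Scale a positive matrix toward doubly stochastic by alternately
    normalizing rows and columns (alternating minimization)."""
    a = np.array(a, dtype=float)
    for _ in range(iters):
        a /= a.sum(axis=1, keepdims=True)   # make each row sum to 1
        a /= a.sum(axis=0, keepdims=True)   # make each column sum to 1
    return a

s = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(s.sum(axis=1), 1.0, atol=1e-6)
assert np.allclose(s.sum(axis=0), 1.0, atol=1e-6)
```

Each step multiplies by a diagonal matrix on one side; the operator and tensor scaling algorithms in the talk generalize exactly this alternating pattern to richer group actions.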

In this talk, we describe two important algorithmic questions in Invariant Theory - the null-cone problem and the orbit closure intersection problem - and show how they arise as special cases of the PIT problem in a very general setting. We then survey special cases of these problems for specific group actions (simultaneous conjugation, left-right action) and state what is known about them, as well as many questions which are still open.

In this paper we study the identity testing problem for arithmetic read-once formulas (ROF) and some related models. A read-once formula is a formula (a circuit whose underlying graph is a tree) in which the operations are {+, ×} and every input variable labels at most one leaf. We obtain the first polynomial-time deterministic identity testing algorithm that operates in the black-box setting for read-once formulas, as well as for some other related models. As an application, we obtain the first polynomial-time deterministic reconstruction algorithm for such formulas. Our results are obtained by improving and extending the analysis of the algorithm of Shpilka-Volkovich '09.

Joint work with Daniel Minahan.

This talk will be inspired by the recently noticed connection [0] between formal (tree) series and non-commutative computations, as a generalization of Nisan's characterization [1] and subsequent results. This provides a clearer understanding of non-commutative computations, possible research directions for general non-commutative circuits and even commutative computations. [0] Nathanaël Fijalkow, Guillaume Lagarde, Pierre Ohlmann: Tight Bounds using Hankel Matrix for Arithmetic Circuits with Unique Parse Trees. Electronic Colloquium on Computational Complexity (ECCC) 25: 38 (2018)

In this talk we discuss rank methods for arithmetic circuit lower bounds, which have long been recognized as encompassing and abstracting almost all known arithmetic lower bounds to date, including the most recent impressive successes. Rank methods are also in wide use in algebraic geometry for proving tensor rank and symmetric tensor rank lower bounds. We will show barriers to these methods. In particular: 1) Rank methods cannot prove better than \Omega_d(n^{\lfloor d/2 \rfloor}) lower bounds on the tensor rank of any d-dimensional tensor of side n. (In particular, they cannot prove super-linear, indeed even >6n, tensor rank lower bounds for *any* 3-dimensional tensor.) 2) Rank methods cannot prove better than \Omega_d(n^{\lfloor d/2 \rfloor}) lower bounds on the Waring rank of any n-variate polynomial of degree d. These limitations also suggest where to look for new rank methods that could be useful for other models.

Consider a matrix whose entries are polynomials in indeterminates x1,...,xm. On the one hand, we can substitute random values for the indeterminates x1,...,xm and compute the (generic) rank of the resulting matrix. On the other hand, we can compute its (symbolic) rank as a matrix with entries in the function field. Since rank is captured by vanishing of minors, a simple argument tells us that both computations yield the same answer. Further, elements in the function field can be expanded in terms of power series. The considerations above are crucial steps for the barriers for lower bounds (say for tensor rank) obtainable by reduction to matrix rank in the work of Efremenko, Garg, Oliveira and Wigderson.

In ongoing work with Garg, Oliveira and Wigderson, we are investigating (among other similar things) the possibility of obtaining lower bounds for higher-order tensors by reduction to lower-order tensors. In proving barriers for these, one needs analogous results for tensors whose entries are polynomials, as in the aforementioned case of matrices. The proof from the matrix case breaks down because tensor rank is not captured by the vanishing of polynomials. But the statement remains true, indeed in greater generality. However, its proof now seems to require concepts and results from algebraic geometry, which I will explain in the talk. We hope the general result will find even more applications in complexity theory.

No special background is assumed.
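The first observation above is easy to test numerically: for a matrix with polynomial entries, evaluation at a random point recovers the symbolic (function-field) rank with probability 1, while special points can only make the rank drop. A small sketch with hand-picked examples of my own:

```python
import numpy as np

rng = np.random.default_rng(1)

def m(x):
    # Symbolic rank 1: the second row is x times the first.
    return np.array([[1.0, x], [x, x * x]])

def m2(x):
    # Symbolic rank 2 over the function field, despite vanishing at x = 0.
    return np.array([[x, 0.0], [0.0, x]])

# A random evaluation recovers the symbolic rank...
x0 = rng.standard_normal()
assert np.linalg.matrix_rank(m(x0)) == 1
assert np.linalg.matrix_rank(m2(x0)) == 2
# ...while at special points the rank can only drop, never rise.
assert np.linalg.matrix_rank(m2(0.0)) == 0
```

This works for matrices because rank is cut out by the vanishing of minors; as the abstract notes, the analogous statement for tensor rank cannot be proved this way, since tensor rank is not captured by the vanishing of polynomials.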

The classical lemma of Ore-DeMillo-Lipton-Schwartz-Zippel states that any nonzero polynomial f(x1,...,xn) of degree at most s will evaluate to a nonzero value at some point on a grid S^n in F^n with |S| > s. Thus, there is an explicit hitting set of size (s+1)^n for all n-variate polynomials of degree s that are computable by algebraic circuits of size s. We prove the following results:
- Let eps > 0 be a constant. For a sufficiently large constant n and all s \geq n, if we have an explicit hitting set of size (s+1)^{n - eps} for the class of n-variate polynomials of degree s that are computable by algebraic circuits of size s, then for all s we have an explicit hitting set of size s^{exp(exp(O(log* s)))} for s-variate circuits of degree s and size s. That is, if we can obtain a barely non-trivial saving in the exponent over the trivial (s+1)^n sized hitting set, even for constant-variate circuits, then we can get an almost complete derandomization of PIT.
- The above result holds when "circuits" are replaced by "formulas" or "algebraic branching programs".
This extends a recent surprising result of Agrawal, Ghosh and Saxena (STOC 2018), who proved the same conclusion for the class of algebraic circuits if the hypothesis provided a hitting set of size at most s^{n^{0.5 - eps}} (where eps > 0 is any constant). Hence, our work significantly weakens the hypothesis of Agrawal, Ghosh and Saxena to only require a slightly non-trivial saving over the trivial hitting set, and also presents the first such result for algebraic branching programs and formulas. This is joint work with Mrinal Kumar and Anamay Tengse.
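The trivial grid hitting set from the Ore-DeMillo-Lipton-Schwartz-Zippel lemma can be exercised directly; a brute-force sketch over a toy polynomial of my own choosing:

```python
from itertools import product

def hits(poly, grid, n):
    """Return a point of grid^n where poly is nonzero, or None.
    poly: function of an n-tuple; grid: iterable of field elements."""
    for point in product(grid, repeat=n):
        if poly(point) != 0:
            return point
    return None

# f(x, y) = x*y - 2*y has degree 2, so the grid {0,1,2}^2 (|S| = 3 > 2)
# is guaranteed by the lemma to contain a point where f is nonzero.
f = lambda p: p[0] * p[1] - 2 * p[1]
assert hits(f, [0, 1, 2], 2) is not None
```

The point of the bootstrapping results above is that this (s+1)^n-size grid is exponentially wasteful, and that beating it even slightly would derandomize PIT almost completely.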

### Friday, December 7th, 2018

The complexity of Iterated Matrix Multiplication is a central theme in Computational Complexity theory, as the problem is closely related to the problem of separating various complexity classes within P. In this talk, we will discuss the algebraic formula complexity of multiplying d many 2×2 matrices and show that the well-known divide-and-conquer algorithm cannot be significantly improved at any depth, as long as the formulas are multilinear. We will also give two applications of this lower bound to multilinear formula complexity. This is joint work with Suryajith Chillara (IITB) and Srikanth Srinivasan (IITB).

Let P be a homogeneous polynomial of degree N in m variables, given as an exact evaluation oracle. Let S be a nontrivial subset of the variables and S' its complement. The variables are S-separated if P(x1,...,xm) = Q(xi, i \in S) R(xj, j \in S'); the monomials are S-split if deg(S) + deg(S') = N. In the homogeneous case, S-separation implies S-splitting. Our main technical result states that if P is strongly log-concave on an open set, then S-splitting implies S-separation. We show that deciding whether there exists a nontrivial S such that P is S-separable is in BPP, whereas S-splitting is NP-hard in general. Thus our result gives a poly-time randomized algorithm for S-splitting in the strongly log-concave case. In the case of H-stable polynomials we present a poly-time deterministic algorithm.
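Not the talk's algorithm, but a useful baseline fact for S-separation: when the polynomial is given explicitly, P factors as Q(x_S) * R(x_S') exactly when its coefficient matrix (rows indexed by S-monomials, columns by S'-monomials) has rank 1. A small sketch with a hypothetical example:

```python
import numpy as np

def is_separable(poly, s_monos, t_monos):
    """poly: {(s_monomial, t_monomial): coefficient}, where each monomial
    is split into its S-part and S'-part. P = Q(S) * R(S') exactly when
    the S-by-S' coefficient matrix has rank 1."""
    mat = np.array([[poly.get((a, b), 0.0) for b in t_monos]
                    for a in s_monos])
    return np.linalg.matrix_rank(mat) == 1

# P = (x + 2y)(u + 3v), with S = {x, y} and S' = {u, v}.
p = {('x', 'u'): 1.0, ('x', 'v'): 3.0, ('y', 'u'): 2.0, ('y', 'v'): 6.0}
assert is_separable(p, ['x', 'y'], ['u', 'v'])

# Perturbing a single coefficient destroys the rank-1 structure.
p['y', 'v'] = 5.0
assert not is_separable(p, ['x', 'y'], ['u', 'v'])
```

The talk's setting is harder: P is only available as an evaluation oracle, so the coefficient matrix cannot be written down, which is where log-concavity enters.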

The group of diagonal matrices acts on the Grassmannian of k-planes of an n-dimensional vector space over the complex numbers. The problem of understanding the GIT quotient of this action is related to the classical problem of understanding the GIT quotient of n points spanning projective k-space (under the action of the projective linear group). When k=2 this problem is well understood, but not much is known for other k. It is known that there is a minimal Schubert variety in the Grassmannian admitting semistable points (with respect to the Plücker embedding).

We undertake a study of the GIT quotients of interesting subvarieties in the Grassmannian which admit semistable points and prove some general theorems. In the particular case k=3 and n=7, we give a complete description of the GIT quotient and establish some of its interesting geometric properties.

Although there are no immediate complexity-theoretic connections in this work, an understanding of GIT quotients is nevertheless important from the perspective of GCT; indeed, understanding torus quotients of the Grassmannian is already a non-trivial problem. We will give a brief introduction to GIT quotients and discuss some recent work with Kannan and Bakshi on torus quotients of certain Schubert varieties in the Grassmannian. We will assume no familiarity with GIT, but we hope to motivate the general question, the vocabulary, and the rich combinatorics that comes into play, using torus quotients of the Grassmannian as our motivating example.

We present an efficient procedure for computing a (scaled) Hadamard product for commutative polynomials. This serves as a tool to obtain faster algorithms for several problems. Our main results are: 1) Given an arithmetic circuit C computing a polynomial f over the rationals or a finite field, and a parameter k, we give a deterministic algorithm with run time n^{k/2 + c log k}, for some constant c, to compute the sum of the coefficients of the multilinear monomials of degree k in f. 2) Given such a circuit and a parameter k, we give a randomized algorithm with run time O^*(4.32^k) to check whether f contains a multilinear monomial of degree k; this algorithm uses polynomial space. This is joint work with Abhranil Chatterjee, Rajit Datta, and Partha Mukhopadhyay.
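As a baseline for result (1): when the polynomial is available as an explicit list of monomials rather than a circuit (the whole point of the talk's algorithms is to avoid this potentially exponential expansion), the quantity is trivial to compute. A sketch of my own:

```python
def multilinear_coeff_sum(poly, k):
    """Sum of coefficients of multilinear degree-k monomials.
    poly: {exponent-tuple: coefficient}."""
    return sum(c for exps, c in poly.items()
               if all(e <= 1 for e in exps) and sum(exps) == k)

# f = 3*x1*x2 + 5*x2*x3 + 7*x1^2 + x1, over variables (x1, x2, x3).
f = {(1, 1, 0): 3, (0, 1, 1): 5, (2, 0, 0): 7, (1, 0, 0): 1}
assert multilinear_coeff_sum(f, 2) == 8   # 3 + 5; x1^2 is not multilinear
assert multilinear_coeff_sum(f, 1) == 1
```

A circuit of size poly(n) can encode exponentially many monomials, which is why the n^{k/2 + c log k} circuit-based algorithm is the interesting statement.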