Motivated by the many potential applications of low-rank multi-way tensor approximations, we set out to count the real-valued rank-one tensors that are critical points of the distance function to a general tensor v. As this count depends on v, we average over v drawn from a multivariate Gaussian distribution, and find formulas that relate this average to problems in random matrix theory. This count is a specific instance of (average) Euclidean distance degrees of algebraic varieties, and I will briefly discuss other instances inspired by applications. The talk is based on joint work with Emil Horobet (arXiv:1408.3507), with Horobet, Ottaviani, Sturmfels, and Thomas (arXiv:1309.0049), and with Jasmijn Baaijens (arXiv:1405.0422).
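As a concrete illustration of the matrix case only (not the talk's general tensor computation): for a generic n x n matrix, the critical rank-one points of the distance function are exactly the singular pairs, so the count is n. A minimal numpy sketch:

```python
import numpy as np

# Matrix case of the critical-point count (illustration only): for a
# generic n x n matrix M, the critical rank-one approximations are the
# singular pairs sigma_i * u_i v_i^T, so there are exactly n of them.
rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))
U, s, Vt = np.linalg.svd(M)

for i in range(n):
    u, sigma, v = U[:, i], s[i], Vt[i, :]
    # criticality conditions for the rank-one point sigma * u v^T
    assert np.allclose(M @ v, sigma * u)
    assert np.allclose(M.T @ u, sigma * v)
print(f"{n} critical rank-one points for a generic {n}x{n} matrix")
```

For higher-order tensors the count is no longer the dimension, which is exactly what the Euclidean distance degree measures.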

### Monday, November 10th, 2014

The identifiability problem for tensors consists in determining which tensors have a unique minimal decomposition as a sum of rank-one tensors (up to scaling and reordering). It turns out that the problem, both for general tensors and for symmetric tensors, can also be attacked with methods of projective geometry, related to the study of tangent spaces to secant varieties. We will briefly review the geometric background and outline the construction of recent algorithms to detect identifiability, together with a sketch of the achievements in the topic based on this geometric setting.

We will present elementary bounds on maximal rank in terms of the generic and minimal typical ranks. We will also present an elementary proof that every rank between the minimal typical rank and maximal typical rank is also typical. All of the results hold for X-rank with respect to a projective variety X.

Equivariant local systems on orbits of a group action give rise via the Riemann-Hilbert correspondence to equivariant D-modules, which are typically hard to describe. I will explain how to compute explicitly the characters of the GL-equivariant D-modules supported on the Veronese cones. In particular, I will appeal to recent results of de Cataldo, Migliorini and Mustaţă on the Decomposition Theorem for toric maps, and show how representation theoretic stabilization results are used in a crucial way in the calculation. As an application, I'll show that the local cohomology with support on a Veronese cone is a simple equivariant D-module.

What does Siegel's theorem on the finiteness of integer solutions have to do with complexity theory? In this talk we discuss a complexity dichotomy theorem for counting problems which are expressible as certain Sum-of-Product computations. The space of problems is parametrized by local constraint functions, which are expressible as tensors. A dichotomy is a classification of a class of problems into exactly two kinds: those that are polynomial-time computable, and those that are #P-hard, and thus intractable. An example problem in this dichotomy is counting the number of valid edge colorings of a graph. We will show that an effective statement of Siegel's theorem with respect to some specific polynomials, together with some Galois theory, are key ingredients in the proof of this dichotomy. Along the way we will also meet the Tutte polynomial, medial graphs, Eulerian orientations and Eulerian partitions, Puiseux series, and a certain lattice condition on the logarithms of the roots of polynomials with integer coefficients.

Joint work with Heng Guo and Tyson Williams.

### Tuesday, November 11th, 2014

The nuclear norm for tensors comes up naturally in several areas of mathematical applications of tensors, such as quantum information theory and the completion of missing entries of tensors. As expected, the computation of this norm is NP-hard. We give an outline of the proof of this statement and discuss some results related to the computation of the nuclear norm for tensors.

This is joint work with Lek-Heng Lim.

This talk will present a brief survey of an approach to designing matrix multiplication algorithms, initiated by Coppersmith and Winograd, which led to the current best upper bound on the matrix multiplication exponent.

Multi-particle entanglement is an essential resource for a variety of quantum information processing tasks. Yet, despite an enormous amount of literature dedicated to its study, our current understanding of it is still in its infancy. In this talk I will introduce a systematic classification of multiparticle entanglement in terms of equivalence classes of states under stochastic local operations and classical communication (SLOCC). I will show that such an SLOCC equivalence class of states is characterized by ratios of homogeneous polynomials that are invariant under local action of the special linear group. I will then introduce a complete construction of the set of all such SL-invariant polynomials (SLIPs). The construction is based on Schur-Weyl duality and applies to any number of qudits in all (finite) dimensions. In addition, I will introduce an elegant formula for the dimension of the homogeneous SLIPs space of a fixed degree as a function of the number of qudits. The expressions for the SLIPs involve in general many terms, but for the case of qubits they can be written in a much simpler form.

I will report on a recent project [Science 340, 6137] that combines methods from algebraic geometry with a problem that was motivated by theoretical physics, but is at its heart purely tensorial. Indeed, consider a tensor with n indices. There are n ways of associating a matrix with the tensor, by selecting one index and grouping the (n-1) remaining ones together. Each of these matrices comes with a list of singular values. The question now is: What sets of singular values can occur this way and how do they relate to more global properties of the tensor? The problem has applications in quantum mechanics, where every index is associated with a particle and the singular values measure the "degree of uncertainty" ascribed to it. We identify the compatible sets of singular values as a non-abelian moment polytope—an object that is studied in symplectic geometry, asymptotic representation theory, and algebraic geometry. These methods allow us to relate the spectra to the SL-orbit in which the tensor lies. For low-dimensional situations, we can compute the polytopes explicitly and apply them to physical problems.
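To make the setup above concrete, here is a small illustrative sketch (not the talk's methods) of the n matricizations of a 3-index tensor and their singular value lists:

```python
import numpy as np

# Illustrative sketch: a 3-index tensor has three flattenings, obtained by
# selecting one index and grouping the remaining two; each flattening has
# its own list of singular values.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3, 4))

spectra = []
for axis in range(T.ndim):
    M = np.moveaxis(T, axis, 0).reshape(T.shape[axis], -1)
    spectra.append(np.linalg.svd(M, compute_uv=False))

# One compatibility relation among the spectra is immediate: every
# flattening has the same Frobenius norm, namely that of T itself.
for s in spectra:
    assert np.isclose((s ** 2).sum(), (T ** 2).sum())
```

The full answer to which spectra are compatible is the moment polytope described in the abstract; the Frobenius-norm relation checked here is only the simplest constraint.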

I present a brief overview of how tensors are utilized in quantum information theory in order to characterize quantum correlations (or entanglement) of quantum many-body systems. Examples include hyperdeterminants and tensor network states. The complexity of such tensorial descriptions is naturally connected to an important question about the simulability of quantum computation and quantum simulation by classical methods.

### Wednesday, November 12th, 2014

The best lower bounds for the complexity (more precisely rank and border rank) of matrix multiplication come from geometry. I will describe work in progress for new ways to use geometry to derive explicit algorithms and to prove upper bounds for the complexity of matrix multiplication.
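As concrete background (a classical algorithm, not the work in progress described above), the starting point for all sub-cubic upper bounds is Strassen's scheme multiplying 2x2 blocks with 7 multiplications instead of 8:

```python
import numpy as np

# Strassen's classical 7-multiplication scheme for 2x2 matrices;
# applied recursively to blocks it gives complexity O(n^{log2 7}).
def strassen_2x2(A, B):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return np.array([[p5 + p4 - p2 + p6, p1 + p2],
                     [p3 + p4, p1 + p5 - p3 - p7]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(strassen_2x2(A, B), A @ B)
```

In the tensor language of the talks, this scheme exhibits the 2x2 matrix multiplication tensor as a sum of 7 rank-one tensors, i.e. an upper bound of 7 on its rank.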

We begin by describing the ideas behind the state-of-the-art bounds on omega, the exponent of matrix multiplication.

We then present the "group-theoretic" approach of Cohn and Umans as an alternative to these methods, and we generalize this approach from group algebras to general algebras. We identify adjacency algebras of coherent configurations as a promising family of algebras in the generalized framework. We prove a closure property involving symmetric powers of adjacency algebras, which enables us to prove nontrivial bounds on omega using commutative coherent configurations, and suggests that commutative coherent configurations may be sufficient to prove omega = 2.

Along the way, we introduce a relaxation of the notion of tensor rank, called s-rank, and show that upper bounds on the s-rank of the matrix multiplication tensor imply upper bounds on the ordinary rank. In particular, if the "s-rank exponent of matrix multiplication" equals 2, then the (ordinary) exponent of matrix multiplication, omega, equals 2.

Finally, we will mention connections between several conjectures implying omega=2, and variants of the classical sunflower conjecture of Erdos and Rado.

No special background is assumed.

Based on joint works with Noga Alon, Henry Cohn, Bobby Kleinberg, Amir Shpilka, and Balazs Szegedy.

This work presents a method to analyze the powers of a given trilinear form (or tensor) and obtain upper bounds on the asymptotic complexity of matrix multiplication. In contrast with existing approaches, this method is based on convex optimization, and thus has polynomial-time complexity. As an application, we use this method to study powers of the construction given by Coppersmith and Winograd [Journal of Symbolic Computation, 1990] and obtain the upper bound omega < 2.3728639 on the exponent of square matrix multiplication, which slightly improves the best known upper bound.

Strassen's conjecture states that computational complexity is additive when the variables are independent. For example, there is no more efficient way to multiply two pairs of matrices than to multiply each pair separately.

Strassen's conjecture is now stated in more general terms, using tensors and the notion of tensor rank, and it has remained open since the 1970s.

In this talk, we will discuss some recent progress on the study of Strassen's conjecture. In particular, we will focus on the case of symmetric tensors which correspond to homogeneous polynomials.

In 1973 Strassen conjectured that additivity holds for bilinear maps, namely that the computational complexity of simultaneously executing two bilinear maps is the sum of the complexities of the individual bilinear maps. In complexity theory, matrix multiplication is of particular interest.

I will rephrase Strassen's conjecture in terms of tensor rank and show that it holds for three-factor tensors, whenever the dimension of one of the involved vector spaces is at most two.

This is joint work in progress with J. Buczynski.

### Thursday, November 13th, 2014

We will compare four notions of rank for tensors, polynomials, and similar geometric structures. We will concentrate on the border rank and the smoothable rank, and show examples where they differ. The classification of tensors of border rank three will be presented, with an emphasis on the notion of an "exceptional limit", showing how it serves to provide examples of pathological behaviour.

The talk is based on joint works with Joseph Landsberg and with Weronika Buczynska.

The rank of a symmetric form is the length of its shortest decomposition as a sum of pure powers of linear forms, i.e. the shortest smooth apolar scheme.

The cactus rank of the form is the length of the shortest apolar scheme.

The a-th cactus variety of cubic forms C_{a,n} is the closure of the family of cubic forms of cactus rank a in the projective space of cubic forms in n+1 variables.

Together with Ranestad I shall report on recent work with Jelisiejew and Marques giving the dimension and a geometric characterization of the general member of C_{a,n}, when 1 ≤ a ≤ 2n+2.

A symmetric tensor is orthogonally decomposable (or odeco) if it can be written as a linear combination of symmetric powers of n orthonormal vectors in R^n. We study the properties of odeco tensors. We give a formula for all of the eigenvectors of an odeco tensor. We also give equations which we believe cut out the variety of odeco tensors.
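A minimal sketch of the odeco definition and its eigenvector property, using an assumed random orthonormal frame (illustration only, not the authors' equations):

```python
import numpy as np

# Build an odeco tensor T = sum_i lam_i * v_i (x) v_i (x) v_i with
# orthonormal v_i, and check that each v_i is an eigenvector in the
# tensor sense: T(v_i, v_i) = lam_i * v_i.
rng = np.random.default_rng(0)
n = 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal columns
lam = np.array([1.0, 2.0, 3.0])

T = sum(lam[i] * np.einsum('i,j,k->ijk', Q[:, i], Q[:, i], Q[:, i])
        for i in range(n))

for i in range(n):
    v = Q[:, i]
    Tv = np.einsum('ijk,j,k->i', T, v, v)   # contract two slots with v
    assert np.allclose(Tv, lam[i] * v)
print("all odeco eigenpairs verified")
```

Odeco tensors have further eigenvectors beyond the v_i (certain normalized combinations of them), which is part of what the abstract's formula describes.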

The tensor decomposition problem can be seen as a special instance of the problem of reconstructing a sum of exponential functions from truncated moment sequences. This sparse representation problem appears in many applications.

We recall an approach due to Prony for solving the problem in the univariate case, and describe an extension to the multivariate case. The method is illustrated on several examples coming from signal processing, magnetic resonance imaging, sparse interpolation, and polynomial optimization. When only a few moments are known, we would also like to be able to find a sparse decomposition.

This problem corresponds to a flat extension property, which reduces either to a rank completion problem on structured matrices or to solving commutation constraints.

We analyze some of its geometric properties, describe some techniques to solve it and present some applications.
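The univariate Prony step recalled above can be sketched on toy data (nodes and weights here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Toy instance: recover nodes z_j and weights c_j from the moments
# m_k = sum_j c_j z_j^k, with assumed z = [2, 5], c = [1, 3].
z_true, c_true = np.array([2.0, 5.0]), np.array([1.0, 3.0])
r = 2
m = np.array([c_true @ z_true**k for k in range(2 * r)])  # m_0 .. m_3

# Prony: the nodes are the roots of the monic degree-r polynomial
# whose coefficients solve a Hankel linear system.
H = np.array([[m[i + j] for j in range(r)] for i in range(r)])
a = np.linalg.solve(H, -m[r:2 * r])        # p(z) = z^2 + a[1] z + a[0]
nodes = np.sort(np.roots(np.concatenate(([1.0], a[::-1]))))

# Weights from the Vandermonde system V c = (m_0, ..., m_{r-1}).
V = np.vander(nodes, r, increasing=True).T
weights = np.linalg.solve(V, m[:r])
print(nodes, weights)   # recovers [2. 5.] and [1. 3.]
```

The multivariate extension and the few-moments regime replace this Hankel system with the flat extension and rank completion problems mentioned in the abstract.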

Symmetric tensors are multi-indexed arrays whose entries are invariant under permutations of their indices. Generating polynomials are linear relations encoding recursive patterns among tensor entries. A set of generating polynomials can be represented by a matrix, which is called a generating matrix. Generally, a symmetric tensor decomposition can be uniquely determined by a generating matrix. We characterize the sets of such generating matrices and investigate their properties (e.g., existence, dimensions, nondefectiveness). Using these properties, we propose computational methods for symmetric tensor decompositions. Extensive examples are shown to demonstrate their efficiency.

### Friday, November 14th, 2014

No abstract available.

Learning mixtures of Gaussians is a classical problem in machine learning and theoretical computer science. Many different approaches have been developed, including a line of work starting from [Dasgupta 99] that works for well-separated Gaussians, and an algorithm by Moitra and Valiant that does not depend on separation. However, the running time of the latter is exponential in the number of Gaussians. Recently, [Hsu Kakade 13] developed an algorithm based on the method of moments that can learn k spherical Gaussians in d dimensions when k <= d, without requiring any separation.

In this talk we show it is possible to learn k Gaussians with general covariance matrices, when the dimension d is at least k^2. This involves new observations on the structure of moment tensors for general Gaussians, and new ways of doing smoothed analysis for the condition number of structured matrices.

A common way to model data in learning applications is to view it as arising from a "mixture model" with a small number of parameters. Finding the parameters often leads to a better understanding of the data. I will describe a recent approach, which uses the data to compute a (typically low rank) tensor whose decomposition encodes the parameters that we wish to compute. Thus we can appeal to the uniqueness results for tensor decomposition to say that we can recover the parameters from the data. The question we care about is the number of samples we need to obtain a good approximation to the decomposition.

To this end, I will present a robust version of the celebrated Kruskal's theorem on the uniqueness of tensor decomposition: given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (but inverse polynomial) error.

This immediately implies the identifiability of parameters in many mixture models using a polynomial number of samples. Finally, I will briefly discuss other applications and the question of efficient decomposition algorithms.

(joint work with Moses Charikar and Aravindan Vijayaraghavan)

Low-rank tensor decompositions are a powerful tool that arises in machine learning, statistics and signal processing. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix algebra toolkit are unlikely to exist because of hardness results. For instance, efficient tensor decompositions in the overcomplete case (where the rank exceeds the dimension) are particularly challenging.

I will introduce a smoothed analysis model for studying tensor decompositions. Smoothed analysis gives an appealing way to analyze non-worst-case instances: the assumption here is that the model is not adversarially chosen, formalized by a perturbation of the model parameters. We give efficient algorithms for decomposing tensors, even in the highly overcomplete case (rank polynomial in the dimension), using smoothed analysis. This gives new polynomial-time algorithms for learning probabilistic models like mixtures of axis-aligned Gaussians and multi-view models, even in highly overcomplete settings.

Based on joint work with Aditya Bhaskara, Moses Charikar and Ankur Moitra.

I will present a generalization of singular values that can be applied to various situations, such as tensor decompositions, compressed sensing and signal denoising.