Abstract

A common way to model data in learning applications is to view it as arising from a "mixture model" with a small number of parameters. Finding the parameters often leads to a better understanding of the data. I will describe a recent approach, which uses the data to compute a (typically low-rank) tensor whose decomposition encodes the parameters we wish to compute. We can thus appeal to uniqueness results for tensor decomposition to conclude that the parameters are recoverable from the data. The question we care about is how many samples are needed to obtain a good approximation to the decomposition.
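For concreteness, here is a schematic of the form such a tensor takes (the exact construction depends on the model): one obtains an order-three tensor with a rank-$R$ decomposition

\[ T \;=\; \sum_{i=1}^{R} a_i \otimes b_i \otimes c_i, \]

where the component vectors $a_i, b_i, c_i$ encode the parameters we are after (for instance, the component means and mixing weights of a mixture).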

To this end, I will present a robust version of Kruskal's celebrated theorem on the uniqueness of tensor decomposition: given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition whenever the tensor is known up to a sufficiently small (but inverse-polynomial) error.
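For reference, Kruskal's classical condition (stated here for context; the robust condition in the talk is a quantitative analogue): writing $A$, $B$, $C$ for the matrices whose columns are the $a_i$, $b_i$, $c_i$ above, the decomposition is unique up to permutation and scaling whenever

\[ k_A + k_B + k_C \;\ge\; 2R + 2, \]

where $k_A$ denotes the Kruskal rank of $A$, i.e., the largest $k$ such that every set of $k$ columns of $A$ is linearly independent. The robust version replaces linear independence with a well-conditionedness requirement.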

This immediately implies that the parameters of many mixture models are identifiable from a polynomial number of samples. Finally, I will briefly discuss other applications and the question of efficient decomposition algorithms.

(joint work with Moses Charikar and Aravindan Vijayaraghavan)