Abstract

The analysis of tensor data, i.e., arrays with multiple dimensions, has been an active research topic in data science. Datasets in the form of tensors arise from a wide range of scientific applications. Tensor methods also provide unique perspectives on many high-dimensional problems in which the observations are not necessarily tensors. On the other hand, many tensor problems exhibit intrinsic computational barriers and statistical-computational gaps. In this talk, we discuss several recent advances on statistical-computational trade-offs in various tensor learning problems, such as tensor PCA, tensor clustering, and tensor regression. Our theory demonstrates a "blessing of statistical-computational gap" phenomenon: in these tensor problems, one usually requires conditions stronger than the statistical (or information-theoretic) limit to ensure that computationally feasible estimation is achievable. Such conditions "incidentally" enable low-rank tensor inference without debiasing and overparametrized estimation without the cost of extra samples.
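
For concreteness, a standard illustration of such a gap from the tensor PCA literature (included here for context, not as a statement of the talk's specific results): in the rank-one order-three spiked tensor model $Y = \lambda v^{\otimes 3} + Z$, where $v \in \mathbb{R}^p$ is a unit vector and $Z$ has i.i.d. standard Gaussian entries, recovering $v$ is information-theoretically possible once the signal strength satisfies $\lambda \gtrsim \sqrt{p}$, whereas all known polynomial-time algorithms require the stronger condition $\lambda \gtrsim p^{3/4}$ (up to logarithmic factors).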