Abstract

Abstract: We consider relative error low rank approximation of tensors with respect to the Frobenius norm: given an order-q tensor A, output a rank-k tensor B for which |A-B|_F <= (1+eps)OPT, where OPT = inf_{rank-k A'} |A-A'|_F. Despite much success in obtaining relative error low rank approximations for matrices, no such results were known for tensors. One structural issue is that there may be no rank-k tensor A' achieving the above infimum. Another, computational, issue is that an efficient relative error low rank approximation algorithm for tensors would allow one to compute the tensor rank, which is NP-hard. We bypass these issues via (1) bicriteria and (2) parameterized complexity solutions. As an example, we give an algorithm which outputs a rank-(k/eps)^{q-1} tensor B for which |A-B|_F <= (1+eps)OPT in nnz(A) + n*poly(k/eps) time in the real RAM model for an n x n x ... x n tensor, where nnz(A) is the number of non-zero entries of A. For outputting a rank-k tensor, or even a bicriteria solution of rank Ck for a certain constant C > 1, we show an exp(k^{1-o(1)}) time lower bound under the Exponential Time Hypothesis. Our results also give the first relative error low rank approximations for tensors under a large number of robust error measures for which nothing was known, as well as for column, row, and tube subset selection problems, generalizing and strengthening results for CUR decompositions of matrices. Based on joint work with Zhao Song (UT Austin) and Peilin Zhong (Columbia).
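To make the objective concrete, here is a minimal sketch (not the authors' algorithm) of the quantities in the problem statement for an order-3 tensor: a rank-k candidate B is a sum of k rank-1 terms built from assumed factor matrices U, V, W, and the quality of B is measured by the Frobenius error |A-B|_F.

```python
import numpy as np

def rank_k_tensor(U, V, W):
    """Assemble B = sum_i U[:, i] (x) V[:, i] (x) W[:, i] from factor matrices."""
    return np.einsum('ir,jr,kr->ijk', U, V, W)

def frobenius_error(A, B):
    """Frobenius-norm error |A - B|_F."""
    return np.linalg.norm(A - B)

# Illustrative example: a random n x n x n tensor A and a random rank-k candidate B.
n, k = 10, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n, n))
U, V, W = (rng.standard_normal((n, k)) for _ in range(3))
B = rank_k_tensor(U, V, W)
print(frobenius_error(A, B))  # the quantity the (1+eps)OPT guarantee bounds
```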

Video Recording