Ming Gu (UC Berkeley)
Low-rank matrix approximation has become a technique of central importance in large-scale data science. In this talk, we discuss a set of novel low-rank matrix approximation algorithms that are tailored to every level of accuracy requirement for maximum computational efficiency. These algorithms include spectrum-revealing matrix factorizations that are optimal up to dimension-dependent constants, and an efficient truncated SVD (singular value decomposition) that is accurate up to a given tolerance. We provide theoretical error bounds and numerical evidence demonstrating the superiority of our algorithms over existing ones, and show their usefulness in a number of data science applications.
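To make the notion of a tolerance-accurate truncated SVD concrete, here is a minimal NumPy sketch (not the speaker's algorithm, which is more efficient): it truncates at the smallest rank k for which the spectral-norm error, sigma_{k+1}, falls below a relative tolerance times sigma_1. The function name `truncated_svd` and the tolerance convention are illustrative assumptions.

```python
import numpy as np

def truncated_svd(A, tol):
    """Rank-k truncated SVD, where k is the smallest rank such that the
    spectral-norm error ||A - A_k||_2 = sigma_{k+1} is at most tol * sigma_1.
    (Illustrative dense implementation; efficient methods avoid the full SVD.)"""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # keep every singular value above the relative tolerance
    k = int(np.sum(s > tol * s[0]))
    return U[:, :k], s[:k], Vt[:k, :]

# Example: a matrix that is numerically rank 5
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
A += 1e-8 * rng.standard_normal(A.shape)   # small noise
U, s, Vt = truncated_svd(A, tol=1e-6)
err = np.linalg.norm(A - U @ (s[:, None] * Vt), 2)
```

By construction, the discarded singular values are all at most `tol * s[0]`, so `err <= tol * s[0]` is guaranteed; the factors `U, s, Vt` give a compressed representation of `A`.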