Abstract

In this talk I will discuss faster algorithms and improved sample complexities for the fundamental problem of estimating the top eigenvector of a matrix A^T A. In particular, I will show how the classic method of shift-and-invert preconditioning efficiently reduces the top eigenvector problem to solving a sequence of linear systems. Moreover, by leveraging recent sampling-based linear system solvers, I will show how this reduction yields improved algorithms for both the offline eigenvector estimation problem, where A is given explicitly, and the online eigenvector estimation problem, where we only receive independent samples from a distribution D whose covariance is A^T A.
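To give a rough feel for the reduction (a minimal sketch only, not the algorithm from the talk), the NumPy snippet below runs power iteration on (lam*I - A^T A)^{-1}, assuming the shift lam sits slightly above the top eigenvalue; the dense solve stands in for the fast sampling-based solvers mentioned above.

```python
import numpy as np

def shift_invert_power(A, lam, iters=20):
    """Approximate the top eigenvector of A^T A by power iteration on
    (lam*I - A^T A)^{-1}. When lam is only slightly larger than the top
    eigenvalue, this inverse has a large ratio between its top two
    eigenvalues, so few iterations suffice; each iteration costs exactly
    one linear system solve in the shifted matrix."""
    d = A.shape[1]
    M = lam * np.eye(d) - A.T @ A  # positive definite when lam > ||A||_2^2
    x = np.random.randn(d)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.linalg.solve(M, x)  # the linear system the reduction produces
        x /= np.linalg.norm(x)
    return x
```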

For the offline eigenvector problem, I will show how to compute an ϵ-approximate top eigenvector in time Õ([nnz(A) + d·sr(A)/gap^2]·log(1/ϵ)) and Õ([nnz(A)^{3/4} (d·sr(A))^{1/4} / √gap]·log(1/ϵ)). Here sr(A) = ||A||_F^2 / ||A||_2^2 is the stable rank, gap is the multiplicative eigengap between the top two eigenvalues, and Õ hides log factors in d and 1/gap. These results improve upon the running times of both classic approaches and recent work on subspace embeddings and stochastic optimization in various parameter regimes.
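For concreteness, here is the standard definition of stable rank used above computed directly; it is always at most the rank of A and can be far smaller when the spectrum decays.

```python
import numpy as np

def stable_rank(A):
    """Stable rank sr(A) = ||A||_F^2 / ||A||_2^2, i.e. the squared
    Frobenius norm over the squared spectral norm."""
    return np.linalg.norm(A, 'fro') ** 2 / np.linalg.norm(A, 2) ** 2
```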
 
For the online eigenvector problem, I will show how to refine a gap-approximate solution to an ϵ-approximate one using O(v·log log(1/ϵ)/gap^2 + v/(gap·ϵ)) samples, where v is a natural notion of the variance of the distribution D. Combined with previous work for computing gap-approximate solutions, this result improves upon prior sample complexity and runtime bounds under a variety of assumptions on D.
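For intuition about the online model (again a minimal sketch: this is the classical Oja update, not the refinement algorithm from the talk), one can estimate the top eigenvector from i.i.d. samples a ~ D with E[aa^T] = A^T A as below; the sample callback and the 1/t step size schedule are hypothetical placeholders.

```python
import numpy as np

def oja_estimate(sample, d, n_steps=10_000):
    """Classical Oja iteration: x <- normalize(x + eta_t * a * (a^T x))
    on i.i.d. samples a ~ D. The iterate drifts toward the top
    eigenvector of E[a a^T]; here eta_t decays like 1/t."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for t in range(1, n_steps + 1):
        a = sample()                    # one fresh draw a ~ D per step
        x = x + (1.0 / t) * a * (a @ x)
        x /= np.linalg.norm(x)
    return x
```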
 
This talk reflects joint work with Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli.