Abstract
In contrast to the dense case, when solving large-scale eigenvalue problems one seeks to compute only a small subset of the spectrum that is relevant to the application at hand (e.g., the eigenvalues largest or smallest in magnitude, or rightmost in the complex plane). Algorithms differ in how they steer the computation toward those desired eigenvalues. Our review will begin with the ubiquitous power method and its block generalization, subspace iteration. We will then move to the Lanczos and Arnoldi methods and their restarted variants. We will explain why eigenvalues of large symmetric matrices can usually be computed quite reliably, while the convergence theory for the nonsymmetric case remains stubbornly incomplete.
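To fix ideas, the power method mentioned above can be sketched in a few lines. This is a minimal illustration, not the review's implementation; the helper names (`matvec`, `power_method`) and the small test matrix are assumptions made for the example. The method repeatedly applies the matrix to a vector and renormalizes, so the iterate aligns with the eigenvector of the eigenvalue largest in magnitude.

```python
def matvec(A, v):
    """Multiply a dense matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_method(A, v0, iters=100):
    """Hypothetical sketch of the power method: returns an estimate
    of the dominant eigenvalue and a corresponding eigenvector."""
    v = v0
    for _ in range(iters):
        w = matvec(A, v)
        # Normalize (here in the max-norm) to prevent overflow/underflow.
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue estimate for the final iterate.
    Av = matvec(A, v)
    lam = sum(a * x for a, x in zip(Av, v)) / sum(x * x for x in v)
    return lam, v

# Symmetric 2x2 example with eigenvalues 3 and 1; the iteration
# converges to the dominant eigenvalue 3.
A = [[2.0, 1.0], [1.0, 2.0]]
lam, v = power_method(A, [1.0, 0.0])
```

Convergence is linear at a rate governed by the ratio of the two largest eigenvalue magnitudes, which is why the block and Krylov methods discussed later are preferred when that ratio is close to one.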