Random matrices have come to play a significant role in computational mathematics and statistics. Established methods from random matrix theory have led to striking advances in these areas, but ongoing research has generated difficult questions that cannot be addressed without new tools. The purpose of this tutorial is to introduce some recent techniques, collectively called matrix concentration inequalities, that can simplify the study of many types of random matrices. These results parallel classical tail bounds for scalar random variables, such as the Bernstein inequality, but they apply directly to matrices. In particular, matrix concentration inequalities can be used to control the spectral norm of a sum of independent random matrices by harnessing basic properties of the summands. Many variants and extensions are now available, and the outlines of a larger theory are starting to emerge. These new techniques have already led to advances in many areas, including partial covariance estimation, randomized schemes for low-rank matrix decomposition, relaxation and rounding methods for combinatorial optimization, construction of maps for dimensionality reduction, techniques for subsampling large matrices, and analysis of sparse approximation algorithms.
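The claim that these inequalities "control the spectral norm of a sum of independent random matrices by harnessing basic properties of the summands" can be illustrated numerically. The sketch below is not from the tutorial itself: it sets up a simple matrix Rademacher series S = Σ_k ε_k A_k (independent signs ε_k times fixed symmetric matrices A_k, so the summands are independent, centered, and uniformly bounded) and compares the observed spectral norm against the matrix Bernstein expectation bound E‖S‖ ≤ √(2 v log 2d) + (L/3) log 2d, where L bounds the summand norms and v = ‖Σ_k A_k²‖ is the variance parameter. The dimensions d, n, and the Gaussian choice of A_k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 20, 50, 200  # illustrative sizes, not from the tutorial

# Fixed symmetric "coefficient" matrices A_k.
A = []
for _ in range(n):
    M = rng.standard_normal((d, d))
    A.append((M + M.T) / 2)

# Basic properties of the summands that the bound harnesses:
L = max(np.linalg.norm(Ak, 2) for Ak in A)       # uniform bound on ||A_k||
v = np.linalg.norm(sum(Ak @ Ak for Ak in A), 2)  # variance parameter

# Matrix Bernstein expectation bound (symmetric case):
#   E ||S|| <= sqrt(2 v log(2d)) + (L/3) log(2d)
bound = np.sqrt(2 * v * np.log(2 * d)) + (L / 3) * np.log(2 * d)

# Empirical average of ||S|| over independent draws of the signs.
norms = []
for _ in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)
    S = sum(e * Ak for e, Ak in zip(eps, A))
    norms.append(np.linalg.norm(S, 2))
avg_norm = float(np.mean(norms))

print(f"empirical E||S|| ~ {avg_norm:.1f}, Bernstein bound {bound:.1f}")
```

Note that only two summary quantities of the summands, L and v, enter the bound; no detailed distributional information is needed, which is what makes these tools "user-friendly" in practice.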
J. A. Tropp, "User-Friendly Tools for Random Matrices: An Introduction," 2012. Available at users.cms.caltech.edu/~jtropp/notes/Tro12-User-Friendly-Tools-NIPS.pdf
J. A. Tropp, "User-Friendly Tail Bounds for Sums of Random Matrices," Foundations of Computational Mathematics, 2012. Available at users.cms.caltech.edu/~jtropp/papers/Tro12-User-Friendly-FOCM.pdf
L. Mackey et al., "Matrix Concentration Inequalities via the Method of Exchangeable Pairs," 2012. Available at users.cms.caltech.edu/~jtropp/papers/MJCFT12-Matrix-Concentration.pdf