Abstract

This lecture introduces *matrix multiplication* as a unifying problem in both arithmetic and communication complexity, highlighting why its study is central to the theory and practice of efficient linear algebra.

We begin with the classical cubic-time algorithm and use it to motivate the study of arithmetic complexity for bilinear problems. We then discuss Strassen’s fast algorithm, which reduces the multiplication of $2 \times 2$ block matrices from 8 recursive multiplications to 7. By unfolding the recursion, we obtain Strassen’s bound of $O(n^{\log_2 7}) \approx O(n^{2.81})$ and introduce the matrix multiplication exponent $\omega$. We discuss why determining $\omega$ is an open problem of central importance and a driving force for algorithmic innovation.
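
As a concrete illustration, here is a minimal sketch of Strassen’s recursion in Python (using NumPy), assuming square matrices whose dimension is a power of two; the `cutoff` parameter, at which the sketch falls back to classical multiplication, is an illustrative tuning knob rather than part of the algorithm itself:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices A and B (dimension a power of two)
    via Strassen's recursion."""
    n = A.shape[0]
    if n <= cutoff:              # fall back to classical multiplication on small blocks
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # Strassen's 7 recursive products instead of the naive 8
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four blocks of the product
    C = np.empty_like(A)
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C
```

Each level of recursion replaces one multiplication of size $n$ by 7 of size $n/2$ plus $O(n^2)$ additions, so the running time satisfies $T(n) = 7\,T(n/2) + O(n^2) = O(n^{\log_2 7})$.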

The significance of matrix multiplication extends beyond the operation itself: through classical complexity reductions, problems such as matrix inversion, determinant computation, rank determination, and the solution of linear systems can all be solved in essentially the same asymptotic time as matrix multiplication.
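
One way to see the reduction for inversion is the classical block (Schur-complement) identity below, stated here under the assumption that the leading block $A_{11}$ and the Schur complement $S$ are invertible; it computes an $n \times n$ inverse from two inversions of size $n/2$ plus a constant number of half-size multiplications:

$$
\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}^{-1}
=
\begin{pmatrix}
A_{11}^{-1} + A_{11}^{-1} A_{12}\, S^{-1} A_{21} A_{11}^{-1} & -A_{11}^{-1} A_{12}\, S^{-1} \\
-S^{-1} A_{21} A_{11}^{-1} & S^{-1}
\end{pmatrix},
\qquad
S = A_{22} - A_{21} A_{11}^{-1} A_{12}.
$$

Unfolding the resulting recursion $I(n) \le 2\,I(n/2) + O(M(n))$, where $M(n)$ is the cost of multiplication, shows that inversion costs $O(M(n))$ whenever $M(n)$ grows at least like $n^{2+\varepsilon}$ for some $\varepsilon > 0$.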

Arithmetic complexity, however, does not capture the full cost of practical algorithms. We therefore also consider communication complexity, both in the sequential model, where the bottleneck is data movement between fast and slow memory, and in the parallel model, where processors must exchange partial results. Known lower bounds show that matrix multiplication is communication-intensive, so any asymptotically optimal algorithm must optimize data movement, not just arithmetic operations.
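
To make the sequential model concrete: for classical $O(n^3)$ multiplication with a fast memory of size $M$, the Hong–Kung lower bound states that $\Omega(n^3/\sqrt{M})$ words must move between slow and fast memory. The tiled sketch below (tile size and loop structure are illustrative) attains this bound up to constant factors, since each tile update performs $O(b^3)$ arithmetic on $O(b^2)$ data:

```python
import numpy as np

def blocked_matmul(A, B, tile=128):
    """Classical multiplication, tiled so that three tile-size blocks fit in
    fast memory at once; each loaded block is reused for a full tile update."""
    n = A.shape[0]
    C = np.zeros((n, n), dtype=A.dtype)
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                # O(tile^3) arithmetic on O(tile^2) data per update
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C
```

Choosing the tile size $b$ so that three $b \times b$ tiles fit in fast memory, i.e. $b \approx \sqrt{M/3}$, yields $O(n^3/\sqrt{M})$ total data movement, matching the lower bound up to constants.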

Video Recording