Abstract

Matrix multiplication is an essential computational kernel in scientific computing, AI, and many other fields. Despite over five decades of research on sub-cubic time algorithms, most current math libraries and state-of-the-art hardware accelerators still use the classic cubic-time algorithm for matrix multiplication. This means that scientific libraries and industry standards rely on a solution that is suboptimal in both performance and power consumption. Why is that?

One reason for this failure is that many sub-cubic algorithms pay off only for matrices of enormous dimensions and carry large hidden constants in their arithmetic complexity, which makes them impractical. Other challenges include communication costs, numerical stability issues, and a poor fit between the algorithms and modern hardware.
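As an illustration of the trade-off, here is a minimal sketch (not from the talk) of Strassen's algorithm, the earliest sub-cubic method: it replaces the 8 recursive block products of the classic algorithm with 7, at the cost of many extra matrix additions. The power-of-two size restriction and the `leaf` cutoff, below which the classic product is cheaper, are exactly the kind of practical constants the paragraph above refers to.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Multiply square matrices whose side length is a power of two.

    Strassen's scheme uses 7 recursive block products instead of 8,
    giving O(n^2.807) arithmetic, but the extra additions (the hidden
    constant) mean it only pays off for large n; small blocks fall
    back to the classic cubic product.
    """
    n = A.shape[0]
    if n <= leaf:  # classic cubic-time product on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Strassen's 7 block products
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)

    # Reassemble the four quadrants of the result
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Even this simple version hints at the other obstacles mentioned above: the temporaries increase memory traffic, and the repeated additions and subtractions can degrade numerical accuracy relative to the classic algorithm.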

In this talk I will present a brief history of the ongoing race for matrix multiplication algorithms that are faster in practice.