This lecture introduces *matrix multiplication* as a unifying problem in both arithmetic and communication complexity, highlighting why its study is central to the theory and practice of efficient linear algebra.
We begin with the classical cubic-time algorithm and motivate the study of arithmetic complexity for bilinear problems. We then discuss Strassen's fast algorithm, which reduces the multiplication of $2 \times 2$ block matrices from 8 recursive block multiplications to 7. Unfolding the recursion yields Strassen's bound of $O(n^{\log_2 7}) \approx O(n^{2.81})$, and motivates the matrix multiplication exponent $\omega$. We discuss why determining $\omega$ is an open problem of central importance and a driving force for algorithmic innovation.
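Strassen's seven products and the recombination of the result blocks can be sketched in a few lines. The following is a minimal illustrative implementation (not part of the lecture materials), assuming square matrices whose dimension is a power of two and using NumPy for the block additions:

```python
import numpy as np

def strassen(A, B):
    """Multiply n x n matrices (n a power of 2) using Strassen's 7 products."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    # Split each matrix into four h x h blocks.
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive multiplications instead of eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Recombine into the blocks of C = A @ B using only additions.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Saving one multiplication per level of a recursion of depth $\log_2 n$ is exactly what turns the exponent from $3$ into $\log_2 7$.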
The significance of matrix multiplication extends beyond the operation itself: through classical complexity reductions, problems such as matrix inversion, determinant computation, rank determination, and solving linear systems can all be solved in essentially the same asymptotic time as matrix multiplication.
Arithmetic complexity, however, does not capture the full cost of practical algorithms. We therefore also consider communication complexity: in the sequential model, where the bottleneck is data movement between fast and slow memory, and in the parallel model, where processors must exchange partial results. Known lower bounds show that matrix multiplication is communication-intensive, and that any asymptotically optimal algorithm must also optimize data movement, not just arithmetic operations.
I will begin with a gentle introduction to tensors and how they arise in combinatorics, signal processing, geometry, and, most importantly, complexity theory. I will then focus on the aspects of the geometry of tensors most relevant to the study of the complexity of matrix multiplication.
What does it mean to *compute* something, and how should we measure the cost of doing so? This introductory lecture surveys foundational models of computation that are especially relevant when studying complexity questions in linear algebra. We begin with models of arithmetic: exact arithmetic over the reals or the integers, and floating-point arithmetic as it arises in practice. These provide a natural entry point to the analysis of numerical error, where we introduce forward and backward error bounds as two complementary ways of quantifying stability.
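The distinction between forward and backward error can be made concrete with a small numerical experiment (illustrative only; the matrix and tolerances below are arbitrary choices, not from the lecture). For solving $Ax = b$, the forward error measures how far the computed solution is from the true one, while the normwise backward error measures how small a perturbation of the data would make the computed solution exact:

```python
import numpy as np

# Illustrative setup: an ill-conditioned Vandermonde system with known solution.
n = 8
A = np.vander(np.linspace(1, 2, n))
x_true = np.ones(n)
b = A @ x_true

x_hat = np.linalg.solve(A, b)

# Forward error: distance of the computed solution from the true one.
forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
# Normwise backward error: relative size of the residual.
backward = np.linalg.norm(b - A @ x_hat) / (np.linalg.norm(A) * np.linalg.norm(x_hat))

print(forward, backward)
```

A backward-stable solver keeps the backward error near machine precision even when ill-conditioning inflates the forward error, which is why the two bounds are complementary rather than interchangeable.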
We then turn to models of *complexity*. Arithmetic complexity focuses on counting operations in an idealized exact-arithmetic world, while bit complexity refines this picture by accounting for the representation size of numbers. Beyond operations on numbers, communication itself is often the true bottleneck. We will discuss models of communication complexity: first, in the sequential setting, where data must be moved between fast and slow levels of memory, and then in the parallel setting, where multiple processors exchange information.
By contrasting these models, we will see how the cost of computation can depend as much on information movement as on arithmetic itself. This overview should prepare participants to engage with the deeper themes of the program, where complexity theory and linear algebra meet.
The boot camp is intended to acquaint program participants with the key themes of the program. It will consist of five days of tutorial presentations. Boot camp exercises: https://www.dropbox.com/scl/fi/jkk2xm18isp5dqc8u91dq/CLA_Bootcamp_Exerc…...