This short course offers a new perspective on randomized algorithms for matrix computations. It explores the distinct ways in which probability can be used to design algorithms for numerical linear algebra. Each design template is illustrated by its application to several computational problems. This treatment establishes conceptual foundations for randomized numerical linear algebra, and it forges links between algorithms that may initially seem unrelated. Lecture notes: arXiv 2402.17873, with Anastasia Kireeva (ETH).
In contrast to the case of dense matrices, when solving large-scale eigenvalue problems one seeks to compute only a small subset of the spectrum that is relevant to the application at hand (e.g., the eigenvalues largest or smallest in magnitude, or rightmost in the complex plane). Algorithms differ in the way they steer the computation toward those desired eigenvalues. Our review will begin with the ubiquitous power method and its block generalization, subspace iteration. We will then move to the Lanczos and Arnoldi methods and their restarted variants. We will explain why eigenvalues of large symmetric matrices can usually be computed quite reliably, while the convergence theory for the nonsymmetric case remains stubbornly incomplete.
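The power method mentioned above is the simplest illustration of steering a computation toward one desired eigenvalue: repeatedly applying A amplifies the component of the iterate along the dominant eigenvector. A minimal sketch (the function name and iteration count are illustrative):

```python
import numpy as np

def power_method(A, num_iters=500, seed=0):
    """Approximate the largest-magnitude eigenpair of A by
    repeated multiplication and normalization."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(num_iters):
        x = A @ x              # amplify the dominant eigendirection
        x /= np.linalg.norm(x)
    # Rayleigh quotient estimates the dominant eigenvalue.
    return x @ (A @ x), x

A = np.diag([3.0, 1.0, 0.5])
lam, v = power_method(A)  # lam is close to 3.0 here
```

Convergence is geometric with ratio |λ₂/λ₁|, which is why block generalizations (subspace iteration) and Krylov methods (Lanczos, Arnoldi) are preferred when eigenvalues cluster.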
I will define the problem of approximately diagonalizing a given dense matrix. I will explain two phenomena which impede the convergence of diagonalization algorithms and complicate their analysis: small eigenvalue gaps, and non-orthogonal eigenvectors (i.e., nonnormality). Finally, I will explain how random perturbations can be used to surmount these difficulties and survey what is known and remains to be discovered in this area.
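The random-perturbation idea can be illustrated in a few lines: adding a tiny Gaussian perturbation before diagonalizing tends to separate clustered eigenvalues and regularize nonnormality, at the cost of an O(γ) backward error. This is only a sketch of the general phenomenon, not the speaker's specific algorithm; the function name and perturbation scale are illustrative:

```python
import numpy as np

def perturbed_eig(A, gamma=1e-8, seed=0):
    """Diagonalize A + gamma * G for a random Gaussian G.
    The perturbed matrix has well-separated eigenvalues with high
    probability, while each eigenvalue moves by only about gamma."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    G = rng.standard_normal((n, n)) / np.sqrt(n)
    return np.linalg.eig(A + gamma * G)

# On a matrix that is already well conditioned for diagonalization,
# the perturbed eigenvalues agree with the true ones to about gamma.
A = np.diag([1.0, 2.0, 3.0])
w, V = perturbed_eig(A)
```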
We explain the different concepts and tools underlying the laser method, as invented by Strassen and Coppersmith-Winograd in 1987, for proving upper bounds on the exponent of matrix multiplication. Keywords are: tensor rank and restriction, border rank and degeneration, asymptotic sum inequality, tight sets. Details can be found in Chapter 15 of the book Algebraic Complexity Theory.
This talk will cover recent research on faster algorithms for solving linear systems. I will discuss how randomization can be used to accelerate linear system solving, both via preconditioning and via stochastic iterative methods like the randomized Kaczmarz method. We will also discuss recent progress on randomized solvers for ultra-sparse linear systems. Finally, I will briefly touch on the active and broad area of research on solvers for structured linear systems arising in the computational sciences and data sciences. No prior knowledge will be assumed beyond topics covered in Mark Embree's prior introductory lecture.
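The randomized Kaczmarz method named above is one of the simplest stochastic iterative solvers: each step projects the current iterate onto the hyperplane of a single randomly chosen equation, with rows sampled proportionally to their squared norms (the Strohmer–Vershynin scheme). A minimal sketch, assuming a consistent system:

```python
import numpy as np

def randomized_kaczmarz(A, b, num_iters=5000, seed=0):
    """Solve a consistent system Ax = b one row at a time.
    Row i is sampled with probability ||a_i||^2 / ||A||_F^2,
    then x is projected onto the hyperplane a_i^T x = b_i."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A * A, axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(num_iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = np.arange(5, dtype=float)
b = A @ x_true
x = randomized_kaczmarz(A, b)
```

Each iteration touches only one row of A, which is what makes this style of method attractive for very large or streaming systems; the expected error contracts at a rate governed by the scaled condition number of A.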
Linear systems of algebraic equations lead to the most fundamental problems in computational linear algebra. Such "Ax = b" systems arise throughout science, engineering, and data applications, and as building blocks in algorithms for more sophisticated problems. In this bootcamp talk, we will provide an overview of linear systems and their basic properties (sensitivity of x to changes in A and b), then describe classical methods for their solution: matrix-factorization algorithms based on Gaussian elimination, expedited variants that exploit sparsity or other structure, and iterative methods suitable for large-scale systems. The convergence of these iterative methods can depend in subtle ways on properties of the matrix A (eigenvalue distribution, nonnormality), which we will briefly describe.
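The dependence of iterative solvers on the matrix can be seen already in the classical Jacobi iteration, whose convergence is governed by the spectral radius of the iteration matrix. A minimal sketch (not one of the talk's specific examples), assuming A has nonzero diagonal:

```python
import numpy as np

def jacobi(A, b, num_iters=200):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k),
    where D is the diagonal of A. The iteration converges when the
    spectral radius of D^{-1}(A - D) is below 1, e.g. whenever A is
    strictly diagonally dominant."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b, dtype=float)
    for _ in range(num_iters):
        x = (b - R @ x) / d
    return x

# A strictly diagonally dominant 2x2 example.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

For a matrix that is not diagonally dominant, the same iteration can diverge, which is one concrete instance of the sensitivity to the eigenvalue distribution mentioned above.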