As part of the Algorithmic Foundations for Emerging Computing Technologies Boot Camp, David Patterson (UC Berkeley) reviews the drivers of computer architecture (Moore’s law, Dennard scaling, domain-specific architectures, the roofline performance model), upcoming critical challenges (deceleration of memory bandwidth and capacity, power, carbon footprint), and opportunities (chiplets, high-bandwidth memory, high-bandwidth flash).
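The roofline model mentioned above bounds a kernel's attainable performance by the lesser of peak compute and memory traffic. A minimal sketch (not from the talk; the machine numbers below are made up for illustration):

```python
def roofline(peak_flops, bandwidth_bytes_per_s, arithmetic_intensity):
    """Attainable performance (FLOP/s) for a kernel whose arithmetic
    intensity is measured in FLOPs per byte moved from memory."""
    return min(peak_flops, bandwidth_bytes_per_s * arithmetic_intensity)

# Hypothetical machine: 1 TFLOP/s peak compute, 100 GB/s memory bandwidth.
peak, bw = 1e12, 100e9
print(roofline(peak, bw, 0.5))   # low intensity: memory-bound at 5e10 FLOP/s
print(roofline(peak, bw, 50.0))  # high intensity: compute-bound at 1e12 FLOP/s
```

The "ridge point" where the two regimes meet (here, intensity = peak / bandwidth = 10 FLOPs/byte) separates memory-bound from compute-bound kernels.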
Recall November 6, 2024 — the day after the U.S. election. I was driving back to my home in Washington, DC, from Ohio with colleagues. I was heartbroken not because of the rebuke to my political party, but because of the accompanying rebuke to scientists and expertise in government. Just hours earlier, I had imagined a very different future. As an AI researcher and policymaker, I had dreamed about landing my AI policy priorities in legislation.
Warm greetings from Berkeley, where our Fall 2025 research programs on Complexity and Linear Algebra, and on Algorithmic Foundations for Emerging Computing Technologies, are in full swing. There is a seminar talk or reading group meeting pretty much every day, and the two programs are also discovering interesting synergies and exploring holding a joint seminar series.
In his presentation in the Complexity and Linear Algebra Boot Camp, Senior Scientist Nikhil Srivastava defines the problem of approximately diagonalizing a given dense matrix, and explains two phenomena that impede the convergence of diagonalization algorithms and complicate their analysis.
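As a point of reference for the problem Srivastava defines (and not his algorithm), approximate diagonalization asks for $V$ and diagonal $D$ with $A \approx V D V^{-1}$; a minimal numpy illustration of checking such a factorization:

```python
import numpy as np

# Illustrative only: a dense random matrix, diagonalized with numpy's
# standard eigendecomposition, and the relative residual of A ≈ V D V^{-1}.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))

eigvals, V = np.linalg.eig(A)  # eigenpairs may be complex for real A
D = np.diag(eigvals)
residual = np.linalg.norm(A - V @ D @ np.linalg.inv(V)) / np.linalg.norm(A)
print(residual)  # small when the eigenvector matrix V is well-conditioned
```

The residual degrades as eigenvalues cluster or as $V$ becomes ill-conditioned, which is exactly the regime where convergence analysis of diagonalization algorithms gets delicate.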
In this episode of Polylogues, Science Communicator in Residence Lakshmi Chandrasekaran sits down with two of the senior participants in our Summer 2025 Cryptography program, Yael Tauman Kalai (MIT) and Daniele Micciancio (UC San Diego).
This talk explores fast algorithms for entrywise approximation of matrix inverses and solutions to linear systems in diagonally dominant matrices—objects closely connected to random walk quantities such as hitting and escape probabilities in graphs. These quantities can vary exponentially with the matrix dimension; hence, under norm-based error guarantees, recovering small entries requires exponentially small error parameters, leading to prohibitively high running times. For example, a direct application of existing Laplacian solvers and fast matrix multiplication approaches requires $\Omega(mn^2)$ and $\Omega(n^{\omega+1})$ bit operations, respectively, where $m$ is the number of nonzero entries, $n$ is the matrix size, and $\omega$ is the matrix multiplication exponent. We study this problem in two distinct settings—when the input numbers are given in floating-point and fixed-point representations—and present algorithms with improved running times in both cases.
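The random walk quantities in the abstract reduce to linear systems in diagonally dominant matrices. A hedged toy example (not the talk's algorithm): escape probabilities on a path graph, i.e., gambler's ruin, computed by solving a small Laplacian-like system:

```python
import numpy as np

# Path graph on nodes 0..5 with absorbing endpoints 0 and 5.
# p[i] = P(walk from node i hits node 5 before node 0); interior nodes
# satisfy p[i] = (p[i-1] + p[i+1]) / 2, a diagonally dominant system.
L = np.zeros((4, 4))
b = np.zeros(4)
for k, i in enumerate(range(1, 5)):   # interior nodes 1..4
    L[k, k] = 2.0
    if i - 1 >= 1:
        L[k, k - 1] = -1.0
    if i + 1 <= 4:
        L[k, k + 1] = -1.0
    if i + 1 == 5:
        b[k] = 1.0                    # boundary condition p[5] = 1
p = np.linalg.solve(L, b)
print(p)  # gambler's ruin: [0.2, 0.4, 0.6, 0.8]
```

On larger or less symmetric graphs, these probabilities can be exponentially small in $n$, which is why the norm-wise guarantees of standard solvers do not recover them entrywise.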
We present an algorithm that solves linear systems in sparse matrices asymptotically faster than matrix multiplication for any value of the matrix multiplication exponent $\omega > 2$. This speedup holds for any input matrix $A$ with $o(n^{\omega-1}/\log(\kappa(A)))$ nonzeros, where $\kappa(A)$ is the condition number of $A$. For $\mathrm{poly}(n)$-conditioned matrices with $\tilde{O}(n)$ nonzeros, and the current value of $\omega$, the bit complexity of our algorithm to solve to within any $1/\mathrm{poly}(n)$ error is $O(n^{2.271})$. Our algorithm can be viewed as an efficient, randomized implementation of the block Krylov method via recursive low displacement rank factorizations. It is inspired by an earlier algorithm for inverting matrices over finite fields. In our analysis of numerical stability, we use matrix anti-concentration techniques to bound the smallest eigenvalue and the smallest gap in eigenvalues of semi-random matrices, and we incorporate improved matrix anti-concentration bounds by Nie.
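For intuition about the setting (sparse, $\mathrm{poly}(n)$-conditioned $A$, $1/\mathrm{poly}(n)$ error), here is a hedged sketch using an off-the-shelf Krylov solver; the paper's actual algorithm is a block Krylov method with recursive low-displacement-rank factorizations, not shown here:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Sparse, symmetric positive definite tridiagonal system with O(n) nonzeros
# and small condition number (eigenvalues of A lie in roughly [0.5, 4.5]).
n = 200
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)  # conjugate gradients: a (non-block) Krylov method
print(info)         # 0 indicates convergence
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Each CG iteration costs one sparse matvec, i.e., $O(\mathrm{nnz}(A))$ operations; the iteration count depends on $\kappa(A)$, which is why the condition number appears in the sparsity threshold above.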
We will discuss efficient algorithms for estimating the eigenvalue spectrum of a matrix in two popular query models: the matrix-vector product (matvec) query model and the entrywise query model. In the matvec model, we will discuss how worst-case bounds for 'moment-matching'-based Krylov methods like the kernel polynomial method can be significantly improved for matrices with decaying eigenvalue spectra using a deflation-based approach. We will then discuss how the bounds obtained by this approach are essentially matched by the popular stochastic Lanczos quadrature method (SLQ), helping to explain the strong performance of this method in practice compared to explicit moment-matching-based methods. Finally, we will briefly review eigenvalue approximation bounds in the entrywise query model, discussing how $\mathrm{poly}(1/\epsilon)$ entrywise samples from any matrix can be used to estimate all of its eigenvalues to small additive error. We will highlight several open questions in both query models.
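To make the matvec query model concrete, here is a hedged illustration (not the talk's deflation algorithm) of estimating the spectral moments $\mathrm{tr}(A^k)/n$ with Hutchinson-style stochastic probes, using only matrix-vector products:

```python
import numpy as np

rng = np.random.default_rng(1)
n, num_queries, max_moment = 300, 50, 4

# Symmetric test matrix with a semicircle-like spectrum.
Q = rng.standard_normal((n, n))
A = (Q + Q.T) / np.sqrt(n)

est = np.zeros(max_moment + 1)
for _ in range(num_queries):
    g = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
    v = g.copy()
    for k in range(1, max_moment + 1):
        v = A @ v                # one matvec per additional moment
        est[k] += g @ v / n      # E[g^T A^k g] = tr(A^k)
est[1:] /= num_queries
est[0] = 1.0

exact = [np.trace(np.linalg.matrix_power(A, k)) / n
         for k in range(max_moment + 1)]
print(est[:3], exact[:3])
```

Methods like the kernel polynomial method and SLQ refine this basic idea: matching moments (or quadrature rules) recovered from such probe vectors yields an approximation to the whole eigenvalue distribution.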
We consider matrices A(t) that depend, possibly nonlinearly, on a parameter t. We present a randomized algorithm for minimizing trace(A(t)) over all t with bounded two-norm, and determine the number of samples so that the excess risk is bounded with high probability. The derivation of the bound relies on concentration of measure, uniform convergence, and discretization by epsilon nets. This is joint work with Arvind Saibaba.
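A hedged toy version of this setup (not the authors' algorithm, and with a made-up quadratic family $A(t)$): draw random candidate parameters from the unit two-norm ball, evaluate $\mathrm{trace}(A(t))$ at each, and keep the best, which is the sampling step whose excess risk the bound controls:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, num_samples = 3, 10, 2000

# Hypothetical nonlinear family: A(t) = C + sum_i t_i^2 B_i B_i^T,
# so trace(A(t)) is minimized at t = 0 within the unit ball.
B = rng.standard_normal((d, n, n))
C = np.eye(n) * 5.0

def A_of_t(t):
    return C + sum(ti**2 * Bi @ Bi.T for ti, Bi in zip(t, B))

best_val, best_t = np.inf, None
for _ in range(num_samples):
    t = rng.standard_normal(d)
    t /= max(1.0, np.linalg.norm(t))  # keep candidates in the unit ball
    val = np.trace(A_of_t(t))
    if val < best_val:
        best_val, best_t = val, t
print(best_val, np.linalg.norm(best_t))
```

The theoretical question the abstract addresses is how many such samples suffice so that the best sampled value is within a prescribed excess risk of the true minimum, with high probability.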