A wealth of theoretical and numerical results has established the sketch-and-precondition paradigm as a powerful approach to solving large linear regression problems in standard computing environments. Perhaps surprisingly, much less work has been done on understanding how sketch-and-precondition performs on graphics processing unit (GPU) systems. We discuss our recent benchmarking of a sketch-and-precondition implementation based on sparse sign sketches on single- and multi-GPU systems. In doing so, we describe a novel, easily parallelized, rejection-sampling-based method for generating sparse sign sketches. Our approach, while particularly amenable to GPU parallelism, is easily adapted to a variety of computing environments. Taken as a whole, our numerical experiments indicate that sketch-and-precondition with sparse sign sketches is particularly well-suited for GPUs and may be suitable for use in black-box least-squares solvers.
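As background for the sketch-generation step, the rejection-sampling idea can be illustrated in a few lines of NumPy. This is a minimal serial sketch of the standard sparse sign construction (each column gets s nonzeros of magnitude 1/√s in distinct random rows, with duplicates redrawn), not the authors' GPU implementation; the function name and the parameter s are our own notation.

```python
import numpy as np

def sparse_sign_sketch(d, n, s, rng):
    """Return a d x n sparse sign sketch: each column has exactly s
    nonzero entries equal to +/- 1/sqrt(s), placed in distinct rows
    chosen uniformly at random.  Row indices are drawn by rejection
    sampling (redraw on collision); columns are independent, so the
    loop parallelizes trivially."""
    S = np.zeros((d, n))
    inv_sqrt_s = 1.0 / np.sqrt(s)
    for j in range(n):
        rows, seen = [], set()
        while len(rows) < s:            # rejection step: redraw duplicates
            r = int(rng.integers(d))
            if r not in seen:
                seen.add(r)
                rows.append(r)
        S[rows, j] = rng.choice([-inv_sqrt_s, inv_sqrt_s], size=s)
    return S
```

In a sketch-and-precondition least-squares solver, such an S is applied as S @ A to compress the rows of a tall matrix before a QR-based preconditioner is formed; because each column is generated independently, a GPU version can assign columns (or groups of nonzeros) to threads.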
Iterative methods for solving large systems of linear equations can exhibit extremely slow performance on poorly conditioned matrices. Fortunately, this phenomenon can be addressed for a broad class of problems where the spectrum exhibits low-rank structure, which arise for instance in kernel methods, second-order optimization, or as a result of the underlying properties of the data. A key paradigm which enables tackling low-rank structured problems is randomization, i.e., introducing stochasticity into the algorithmic procedure via techniques such as matrix sketching and sampling. In this talk, I will formalize the task of solving a linear system with low-rank structure, and discuss our recent work on designing and analyzing the complexity of randomized algorithms for preconditioning and solving such systems. In particular, I will show how these methods can outperform state-of-the-art deterministic solvers, such as those based on Krylov subspace methods, both in theory and in practice.
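To make the randomization idea concrete, the following is a minimal NumPy sketch of one well-known method in this family: a randomized Nyström preconditioner for regularized systems (A + μI)x = b with A symmetric positive semidefinite. The naming, the sketch size, and the generous stability shift are our own illustrative choices, not the speaker's algorithm or implementation.

```python
import numpy as np

def nystrom_approx(A, sketch_size, rng):
    """Randomized Nystrom approximation of a symmetric PSD matrix:
    A ~= U @ diag(lam) @ U.T with U orthonormal (n x sketch_size),
    built from a single product A @ Omega with a random test matrix."""
    n = A.shape[0]
    Omega, _ = np.linalg.qr(rng.standard_normal((n, sketch_size)))
    Y = A @ Omega
    nu = 1e-8 * np.linalg.norm(Y)        # generous stability shift for this demo
    Y = Y + nu * Omega
    C = np.linalg.cholesky(Omega.T @ Y)  # Omega^T Y = Omega^T A Omega + nu I
    B = np.linalg.solve(C, Y.T).T        # B = Y C^{-T}, so B B^T is the Nystrom approx
    U, sigma, _ = np.linalg.svd(B, full_matrices=False)
    lam = np.maximum(sigma**2 - nu, 0.0)
    return U, lam

def apply_precond_inv(U, lam, mu, v):
    """Apply P^{-1} for the Nystrom preconditioner of A + mu*I:
    P^{-1} v = (lam_min + mu) * U (diag(lam) + mu I)^{-1} U^T v + (v - U U^T v)."""
    Utv = U.T @ v
    return (lam[-1] + mu) * (U @ (Utv / (lam + mu))) + (v - U @ Utv)
```

When A has rapidly decaying (low-rank) spectrum, the preconditioned operator P⁻¹(A + μI) has its eigenvalues clustered near a single value, so conjugate gradient converges in a handful of iterations regardless of the original condition number.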
We survey contemporary open problems in quasiperiodic materials and their spectral properties. Beginning with foundational toy models—such as the almost Mathieu operator, perhaps the simplest quasiperiodic operator exhibiting non-trivial spectral behavior, and the so-called Scottish flag operator—we explore key conjectures and unresolved questions where even numerical evidence is still lacking. We then review broader classes of quasiperiodic operators, highlighting how their spectral and dynamical properties influence the numerical approximability of their spectra. Finally, we turn to moiré materials, where quasiperiodicity plays a central role, and outline natural spectral questions that emerge in this rapidly developing field.
We consider a randomized strong rank-revealing QR factorization and show that it effectively reveals the spectral properties of a matrix M. This factorization, already known in the literature and related to the sketch-and-precondition approach, can be used to address problems such as selecting a subset of the columns of M, computing its low-rank approximation, estimating its rank, or approximating its null space. Given a random sketching matrix S that satisfies the ε-subspace embedding property for range(M), we show that the strong rank-revealing QR factorization of B = SM allows us to select columns that capture the spectrum of M with strong rank-revealing guarantees, making it particularly effective for approximating its singular values. We further discuss an efficient approach for computing the strong rank-revealing QR factorization of B through an additional sketch that relies on a sparse subspace embedding. The resulting bounds exhibit a reduced dependence on the number of columns of B compared to those obtained from a direct strong rank-revealing QR factorization of B. Moreover, when the leverage scores are known, as for orthogonal matrices, or can be efficiently approximated, the bounds become entirely independent of the column dimension.
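The sketched column-selection pipeline can be illustrated with ordinary column-pivoted QR standing in for strong rank-revealing QR (strong RRQR additionally performs pairwise column swaps to obtain worst-case guarantees), and with a Gaussian sketch standing in for the sparse subspace embedding; all dimensions below are illustrative choices of ours, not the speakers'.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
m, n, d, k = 500, 60, 120, 10

# Test matrix M with geometrically decaying singular values.
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
sv = 2.0 ** -np.arange(n)
M = U0 @ np.diag(sv) @ V0.T

# Sketch the rows: a Gaussian matrix is a subspace embedding for range(M)
# once d is modestly larger than n (stand-in for a sparse embedding).
S = rng.standard_normal((d, m)) / np.sqrt(d)
B = S @ M

# Pivoted QR of the small d x n sketch selects columns of the full M.
_, _, piv = qr(B, pivoting=True)
cols = piv[:k]
selected_sv = np.linalg.svd(M[:, cols], compute_uv=False)
```

The point is that all pivoting decisions are made on the small matrix B, yet the selected columns of the original M retain spectral information, as the singular values of M[:, cols] attest.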
This talk addresses two topics: (a) matrix functions in the sense of Higham, and (b) nonlinear eigenvalue problems with eigenvector nonlinearities, such as those arising in electronic structure calculations. The focus is on a recent class of methods for solving the nonlinear eigenvalue problem by evaluating matrix functions of large, dense matrices, particularly in settings where we aim to compute a large invariant subspace. In these situations, the matrix-matrix product is the dominating computational component, and the evaluation should be done with as few matrix-matrix multiplications as possible. We present an approach for achieving this by characterizing the univariate semi-algebraic set of polynomials whose corresponding matrix polynomials can be evaluated with a fixed (low) number of matrix-matrix multiplications.
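As background for the multiplication-count question, the classical Paterson–Stockmeyer scheme evaluates a degree-d matrix polynomial with O(√d) matrix-matrix multiplications; the methods discussed in the talk aim below this baseline for restricted polynomial families. A minimal NumPy sketch of the classical scheme (our own naming, not the talk's method):

```python
import numpy as np

def paterson_stockmeyer(coeffs, A, s):
    """Evaluate p(A) = sum_k coeffs[k] * A^k by Paterson-Stockmeyer:
    precompute A^2, ..., A^s, split the coefficients into degree-s
    chunks, and combine the chunks by Horner's rule in the variable A^s.
    Choosing s ~ sqrt(deg) gives O(sqrt(deg)) matrix-matrix multiplies."""
    n = A.shape[0]
    pows = [np.eye(n), A]
    for _ in range(2, s + 1):
        pows.append(pows[-1] @ A)        # A^2 ... A^s
    As = pows[s]
    chunks = [coeffs[i:i + s] for i in range(0, len(coeffs), s)]
    # Horner over chunks: p(A) = sum_m chunk_m(A) * (A^s)^m
    P = sum(c * pows[j] for j, c in enumerate(chunks[-1]))
    for chunk in reversed(chunks[:-1]):
        P = P @ As + sum(c * pows[j] for j, c in enumerate(chunk))
    return P
```

Only the precomputation of powers and the Horner steps cost matrix-matrix products; forming each chunk is just scaling and adding stored powers.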
The first step when solving an infinite-dimensional eigenvalue problem is often to discretize it. In this talk, we will show that one must be extremely careful when discretizing linear and nonlinear eigenvalue problems to avoid introducing spurious eigenvalues and missing parts of the spectrum. We propose a contour-based solver for general holomorphic infinite-dimensional linear and nonlinear eigenvalue problems that avoids these discretization issues.
This is joint work with Andrew Horning and Matthew Colbrook.
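The finite-dimensional kernel of contour-based eigensolvers is the residue-calculus identity: the number of eigenvalues of A inside a contour Γ equals (1/2πi) ∮_Γ tr((zI − A)⁻¹) dz. A minimal NumPy illustration on a circle with trapezoid quadrature follows; it is only this scalar counting step, not the infinite-dimensional holomorphic solver proposed in the talk.

```python
import numpy as np

def count_eigs_in_circle(A, c, r, n_quad=64):
    """Count eigenvalues of A inside the circle |z - c| < r via the
    contour integral (1/2*pi*i) * contour_int tr((zI - A)^{-1}) dz,
    discretized with the trapezoid rule (spectrally accurate on the
    periodic circle as long as no eigenvalue sits on the contour)."""
    n = A.shape[0]
    theta = 2 * np.pi * np.arange(n_quad) / n_quad
    z = c + r * np.exp(1j * theta)
    dz = 1j * r * np.exp(1j * theta)      # dz/dtheta on the circle
    total = 0.0 + 0.0j
    for zk, dzk in zip(z, dz):
        R = np.linalg.inv(zk * np.eye(n) - A)   # resolvent at zk
        total += np.trace(R) * dzk
    return (total * (2 * np.pi / n_quad) / (2j * np.pi)).real
```

Practical contour methods (e.g. of FEAST or Beyn type) replace the trace by the action of the resolvent on a block of probe vectors, which also recovers eigenvectors; the counting integral above is the simplest instance of the idea.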