Multigrid methods play a key role in large-scale scientific simulation because they are among the fastest and most scalable approaches for solving systems of equations. They are widely used to solve the sparse linear systems that arise in these simulations, and they have been shown to scale efficiently on today's supercomputers. However, this success has been primarily for symmetric positive definite systems. Developing effective methods for indefinite systems such as the Helmholtz equation remains an open problem. Several issues must be addressed: the system has both positive and negative (or, more generally, complex) eigenvalues, requiring special treatment in the smoother; the near-kernel is oscillatory, breaking the standard geometric smoothness assumption; and the coarse-grid correction does not form a projection. In this talk, we present recent research aimed at addressing these issues. We also provide numerical results for both the Helmholtz equation and a shifted Laplacian problem with extremely large shifts.
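The smoother difficulty mentioned above can be seen already in a one-dimensional model problem. The sketch below (all sizes and the wavenumber are illustrative choices, not from the talk) builds A = L - k^2 I from the standard second-difference Laplacian, confirms that A has eigenvalues of both signs, and shows that the classical weighted-Jacobi smoother then has an error-propagation operator with spectral radius above 1, i.e. it amplifies the negative-eigenvalue modes instead of damping them.

```python
import numpy as np

# 1D Helmholtz model problem: A = L - k^2 I, with L the second-difference
# Laplacian on n interior points of the unit interval.
# (Illustrative sketch; n and k are made-up values.)
n, k = 100, 10.0
h = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
A = L - k**2 * np.eye(n)

eigs = np.linalg.eigvalsh(A)           # A is symmetric, so eigvalsh applies
n_neg = int(np.sum(eigs < 0))
print(f"{n_neg} negative and {n - n_neg} positive eigenvalues")

# Weighted Jacobi error-propagation operator: E = I - w * D^{-1} A.
# A negative eigenvalue lambda of A (with positive diagonal d) gives
# |1 - w*lambda/d| > 1, so the smoother diverges on that mode.
w = 2.0 / 3.0
d = np.diag(A)[0]                      # constant diagonal 2/h^2 - k^2 > 0 here
rho = np.max(np.abs(1.0 - w * eigs / d))
print(f"spectral radius of weighted-Jacobi error operator: {rho:.3f}")
```

For k = 0 the same code gives rho < 1, recovering the usual smoothing property; the indefinite shift is what breaks it.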
Suppose one has very good preconditioners for a class of n by n matrices A – the preconditioned matrix has condition number O(1), independent of n. Could an iterative method replace the standard Gaussian elimination or QR decomposition for solving Ax = b, delivering the same (or better) accuracy? If so, then one would have a practical method for solving a class of linear systems with the same number of operations that it takes to construct the preconditioner and apply the preconditioned matrix (and its transpose if it is nonsymmetric) to a fixed number of vectors, typically O(n^2) but possibly O(n).
This is the question that I will explore in this talk, considering stationary iterative methods (Jacobi, Gauss-Seidel, SOR, etc.), updated residual methods (steepest descent, conjugate gradients, etc.), and the preconditioned Lanczos algorithm, along with iterative refinement.
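The premise can be sketched numerically with one of the methods named above. Below, textbook preconditioned conjugate gradients is run on a contrived SPD system whose diagonal preconditioner yields an O(1) preconditioned condition number; the matrix, perturbation size, and tolerance are all hypothetical choices for illustration, not the talk's experiments. With kappa = O(1), the iteration count to a fixed tolerance is independent of n, so the total cost is a fixed number of matrix applies.

```python
import numpy as np

# Illustrative setup: SPD matrix = dominant diagonal + small symmetric
# perturbation, so M = diag(A) preconditions it to condition number O(1).
rng = np.random.default_rng(0)
n = 500
D = np.diag(np.linspace(1.0, 1000.0, n))
E = rng.standard_normal((n, n))
A = D + 0.001 * (E + E.T)
M_inv = 1.0 / np.diag(A)               # applying M^{-1} is elementwise here
b = rng.standard_normal(n)

def pcg(A, b, M_inv, tol=1e-12, maxit=100):
    """Textbook preconditioned conjugate gradient iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

x, iters = pcg(A, b, M_inv)
print(f"converged in {iters} iterations, "
      f"residual {np.linalg.norm(b - A @ x):.2e}")
```

Whether such an iteration can match the backward stability of Gaussian elimination or QR in finite precision is exactly the question the talk poses; the sketch only shows the operation-count side of the argument.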
Can one recover a structured matrix $A$ from only matrix-vector product queries $x \mapsto Ax$ and $y \mapsto A^\top y$? If so, how many are needed? We will discuss the matrix recovery problem for common matrix families, with an emphasis on hierarchical rank-structured matrices. This problem arises in the emerging field of operator learning, where one approximates the solution operator of a PDE from only input-output pairs of forcing terms and solutions. We will conclude with some open problems resulting from this connection between matrix recovery and operator learning.
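A toy instance of the recovery question above: a rank-r matrix can be recovered exactly from r products with $A$ and r products with $A^\top$, via randomized range finding. The sketch below is illustrative only (sizes and rank are made up), and the hierarchical rank-structured case emphasized in the talk is more subtle than this dense low-rank example.

```python
import numpy as np

# Hidden rank-r matrix, accessible only through x -> A x and y -> A^T y.
# (Hypothetical sizes; a dense stand-in for the structured case.)
rng = np.random.default_rng(1)
n, r = 60, 5
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

Omega = rng.standard_normal((n, r))
Y = A @ Omega                          # r queries to A
Q, _ = np.linalg.qr(Y)                 # orthonormal basis for range(A)
B = (A.T @ Q).T                        # r queries to A^T give B = Q^T A
A_rec = Q @ B                          # A = Q Q^T A, since range(A) = range(Q)

err = np.linalg.norm(A - A_rec) / np.linalg.norm(A)
print(f"relative recovery error: {err:.1e}")
```

Here 2r queries suffice with probability 1 for exact rank r; query counts for hierarchical and other structured families are among the open problems the talk discusses.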