The spectrum of a matrix compression Q*AQ can be much more sensitive to small perturbations than that of the original matrix A, posing a challenge to the analysis of Rayleigh-Ritz methods. This talk considers the pseudospectrum of Q*AQ for Haar-randomly sampled Q, i.e., the set of eigenvalues of all perturbations of Q*AQ of norm at most some tolerance. We describe some mild conditions under which its expected area is small.
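As a rough illustration of the objects in this abstract, the sketch below (not the speakers' method; all sizes, the tolerance eps, and the test matrix A are illustrative choices) samples a Haar-random orthonormal Q via the sign-corrected QR of a Gaussian matrix, forms the compression Q*AQ, and estimates the area of its eps-pseudospectrum on a grid using the standard criterion sigma_min(zI - Q*AQ) <= eps.

```python
# Illustrative sketch: Haar-random compression and a grid-based
# estimate of the area of its eps-pseudospectrum.  All parameters
# (n, k, eps, the grid window) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, k, eps = 60, 20, 1e-3

# Haar-random n x k orthonormal Q: QR of a Gaussian matrix, with the
# sign correction that makes the distribution exactly Haar.
G = rng.standard_normal((n, k))
Q, R = np.linalg.qr(G)
Q = Q * np.sign(np.diag(R))

A = rng.standard_normal((n, n))  # a generic nonnormal test matrix
B = Q.T @ A @ Q                  # the k x k compression Q*AQ

# A point z lies in the eps-pseudospectrum iff the smallest singular
# value of zI - B is at most eps; sum the areas of qualifying cells.
xs = np.linspace(-3.0, 3.0, 150)
ys = np.linspace(-3.0, 3.0, 150)
cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
area = 0.0
for x in xs:
    for y in ys:
        z = x + 1j * y
        smin = np.linalg.svd(z * np.eye(k) - B, compute_uv=False)[-1]
        if smin <= eps:
            area += cell
print(f"estimated area of the {eps:g}-pseudospectrum: {area:.4f}")
```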
Motivated by the resurgence of stochastic rounding, we consider stochastic nearness rounding of tall and thin real matrices. We provide theoretical and empirical evidence showing that, with high probability, the smallest singular value of a stochastically rounded matrix is bounded away from zero -- regardless of how close the original matrix was to being rank-deficient and even if it were rank-deficient. In other words, stochastic rounding implicitly regularizes tall-and-thin matrices so that the rounded version has full rank. We will briefly discuss the implications of such results for solving regression problems.
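For readers unfamiliar with stochastic rounding, here is a minimal sketch of the phenomenon described above (the grid spacing u and the matrix sizes are hypothetical choices, not from the talk): an exactly rank-deficient tall-and-thin matrix is stochastically rounded to a uniform grid, and its smallest singular value moves away from zero.

```python
# Illustrative sketch of implicit regularization by stochastic rounding.
import numpy as np

rng = np.random.default_rng(1)

def stochastic_round(X, u):
    """Round each entry of X to an adjacent multiple of u, rounding up
    with probability equal to the fractional part, so the rounding is
    unbiased: E[round(x)] = x."""
    scaled = X / u
    low = np.floor(scaled)
    frac = scaled - low
    up = rng.random(X.shape) < frac
    return (low + up) * u

m, n, u = 1000, 20, 2.0 ** -10   # tall and thin; u is the grid spacing

# Build an exactly rank-deficient matrix: its last column is the mean
# of the others, so sigma_min(X) is (numerically) zero.
X = rng.standard_normal((m, n))
X[:, -1] = X[:, :-1].mean(axis=1)

Xr = stochastic_round(X, u)
print("sigma_min before rounding:", np.linalg.svd(X, compute_uv=False)[-1])
print("sigma_min after  rounding:", np.linalg.svd(Xr, compute_uv=False)[-1])
```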
Sparse factorization algorithms, such as incomplete Cholesky factorization, are most commonly applied to sparse problems, with sparsity patterns derived from the sparsity graph of the original matrix. In this talk, I will present a probabilistic perspective for identifying sparse factorizations of dense positive-definite matrices. Cholesky factors encode conditional independence; thus, conditional independence among densely correlated Gaussian vectors translates directly to sparse Cholesky factorizations of dense covariance matrices. In certain spatial (statistical) problems, the screening effect provides a powerful heuristic for identifying conditional independence and thus for discovering fast algorithms, both asymptotically and in practice.
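A classical toy instance of the conditional-independence-to-sparsity link described above (my illustration, not the speaker's algorithm): a 1D Ornstein-Uhlenbeck kernel gives a densely correlated Gaussian vector that is Markov, so each point is conditionally independent of all non-neighbors given its neighbors. That conditional independence appears as an (almost exactly) bidiagonal Cholesky factor of the precision matrix, i.e., a sparse inverse-Cholesky factor of the dense covariance.

```python
# Illustrative sketch: dense OU covariance, sparse precision Cholesky.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
K = np.exp(-np.abs(x[:, None] - x[None, :]))   # dense covariance

P = np.linalg.inv(K)                           # precision matrix
L = np.linalg.cholesky(P)                      # lower-triangular factor

# Count the entries of L that are numerically nonzero; for a Markov
# ordering the factor is bidiagonal, i.e. about 2n nonzeros.
nnz = np.count_nonzero(np.abs(L) > 1e-8 * np.abs(L).max())
print(f"dense covariance entries: {n * n}")
print(f"numerically nonzero in L: {nnz}  (~2n for a bidiagonal factor)")
```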
Kyng's randomized approximate Cholesky (AC) factorization can be a better preconditioner than classical incomplete Cholesky (IC) factorizations for graph Laplacian matrices and related matrices. One possible advantage of AC is that it may be better, on average, at preserving row sums than IC: if A is the matrix, L is an approximate or incomplete Cholesky factor, and e is the vector of all ones, then L L' e is closer to A e for AC than for IC, for similar numbers of nonzero entries. This invites a comparison of AC with modified IC (MIC), which is designed to preserve row sums exactly. AC may still be better in this case, which would mean that preserving row sums is not the entire story. We will attempt to give a fuller explanation through numerical experiments with different modified and unmodified incomplete factorizations (threshold-based and level-based) and different matrix orderings. We will also propose some potential applications of AC in stochastic particle simulations that exploit the fact that its factorization is exact in expectation.
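The row-sum diagnostic in this abstract is easy to compute. The sketch below uses a hand-rolled zero-fill incomplete Cholesky, IC(0), as a stand-in (the AC and MIC factorizations from the talk are not reproduced here) and measures || L L' e - A e || for e the all-ones vector on an illustrative banded SPD test matrix.

```python
# Illustrative sketch of the row-sum diagnostic with an IC(0) stand-in.
import numpy as np

def ichol0(A):
    """Zero-fill incomplete Cholesky: L keeps the sparsity pattern of
    the lower triangle of A; all fill-in is simply discarded."""
    n = A.shape[0]
    L = np.zeros_like(A)
    pattern = A != 0
    for j in range(n):
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            if pattern[i, j]:
                L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# Banded, diagonally dominant SPD test matrix; the bands at offsets 1
# and 5 generate fill-in that IC(0) drops, so row sums are perturbed.
n = 100
A = (3.1 * np.eye(n)
     - np.eye(n, k=1) - np.eye(n, k=-1)
     - 0.5 * np.eye(n, k=5) - 0.5 * np.eye(n, k=-5))

e = np.ones(n)
L_ic = ichol0(A)
L_ex = np.linalg.cholesky(A)       # exact factor, for reference
print("IC(0) row-sum error:", np.linalg.norm(L_ic @ L_ic.T @ e - A @ e))
print("exact row-sum error:", np.linalg.norm(L_ex @ L_ex.T @ e - A @ e))
```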
Donya Saless is a PhD candidate in Computer Science at the Toyota Technological Institute at Chicago (TTIC). Her research explores well-defined aspects of safety and trustworthiness in machine learning, including robustness, reliability, and explainability...
Alireza is currently a PhD student at Rice University. His research interests lie in theoretical machine learning, statistical learning, and federated learning. His work explores the mathematical and algorithmic foundations that ensure reliable, efficient...