When, and at what cost, can we solve linear algebra problems, or even just evaluate multivariate polynomials, with high relative accuracy? Suppose we only know that we can perform the 4 basic operations (+, -, *, /) with high relative accuracy, i.e. rnd(a...
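A minimal sketch of the phenomenon this (truncated) abstract alludes to, assuming only that each basic operation carries a small relative error: the polynomial x^2 - y^2 loses relative accuracy when evaluated naively near x = y, but not in factored form.

```python
import numpy as np

# Classic illustration of the relative-accuracy theme: x**2 - y**2
# loses relative accuracy to cancellation when x ~ y, while the
# factored form (x - y)*(x + y) keeps it, because each individual
# +, -, * is performed with small relative error on its inputs.
x = np.float32(1.0 + 2.0**-12)
y = np.float32(1.0)

naive = x * x - y * y          # cancellation after rounding x*x
factored = (x - y) * (x + y)   # here the subtraction of exact inputs is safe

exact = (float(x) - float(y)) * (float(x) + float(y))  # reference in float64
print(f"naive    rel. error: {abs(naive - exact) / exact:.2e}")
print(f"factored rel. error: {abs(factored - exact) / exact:.2e}")
```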
The column subset selection problem (CSSP) is a central problem in low-rank approximation. I will describe how our particular motivation in reduced order models of Markov chains has led to new theory and algorithms for this long-standing...
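A hedged illustration of the problem setup only (not the talk's new algorithms): column-pivoted QR is a standard CSSP baseline that selects k columns greedily.

```python
import numpy as np
from scipy.linalg import qr

# Column subset selection via column-pivoted QR: pick k columns of A
# and measure how well their span approximates the whole matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40)) @ rng.standard_normal((40, 60))  # rank <= 40

k = 10
_, _, piv = qr(A, mode='economic', pivoting=True)
C = A[:, piv[:k]]                      # the selected column subset

# Quality: residual of projecting A onto the span of the chosen columns
Q, _ = np.linalg.qr(C)
residual = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(f"relative residual with {k} columns: {residual:.3f}")
```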
Rank-revealing matrix factorizations play a key role in applications ranging from structured low-rank approximation to computing localized basis functions in computational quantum chemistry. We will discuss some motivating applications to illustrate the...
We present an adaptive, partially matrix-free, hierarchical matrix construction framework using a broader class of Johnson–Lindenstrauss (JL) sketching operators. On the theoretical side, we extend the earlier concentration bounds to all JL sketching...
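A minimal sketch of the JL ingredient using a plain Gaussian sketching operator; the talk's broader operator class and the hierarchical matrix construction are not reproduced here.

```python
import numpy as np

# Johnson-Lindenstrauss sketching: S has i.i.d. N(0, 1/m) entries, so
# E[||S x||^2] = ||x||^2, and norms are preserved up to (1 +/- eps) with
# high probability once m is on the order of eps^-2 * log(n).
rng = np.random.default_rng(1)
n, d, m = 200, 5000, 400            # n points in R^d, sketched down to R^m
X = rng.standard_normal((d, n))

S = rng.standard_normal((m, d)) / np.sqrt(m)   # JL sketching operator
Y = S @ X

orig = np.linalg.norm(X, axis=0)
sketched = np.linalg.norm(Y, axis=0)
distortion = np.abs(sketched / orig - 1.0)
print(f"max norm distortion over {n} vectors: {distortion.max():.3f}")
```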
The restarted Arnoldi method iteratively applies polynomial filters to enhance the orientation of the starting vector toward the desired invariant subspace. At each step the roots of these filters are the Ritz values that least resemble the desired...
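A toy sketch of the explicit polynomial restarting described above, assuming a symmetric diagonal test matrix for simplicity: each restart applies a filter whose roots are the unwanted Ritz values.

```python
import numpy as np

def arnoldi(A, v, m):
    """m steps of Arnoldi; returns basis V (n x (m+1)) and Hessenberg H ((m+1) x m)."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Explicitly restarted Arnoldi for the k largest eigenvalues: the filter's
# roots are the unwanted Ritz values, so p(A) damps those directions in v.
rng = np.random.default_rng(2)
n, m, k = 300, 12, 3
A = np.diag(np.arange(1.0, n + 1))            # toy symmetric test matrix
v = rng.standard_normal(n)

for cycle in range(20):
    V, H = arnoldi(A, v, m)
    ritz = np.sort(np.linalg.eigvals(H[:m, :m]).real)
    unwanted = ritz[:-k]                       # Ritz values least like the target
    v = V[:, 0]
    for mu in unwanted:                        # apply the polynomial filter p(A)v
        v = A @ v - mu * v
    v /= np.linalg.norm(v)

# The leading Ritz values should approach the top of the spectrum (298, 299, 300)
print("Ritz values after restarts:", np.round(ritz[-k:], 4))
```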
The Multiple Relatively Robust Representations (MRRR or MR^3) algorithm is optimal for the symmetric tridiagonal eigenvalue problem -- that is, it can compute k eigenpairs (with numerically orthogonal eigenvectors) in only O(nk) operations. Accordingly, MR...
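MRRR is available in LAPACK as the ?stemr driver. A short SciPy illustration of computing only k eigenpairs of a symmetric tridiagonal matrix follows, assuming a SciPy/LAPACK build whose stemr driver accepts index selection.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Ask for only the k smallest eigenpairs of a symmetric tridiagonal
# matrix, matching the O(nk) cost claim for MRRR (LAPACK ?stemr).
n, k = 1000, 5
d = np.linspace(1.0, 2.0, n)        # diagonal entries
e = np.full(n - 1, 0.1)             # off-diagonal entries

w, V = eigh_tridiagonal(d, e, select='i', select_range=(0, k - 1),
                        lapack_driver='stemr')
print("smallest eigenvalues:", np.round(w, 6))
print("orthogonality check:", np.linalg.norm(V.T @ V - np.eye(k)))
```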
We continue the discussion of the complexity of matrix multiplication, focusing on the question of lower bounds. In view of the results of Strassen and Bini, in order to understand omega it is enough to understand the rank or the border rank of the matrix multiplication tensor. We will focus our attention on two techniques for border rank lower bounds (and thus also ordinary rank lower bounds) which have been successful when applied to the matrix multiplication tensor: Koszul flattenings and border apolarity. The method of Koszul flattenings associates to a tensor of interest a matrix and relates the rank of that matrix to the rank of the tensor. Border apolarity identifies auxiliary data that must exist whenever a border rank decomposition exists, and then refutes the existence of this auxiliary data to conclude that no such decomposition exists.
When discussing these methods, the natural symmetry of the problem plays an essential role. For instance, both rank and border rank are invariant under changes of bases in the three tensor factors, so it is not surprising that both techniques are, in their own senses, invariant under this symmetry group. Border apolarity, however, goes further: it is only practically applicable thanks to the relatively large symmetry group of the matrix multiplication tensor itself, which allows one to normalize the auxiliary data to be ruled out.
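The simplest instance of the flattening idea can be computed directly; the sketch below uses the ordinary flattening rather than the Koszul one, so the bound it yields is weaker than those discussed in the talk.

```python
import numpy as np

# Ordinary flattening: view a tensor T in A (x) B (x) C as a linear map
# A* -> B (x) C; then rank(T) >= border_rank(T) >= matrix rank of this map.
n = 2
T = np.zeros((n * n, n * n, n * n))  # the matrix multiplication tensor M_n
for i in range(n):
    for j in range(n):
        for k in range(n):
            # entry (i,j) of X and (j,k) of Y contribute to entry (i,k) of XY
            T[i * n + j, j * n + k, i * n + k] = 1.0

flattening = T.reshape(n * n, n * n * n * n)   # the map A* -> B (x) C as a matrix
print("flattening rank:", np.linalg.matrix_rank(flattening))  # 4 = n^2
# So border_rank(M_2) >= 4. Koszul flattenings strengthen this by tensoring
# with exterior powers (Landsberg-Ottaviani obtain 2n^2 - n, i.e. 6 here),
# and the true border rank of M_2 is 7.
```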
In this talk, we will study the general setting of non-commutative optimization, seeing how matrix and operator scaling arise as special cases of the general theory. We will introduce the general problems of interest, survey what is known about them, mention some applications and connections to other areas, and pose some open questions.
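As a concrete anchor for the commutative special case mentioned above, here is Sinkhorn's classical alternating algorithm for matrix scaling; a minimal sketch, not the general non-commutative framework of the talk.

```python
import numpy as np

# Sinkhorn's alternating algorithm for matrix scaling: find positive
# diagonal matrices D1, D2 so that D1 @ A @ D2 is (approximately)
# doubly stochastic, by alternately fixing the row and column sums.
def sinkhorn(A, iters=500):
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(iters):
        r = 1.0 / (A @ c)          # rescale rows to sum to 1
        c = 1.0 / (A.T @ r)        # rescale columns to sum to 1
    return np.diag(r) @ A @ np.diag(c)

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, size=(5, 5))   # strictly positive => scalable
S = sinkhorn(A)
print("row sums:   ", np.round(S.sum(axis=1), 6))
print("column sums:", np.round(S.sum(axis=0), 6))
```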