Non-commutative optimization is the study of optimization problems arising from the action of a group on a vector space. Such problems turn out to have applications in several areas, including computer science, quantum information, machine learning, and mathematics. In this first talk, we will give a gentle introduction to non-commutative optimization via two very well-studied problems: matrix scaling and operator scaling.
Tensor networks are objects that give a rigorous pictorial perspective that has been exploited in a surprising variety of areas including condensed matter physics, quantum computation, knot theory, and subfactor theory. At their core, tensor networks...
Algorithms have two costs: arithmetic and communication, i.e., moving data between levels of a memory hierarchy or between processors over a network.
Communication costs (measured in time or energy per operation) greatly exceed arithmetic costs, so our goal is to design algorithms that minimize communication.
We survey some known algorithms that communicate asymptotically less than their classical counterparts, for a variety of linear algebra and machine learning problems, often attaining lower bounds. We also discuss recent work on automating the design and implementation of these algorithms, and open problems.
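To illustrate the idea of trading communication for data reuse discussed above, here is a minimal sketch of a blocked (tiled) matrix multiplication. The block size `b` is a hypothetical tuning parameter: choosing the tiles small enough to fit in fast memory lets each loaded block be reused `b` times, reducing slow-memory traffic relative to the naive triple loop.

```python
import numpy as np

def blocked_matmul(A, B, b):
    """Multiply n-by-n matrices A and B in b-by-b tiles.
    Each tile of A, B, and C stays resident while it is reused,
    which is the basic mechanism behind communication-avoiding
    dense linear algebra. Assumes b divides n for simplicity."""
    n = A.shape[0]
    C = np.zeros((n, n), dtype=A.dtype)
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # One block update: C_ij += A_ik @ B_kj
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C
```

With a fast memory of size M, taking b on the order of sqrt(M/3) brings the number of words moved down from O(n^3) to O(n^3 / sqrt(M)), which matches the known lower bound for classical matrix multiplication.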
The workshop will focus on applications of pseudorandomness and high dimensional expanders to areas in computer science and mathematics. A tentative list of areas covered will be applications in coding theory, in particular to local codes both classical...
Matrix multiplication is an essential computational kernel used in scientific computing, AI, and many other fields. Despite over five decades of research on sub-cubic-time algorithms, most current math libraries and state-of-the-art hardware accelerators still use the cubic-time classical algorithm for matrix multiplication. This means that scientific libraries and industry standards rely on a solution that is suboptimal in both performance and power consumption. Why is that?
One reason for this failure is that many of the sub-cubic algorithms outperform the classical one only for matrices of enormous dimension and carry large hidden constants in their arithmetic complexity, which makes them impractical. Other challenges include communication costs, numerical stability issues, and a poor fit between the algorithms and existing hardware and software.
In this talk, I will present a brief history of the ongoing race for matrix multiplication algorithms that are faster in practice.
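As background for the sub-cubic algorithms the abstract refers to, here is a minimal sketch of one level of Strassen's recursion, the first algorithm to break the cubic barrier: it multiplies two even-dimension matrices using 7 block products instead of 8. Applied recursively, this yields O(n^2.81) arithmetic; the extra block additions are one source of the hidden constants mentioned above.

```python
import numpy as np

def strassen_one_level(A, B):
    """One level of Strassen's algorithm for n-by-n matrices, n even:
    7 half-size multiplications (M1..M7) replace the 8 of the
    classical 2x2 block formula."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty((2 * n, 2 * n), dtype=M1.dtype)
    C[:n, :n] = M1 + M4 - M5 + M7   # C11
    C[:n, n:] = M3 + M5             # C12
    C[n:, :n] = M2 + M4             # C21
    C[n:, n:] = M1 - M2 + M3 + M6   # C22
    return C
```

A full implementation would recurse on the 7 sub-products and switch to the classical algorithm below some crossover size; choosing that crossover well is exactly the kind of practical engineering the talk's "race" is about.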