Abstract
After Luca hosted a workshop in Milan in the summer of 2022, we took the train together to Rome. Along the way we talked about sparsification, random sampling, and mirror descent. After a party Luca and Junce hosted, half of us got COVID, and I spent a week quarantined in my hotel room sending Luca hastily drafted arguments, which he read despite making it very clear that there are more appropriate ways to spend one's time when sick. (Last summer at Simons, we finally sat together and worked out the details.)
Those conversations revolved around the following question:
Given a sum of functions F = f_1 + ... + f_m on R^n, how does one choose probabilities so that independent subsampling of the terms yields a sparse approximation to F?

This question has deep connections to high-dimensional geometry and functional analysis, and a range of applications to fast algorithms for linear regression and for optimizing generalized linear models.
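To make the question concrete, here is a minimal sketch (in Python, with hypothetical names like sparsify and scores) of the generic recipe: keep each term with probability p_i proportional to some importance score, and reweight every kept term by 1/(k p_i) so that the sparse sum is an unbiased estimator of F. The substance of the question is how to choose the p_i so that few samples suffice for a uniform approximation.

```python
import numpy as np

def sparsify(fs, scores, k, rng=None):
    """Importance-sample k of the m terms f_1, ..., f_m.

    Hypothetical sketch: each term f_i is drawn with probability
    p_i proportional to its score, and every sampled term is
    reweighted by 1/(k * p_i), so the sparse sum is an unbiased
    estimator of F = f_1 + ... + f_m at every point x.
    """
    rng = np.random.default_rng(rng)
    p = np.asarray(scores, dtype=float)
    p /= p.sum()                            # normalize to probabilities
    idx = rng.choice(len(fs), size=k, p=p)  # sample k indices with replacement
    weights = 1.0 / (k * p[idx])            # reweighting for unbiasedness

    def F_tilde(x):
        # Sparse surrogate: a weighted sum of only the k sampled terms.
        return sum(w * fs[i](x) for i, w in zip(idx, weights))

    return F_tilde
```

For least-squares regression, where f_i(x) = (a_i^T x - b_i)^2, taking the scores to be the leverage scores of the rows a_i recovers classical row-sampling sparsification; for generalized linear models the analogous quantities are term sensitivities, and bounding how large k must be is where the geometry enters.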
Based on joint work with Arun Jambulapati, Yang Liu, and Aaron Sidford.