Abstract
The problem of subdeterminant maximization subject to combinatorial constraints arises in various algorithmic and machine learning settings, and, recently, convex-programming-based estimation algorithms for it have been proposed. The question of whether these estimation algorithms can be converted into the more useful approximation algorithms -- ones that also output a set -- remained open.
We present novel nonconvex formulations for the constrained subdeterminant maximization problem and prove a new anti-concentration inequality for dependent random variables that together allow us to obtain approximation algorithms for this problem in a variety of settings.
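To make the problem concrete, here is a minimal brute-force sketch of cardinality-constrained subdeterminant maximization: given a positive semidefinite matrix L, find the size-k subset S of indices maximizing det(L[S, S]). All names in this snippet are illustrative; the exhaustive search shown here is exponential in n and is not the talk's method, which is based on nonconvex formulations and anti-concentration arguments.

```python
from itertools import combinations

import numpy as np


def max_subdeterminant(L: np.ndarray, k: int):
    """Brute-force search: return (best_set, best_det) over all k-subsets.

    L is assumed symmetric positive semidefinite, so every principal
    minor det(L[S, S]) is nonnegative.
    """
    n = L.shape[0]
    best_set, best_det = None, -np.inf
    for S in combinations(range(n), k):
        # Principal submatrix indexed by S in both rows and columns.
        d = np.linalg.det(L[np.ix_(S, S)])
        if d > best_det:
            best_set, best_det = S, d
    return best_set, best_det


# Tiny example on a 3x3 PSD matrix with a cardinality-2 constraint.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
S, d = max_subdeterminant(A, 2)
```

For this matrix the best pair is {0, 2}, whose principal minor has determinant 4; the approximation algorithms discussed in the talk aim to find such a set efficiently under far more general combinatorial constraints.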
Based on joint work with Javad Ebrahimi and Damian Straszak, available at https://arxiv.org/abs/1707.02757