Abstract

Consider the sparse approximation or best subset selection problem: given a vector y and a matrix A, find a k-sparse vector x (one with at most k nonzero entries) that minimizes the residual ||Ax - y||. This sparse linear regression problem and its variants play a key role in high-dimensional statistics, compressed sensing, and more. In this talk we focus on the trimmed lasso penalty, defined as the L_1 norm of x minus the L_1 norm of its top k entries in absolute value. We advocate using this penalty by deriving sparse recovery guarantees for it, and by presenting a practical approach to optimize it. Our computational approach is based on the generalized soft-min penalty, a smooth surrogate that takes into account all possible k-sparse patterns. We derive a polynomial time algorithm to compute it, which in turn yields a novel method for the best subset selection problem. Numerical simulations illustrate its competitive performance compared to the current state of the art.
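
To make the definition of the trimmed lasso concrete, here is a minimal sketch (in Python/NumPy, not part of the talk material) of how the penalty value can be computed: it equals the sum of the n-k smallest absolute entries of x, and is zero exactly when x is k-sparse. The function name trimmed_lasso is hypothetical.

```python
import numpy as np

def trimmed_lasso(x, k):
    """Trimmed lasso penalty: the L_1 norm of x minus the L_1 norm of its
    k largest-magnitude entries, i.e. the sum of the n-k smallest |x_i|."""
    a = np.sort(np.abs(x))               # absolute values in ascending order
    return a[:max(len(x) - k, 0)].sum()  # leave out the k largest

# Example: a 3-sparse vector incurs zero penalty for k = 3
x = np.array([0.0, 2.0, 0.0, -1.5, 0.3])
print(trimmed_lasso(x, k=3))  # 0.0
print(trimmed_lasso(x, k=2))  # 0.3
```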
