Abstract

Two common approaches to low-rank optimization problems are either to work directly with a rank constraint on the matrix variable, or to optimize over a low-rank factorization so that the rank constraint is enforced implicitly. In this talk, we show the natural connection between the rank-constrained and factorized approaches. In particular, we show that all second-order stationary points of the factorized objective function correspond to fixed points of projected gradient descent run on the original problem (where the projection step enforces the rank constraint). This result allows us to unify many existing optimization guarantees that have been proved specifically in either the rank-constrained or the factorized setting, and it leads to new results for certain settings of the problem. A major tool for handling the low-rank constraint is the local concavity coefficient, which aims to measure the concavity of the rank-constrained matrix space. We demonstrate the application of our results to several concrete low-rank optimization problems arising in matrix inverse problems.
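
As a concrete illustration of the two approaches (a minimal sketch, not part of the talk), the code below runs projected gradient descent with a truncated-SVD projection onto the rank-r set, and gradient descent on a factorized objective g(U, V) = f(U V^T), on a toy matrix completion instance. The objective f(X) = 0.5 ||mask * (X - M)||_F^2, the rank r, the step sizes, and the spectral initialization are illustrative assumptions rather than the specific setting analyzed in the talk.

    import numpy as np

    def project_rank(X, r):
        """Project X onto the set of matrices of rank at most r via truncated SVD."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r, :]

    def grad_f(X, M_obs, mask):
        """Gradient of f(X) = 0.5 * ||mask * (X - M_obs)||_F^2 (toy matrix completion)."""
        return mask * (X - M_obs)

    def projected_gradient_descent(M_obs, mask, r, eta=1.0, iters=500):
        """Rank-constrained approach: gradient step followed by projection onto rank <= r."""
        X = np.zeros_like(M_obs)
        for _ in range(iters):
            X = project_rank(X - eta * grad_f(X, M_obs, mask), r)
        return X

    def factored_gradient_descent(M_obs, mask, r, iters=500):
        """Factorized approach: minimize g(U, V) = f(U V^T) over the factors U, V."""
        p = mask.mean()                          # observed fraction
        U0, s0, V0t = np.linalg.svd(M_obs / p, full_matrices=False)
        U = U0[:, :r] * np.sqrt(s0[:r])          # spectral initialization
        V = V0t[:r, :].T * np.sqrt(s0[:r])
        eta = 0.25 / s0[0]                       # conservative step size
        for _ in range(iters):
            G = grad_f(U @ V.T, M_obs, mask)     # gradient of f at X = U V^T
            U, V = U - eta * G @ V, V - eta * G.T @ U
        return U @ V.T

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, m, r = 30, 20, 2
        M_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
        mask = (rng.random((n, m)) < 0.6).astype(float)   # ~60% of entries observed
        M_obs = mask * M_true

        for name, X_hat in [("projected GD", projected_gradient_descent(M_obs, mask, r)),
                            ("factored GD", factored_gradient_descent(M_obs, mask, r))]:
            err = np.linalg.norm(X_hat - M_true) / np.linalg.norm(M_true)
            print(f"{name}: relative recovery error = {err:.2e}")

The correspondence described in the abstract relates the limit points of these two iterations: second-order stationary points of the factorized objective g(U, V) = f(U V^T) correspond to fixed points of the projected gradient iteration on f with the rank projection.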