Non-negative matrix factorization is a ubiquitous machine learning primitive: the task of decomposing data as a non-negative combination of non-negative features. It is known to be NP-hard in the worst case. In practice, however, a popular family of heuristics proceeds by simply alternating between decomposing the samples using the current estimate of the features (colloquially called decoding) and updating the features based on that decoding.
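To make the "decode-and-update" pattern concrete, here is a minimal sketch of one standard instantiation, the Lee–Seung multiplicative updates. This is an illustrative stand-in, not necessarily the algorithm analyzed in the work; the function name and parameters are chosen for exposition.

```python
import numpy as np

def nmf_decode_update(X, k, n_iters=200, eps=1e-9, seed=0):
    """Alternating 'decode-and-update' NMF via multiplicative updates.

    X : (n_features, n_samples) non-negative data matrix
    k : inner dimension of the factorization X ~ W @ H
    Returns non-negative factors W (features) and H (decodings).
    Illustrative sketch; hyperparameters are arbitrary.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + eps   # current feature estimates
    H = rng.random((k, m)) + eps   # current decodings of the samples
    for _ in range(n_iters):
        # decode: re-express the samples using the current features W
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        # update: refit the features based on the current decoding H
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form keeps both factors entrywise non-negative at every step, which is why this family of heuristics is so simple to run in practice.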
This work proposes a fairly weak generative model for the data and provides guarantees for such "decode-and-update" algorithms for non-negative matrix factorization. Importantly, the generative model does not require separability assumptions on the feature matrix, an assumption common in prior provable results on non-negative matrix factorization. Interestingly, our analysis does not fall into the "local strong convexity" framework often used to analyze non-convex optimization.
Based on joint work with Yuanzhi Li and Andrej Risteski.