Abstract

Matrix completion is the study of recovering an underlying matrix from a sparse subset of noisy observations. Traditionally, it is assumed that the entries of the matrix are "missing completely at random" (MCAR), i.e., each entry is revealed at random, independent of everything else, with uniform probability. This is likely unrealistic due to the presence of "latent confounders", i.e., unobserved factors that determine both the entries of the underlying matrix and the missingness pattern in the observed matrix. For example, in the context of movie recommender systems---a canonical application for matrix completion---a user who vehemently dislikes horror films is unlikely to ever watch horror films.
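
As a toy illustration of such a confounded missingness mechanism (a minimal sketch, not the model from the talk), the Python snippet below generates a low-rank ratings matrix from latent factors and reveals each entry with a probability that depends on the entry itself, so low ratings are observed far less often than high ones; all variable names and parameter values here are hypothetical.

```python
# Sketch of an MNAR observation mechanism: the same latent factors drive both
# the underlying ratings and the probability that an entry is revealed.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, rank = 200, 300, 3

U = rng.normal(size=(n_users, rank))   # latent user factors
V = rng.normal(size=(n_movies, rank))  # latent movie factors
ratings = U @ V.T                      # underlying matrix we wish to recover

# MNAR: entries with low underlying ratings (e.g., a horror film for a user
# who dislikes horror) are much less likely to be observed.
propensity = 1.0 / (1.0 + np.exp(-ratings))
mask = rng.uniform(size=ratings.shape) < propensity   # True = observed

noisy = ratings + rng.normal(scale=0.5, size=ratings.shape)
observed = np.where(mask, noisy, np.nan)               # matrix the analyst sees

print(f"fraction observed: {mask.mean():.2f}")
```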

In general, these confounders yield "missing not at random" (MNAR) data, which can severely impact any inference procedure that does not correct for this bias. Inspired by a recent development in the causal inference literature, we develop a formal causal model for matrix completion through the language of potential outcomes, and provide novel identification arguments for a variety of causal estimands of interest. We design a procedure, which we call "synthetic nearest neighbors," to estimate these causal estimands. We prove finite-sample consistency and asymptotic normality of our estimator. Our analysis also leads to new theoretical results for the matrix completion literature.

In particular, we establish entry-wise, i.e., max-norm, finite-sample consistency and asymptotic normality results for matrix completion with MNAR data. Across simulated and real data, we demonstrate the efficacy of our proposed estimator.
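
For intuition only, the sketch below implements a plain row-nearest-neighbor imputation in Python: each missing entry is filled with the average of that column across the rows most similar to it on their commonly observed entries. This is a generic baseline, not the "synthetic nearest neighbors" procedure from the talk; the function name and parameters are our own and purely illustrative.

```python
# Generic nearest-neighbor imputation baseline (illustrative only).
import numpy as np

def nn_impute(observed, n_neighbors=10):
    """Fill NaNs by averaging the same column across the most similar rows,
    where similarity is mean squared distance over commonly observed columns."""
    filled = observed.copy()
    n_rows = observed.shape[0]
    for i in range(n_rows):
        missing_cols = np.where(np.isnan(observed[i]))[0]
        if missing_cols.size == 0:
            continue
        # Distance from row i to every other row on jointly observed columns.
        dists = np.full(n_rows, np.inf)
        for j in range(n_rows):
            if j == i:
                continue
            common = ~np.isnan(observed[i]) & ~np.isnan(observed[j])
            if common.sum() > 0:
                dists[j] = np.mean((observed[i, common] - observed[j, common]) ** 2)
        neighbors = np.argsort(dists)[:n_neighbors]
        for c in missing_cols:
            vals = observed[neighbors, c]
            vals = vals[~np.isnan(vals)]
            if vals.size > 0:
                filled[i, c] = vals.mean()
    return filled
```

Applied to the `observed` matrix from the earlier sketch, such a baseline makes no correction for the MNAR mechanism, which is precisely the gap the talk's estimator and its entry-wise guarantees are designed to address.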

This is based on joint work with Anish Agarwal (MIT), Munther Dahleh (MIT), and Dennis Shen (UC Berkeley).

Video Recording