Abstract

Sparsity is a popular and widely explored way to study the role of dimensionality reduction in signal reconstruction tasks. Recently, generative models defined by deep networks, such as variational auto-encoders (VAEs) and generative adversarial networks (GANs), have gained popularity as alternative parameterisations for low-dimensional latent representations of signals. In this talk, I give a detailed account of how an analytically tractable deep generative network helps reconstruction in three popular inverse problems: spiked matrix factorisation, compressed sensing and phase retrieval. I will discuss the performance of the optimal Bayesian estimator and introduce a tailored message passing algorithm for approximating the marginals of the posterior. In particular, this will allow us to characterise and compare the statistical-to-computational gap (or lack thereof) of generative priors with their sparse counterparts.
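As a toy illustration of the idea behind generative priors in compressed sensing (a simplified sketch, not the talk's actual setting, which uses a deep network and a Bayesian analysis), one can replace a sparsity assumption with the assumption that the signal lies in the range of a generative map. With a hypothetical *linear* map `G(z) = W z`, recovering the signal from fewer measurements than its ambient dimension reduces to a least-squares problem in the low-dimensional latent space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: n-dim signal, k-dim latent, m measurements with k <= m < n.
n, k, m = 100, 5, 20

# Hypothetical linear "generative prior": the signal is W @ z for some latent z.
W = rng.standard_normal((n, k))
z_true = rng.standard_normal(k)
x_true = W @ z_true

# Compressed sensing: observe only m < n random linear measurements y = A @ x.
A = rng.standard_normal((m, n))
y = A @ x_true

# Under the linear prior, recovery is least squares over the latent variable:
# minimise ||y - A W z||^2. Since m >= k and A @ W is generically full rank,
# the latent (and hence the signal) is recovered exactly despite m < n.
z_hat, *_ = np.linalg.lstsq(A @ W, y, rcond=None)
x_hat = W @ z_hat

print(np.linalg.norm(x_hat - x_true))  # near machine precision
```

A deep generative prior replaces `W z` with a nonlinear multi-layer map, which is what makes the Bayes-optimal analysis and the message passing algorithm discussed in the talk nontrivial.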

Attachment

Video Recording