Abstract

Distributed source coding (DSC) is the task of encoding an input when correlated side information is available only to the decoder, not to the encoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder without access to the side information can asymptotically achieve the same compression rate as an encoder that does have it. While there is vast prior work on this topic, practical DSC has been limited to synthetic datasets and specific correlation structures. In this talk, we present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions. Rather than relying on hand-crafted source modeling, our method utilizes a conditional Vector-Quantized Variational AutoEncoder (VQ-VAE) to learn the distributed encoder and decoder; a rough sketch of such an architecture is given below. We evaluate our method on multiple datasets and show that it handles complex correlations and achieves state-of-the-art PSNR. This is based on joint work with Jay Whang, Alliot Nagle, Anish Acharya, and Alex Dimakis.
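To make the setup concrete, the following is a minimal sketch (in PyTorch) of a conditional VQ-VAE for distributed coding, not the authors' actual model: the encoder compresses the source x without seeing the side information y and transmits only discrete code indices, while the decoder reconstructs x from those codes together with y. All layer sizes and the `DistributedVQVAE` / `VectorQuantizer` names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Standard VQ layer with straight-through gradients."""
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z):                                   # z: (B, C, H, W)
        z_flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        idx = torch.cdist(z_flat, self.codebook.weight).argmin(dim=1)  # indices to transmit
        z_q = self.codebook(idx).view(z.shape[0], z.shape[2], z.shape[3], -1)
        z_q = z_q.permute(0, 3, 1, 2)
        # Codebook + commitment losses; straight-through estimator for the encoder.
        vq_loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()
        return z_q, idx, vq_loss

class DistributedVQVAE(nn.Module):
    def __init__(self, in_ch=3, side_ch=3, dim=64):
        super().__init__()
        # Distributed encoder: sees only the source x.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, dim, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, 2, 1),
        )
        self.vq = VectorQuantizer(dim=dim)
        # Decoder is conditioned on the side information y.
        self.side_enc = nn.Sequential(
            nn.Conv2d(side_ch, dim, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, 2, 1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * dim, dim, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(dim, in_ch, 4, 2, 1),
        )

    def forward(self, x, y):
        # Only the code indices `idx` would need to be sent to the decoder.
        z_q, idx, vq_loss = self.vq(self.encoder(x))
        x_hat = self.decoder(torch.cat([z_q, self.side_enc(y)], dim=1))
        return x_hat, F.mse_loss(x_hat, x) + vq_loss

# Toy usage: x is the source, y a correlated view available only at the decoder.
model = DistributedVQVAE()
x, y = torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32)
x_hat, loss = model(x, y)
loss.backward()
```

The key structural choice is that y enters only on the decoder side, so the transmitted rate depends solely on the quantized codes of x, mirroring the Slepian-Wolf setting described above.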

Video Recording