Abstract

We present Generative Adversarial Privacy and Fairness (GAPF), a data-driven framework for learning private and fair representations of large-scale datasets. GAPF leverages recent advances in generative adversarial networks (GANs) to allow a data holder to learn "universal" data representations that decouple a set of sensitive attributes from the rest of the dataset. Under GAPF, finding the optimal privacy/fairness mechanism is formulated as a constrained minimax game between a private/fair encoder and an adversary. We show that for appropriately chosen adversarial loss functions, GAPF provides privacy guarantees against information-theoretic adversaries and enforces demographic parity. We also evaluate the performance of GAPF on the GENKI and CelebA face datasets and the Human Activity Recognition (HAR) dataset.
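The minimax structure described above can be illustrated with a toy alternating gradient scheme: one player (standing in for the encoder) descends its loss while the other (standing in for the adversary) ascends its payoff. This is a sketch only; the objective `f(e, a) = (e - 1)**2 - (a - 2)**2`, the parameter names, and the step sizes are hypothetical stand-ins, not the actual GAPF objective or training procedure.

```python
# Toy alternating minimax optimization on a simple saddle objective
#   f(e, a) = (e - 1)^2 - (a - 2)^2
# The "encoder" variable e minimizes f; the "adversary" variable a
# maximizes f. All values here are illustrative, not from GAPF.

def minimax_toy(steps=500, lr=0.05):
    e, a = 5.0, -3.0  # arbitrary starting points for the two players
    for _ in range(steps):
        # Encoder step: gradient descent on f with respect to e.
        grad_e = 2 * (e - 1)
        e -= lr * grad_e
        # Adversary step: gradient ascent on f with respect to a.
        grad_a = -2 * (a - 2)
        a += lr * grad_a
    return e, a

e, a = minimax_toy()
# Both players approach the saddle point (e, a) = (1, 2).
```

Because this toy objective is separable, alternating updates converge cleanly to the saddle point; real adversarial training couples the two players and is considerably harder to stabilize.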

Based on joint work with Chong Huang (ASU), Xiao Chen and Ram Rajagopal (Stanford), and Lalitha Sankar (ASU).

Video Recording