Abstract

Building on my previous talk on information leakage measures and their value in providing privacy guarantees against learning adversaries, this talk discusses the robustness of the privacy guarantees that alpha-leakage measures can provide when mechanisms are designed from a finite number of samples. The talk will also highlight recent work by researchers in the information theory community on additive noise mechanisms. Finally, we will focus on the significance of the adversarial model in understanding both mechanism design and the guarantees provided.
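For readers unfamiliar with the measure, a minimal sketch of the definition follows. This is the standard tunable alpha-leakage formulation from the information-theoretic privacy literature; the notation is illustrative and not taken from the talk itself.

% Alpha-leakage of order alpha > 1 from sensitive data X to released data Y,
% expressed as the adversary's multiplicative gain, after observing Y, in a
% tunable guessing objective for X:
\[
  \mathcal{L}_\alpha(X \to Y)
  = \frac{\alpha}{\alpha - 1}
    \log \frac{\displaystyle \max_{P_{\hat{X} \mid Y}} \mathbb{E}\!\left[ P_{\hat{X} \mid Y}(X \mid Y)^{\frac{\alpha - 1}{\alpha}} \right]}
              {\displaystyle \max_{P_{\hat{X}}} \mathbb{E}\!\left[ P_{\hat{X}}(X)^{\frac{\alpha - 1}{\alpha}} \right]}
\]
% This quantity coincides with the Arimoto mutual information of order alpha;
% the limits alpha -> 1 and alpha -> infinity recover Shannon mutual information
% and maximal leakage, respectively, which is what makes the family tunable.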

The mechanism design results are based on joint work with Hao Wang, Mario Diaz, and Flavio Calmon. The discussion of additive noise mechanisms describes work by Prakash Narayan and Arvind Nageswaran. The material on adversarial models and loss functions is based on joint work with Tyler Sypherd, Mario Diaz, and Peter Kairouz.
