Abstract

In many data-sharing or learning applications, it is essential to prevent unnecessary inference or inappropriate use of sensitive data while simultaneously guaranteeing the usefulness of the data. It is now well accepted that randomizing mechanisms are needed to ensure privacy or fairness. In this talk, we will discuss a recently introduced class of leakage measures that quantify the information a learning adversary can infer from a post-randomized dataset. In particular, we will focus on maximal alpha leakage, a new class of adversarially motivated, tunable leakage measures based on guessing an arbitrary function of a dataset conditioned on the released dataset. The choice of alpha determines the specific adversarial action, ranging from refining a belief for alpha = 1 to guessing the best (maximum a posteriori) estimate for alpha = ∞. The relationship of this measure to mutual information, maximal leakage, maximal information, Rényi DP, and local DP will be discussed, in particular from the viewpoint of adversarial actions. The tutorial-style talk will also include a discussion of adversarial knowledge of side information, as well as the consequences of using this measure to design privacy mechanisms.
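As background (an illustrative sketch of our own, not a quote of the talk's definitions), one quantity closely tied to maximal alpha leakage is the Sibson mutual information of order alpha, whose two endpoint behaviors match the adversarial actions described above: Shannon mutual information as alpha → 1 and the maximal-leakage expression of a best-posterior-guessing adversary as alpha → ∞.

\[
  I_\alpha^{S}(X;Y) \;=\; \frac{\alpha}{\alpha-1}\,
  \log \sum_{y} \Bigl( \sum_{x} P_X(x)\, P_{Y\mid X}(y\mid x)^{\alpha} \Bigr)^{1/\alpha}
\]
\[
  \lim_{\alpha \to 1} I_\alpha^{S}(X;Y) \;=\; I(X;Y),
  \qquad
  \lim_{\alpha \to \infty} I_\alpha^{S}(X;Y) \;=\; \log \sum_{y} \max_{x:\,P_X(x)>0} P_{Y\mid X}(y\mid x).
\]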

This is joint work with Jiachun Liao, Oliver Kosut, and Flavio Calmon.

Video Recording