Abstract

The vast majority of the computer science literature on privacy can be broadly divided into two categories: inferential, where the aim is to bound the inferences an adversary can make based on auxiliary information, and differential, where the goal is to ensure that the participation of an individual or entity does not significantly change the outcome.

In this talk, I will present two new case studies, one in each framework. The first looks at a form of inferential privacy that, in the local setting, allows control at a finer granularity than the individual level. The second looks at privacy against adversaries with bounded learning capacity, and has ties to the theory of generative adversarial networks.