Abstract

How much information is "leaked" in a side channel? Despite decades of work on these channels, including the development of many sophisticated mitigation mechanisms for specific side channels, the fundamental question of how to measure the key quantity of interest---leakage---has received surprisingly little attention. Many metrics have been used in the literature, but they either lack a cogent operational justification or mislabel obviously insecure systems as secure.

We propose a new metric called "maximal leakage," defined as the logarithm of the multiplicative increase, upon observing the public data, of the probability of correctly guessing a randomized function of the private information, maximized over all such randomized functions.
We provide an operational justification for this definition, show how it can be computed in practice, and discuss how it relates to existing metrics, including mutual information, local differential privacy, and a certain under-appreciated metric in the computer science literature. We also present some structural results for optimal mechanisms under this metric. Among other findings, we show that mutual information underestimates leakage while local differential privacy overestimates it.
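For a discrete channel, maximal leakage is known to admit a simple closed form: the logarithm of the sum, over outputs y, of the maximum over inputs x of P(y|x). The sketch below illustrates this formula; the function name and list-of-rows channel representation are our own choices for the example, not notation from the talk.

```python
import math

def maximal_leakage(channel):
    """Maximal leakage (in bits) of a discrete channel P(Y|X).

    `channel` is a list of rows, one per input x, where row[y] = P(Y=y | X=x).
    Uses the closed form: L = log2( sum_y max_x P(y|x) ).
    """
    num_outputs = len(channel[0])
    total = sum(max(row[y] for row in channel) for y in range(num_outputs))
    return math.log2(total)

# A noiseless channel on two symbols reveals its input entirely: 1 bit leaked.
identity = [[1.0, 0.0], [0.0, 1.0]]
print(maximal_leakage(identity))  # 1.0

# A channel whose output is independent of its input leaks nothing: 0 bits.
constant = [[0.5, 0.5], [0.5, 0.5]]
print(maximal_leakage(constant))  # 0.0
```

Note that the formula depends on the prior over the private data only through its support, which is one reason the metric is attractive in practice.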

This is joint work with Ibrahim Issa, Sudeep Kamath, Ben Wu, and Ed Suh.