Abstract

We will discuss two fundamental inference tasks, distribution learning and testing, under Local Differential Privacy (LDP).

We start by considering density estimation (distribution learning). We will propose two private-coin LDP schemes, which require log k bits and one bit of communication per player, respectively, while remaining sample-optimal. This is in contrast with all previously known schemes (e.g., [EPK14, DJW13, KBW16, YB18]), which require \Omega(k) communication per player. It is also known that the availability of public coins does not change the sample complexity of learning.
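To make the setting concrete, here is a minimal sketch (not the scheme from the talk, and not sample-optimal) of what a one-bit, private-coin, eps-LDP protocol for frequency estimation can look like: each player is deterministically assigned a symbol and sends a single randomized-response bit about it, which the server debiases and averages. All function names and the coordinate-assignment rule below are illustrative assumptions.

```python
# Illustrative one-bit, private-coin, eps-LDP frequency estimation (a basic
# baseline, NOT the sample-optimal scheme proposed in the talk).
import numpy as np

def randomize_bit(b, eps, rng):
    """Binary randomized response: keep the bit w.p. e^eps/(1+e^eps), else flip it."""
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    return b if rng.random() < keep else 1 - b

def player_message(i, x_i, k, eps, rng):
    """Player i sends one eps-LDP bit: a noisy indicator of whether x_i equals symbol i mod k."""
    return randomize_bit(int(x_i == i % k), eps, rng)

def estimate_distribution(messages, k, eps):
    """Server debiases each one-bit report and averages per assigned symbol."""
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    est, counts = np.zeros(k), np.zeros(k)
    for i, y in enumerate(messages):
        j = i % k
        # E[y | b] = (1-keep) + b*(2*keep-1), so this is an unbiased estimate of 1[x_i == j].
        est[j] += (y - (1 - keep)) / (2 * keep - 1)
        counts[j] += 1
    return est / np.maximum(counts, 1)

rng = np.random.default_rng(0)
k, eps, n = 10, 1.0, 200_000
p = np.full(k, 1.0 / k)                      # true (unknown) distribution
samples = rng.choice(k, size=n, p=p)
msgs = [player_message(i, x, k, eps, rng) for i, x in enumerate(samples)]
print(np.round(estimate_distribution(msgs, k, eps), 3))
```

Each message is a single bit and each player's randomization is local (private coins only); the point of the talk's schemes is to achieve this communication while also matching the optimal sample complexity, which this naive baseline does not.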

We then consider the distribution testing (goodness-of-fit) problem, where the goal is to test whether an unknown distribution is equal to a known reference distribution or far from it. Unlike for distribution learning, we show that the sample complexity drops dramatically for public-coin schemes compared to private-coin protocols.

To prove the lower bounds, we reduce the problem to bounding a new quantity we define, the chi^2 contraction. For public-coin schemes, we bound the min-max chi^2 contraction, and for private-coin schemes, we bound the max-min chi^2 contraction. Here the min is over appropriately chosen priors, and the max is over the LDP schemes.
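As a hedged reminder of the standard notions this quantity builds on (notation assumed here, not taken verbatim from the talk):

```latex
% For distributions p, q over a discrete domain [k],
\[
  \chi^2(p \,\|\, q) \;=\; \sum_{x \in [k]} \frac{\bigl(p(x) - q(x)\bigr)^2}{q(x)} ,
\]
% and for an LDP channel W, writing pW for the output distribution when the
% input is drawn from p, the data-processing inequality for f-divergences gives
\[
  \chi^2\!\bigl(pW \,\|\, qW\bigr) \;\le\; \chi^2\!\bigl(p \,\|\, q\bigr).
\]
% Informally, the chi^2 contraction quantifies how much this divergence must
% shrink under an LDP channel; the min-max / max-min versions order the min
% over priors and the max over LDP schemes as described above.
```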

Joint work with Clement Canonne, Cody Freitag, Ziteng Sun, Himanshu Tyagi, and Huanyu Zhang.

Video Recording