Abstract

In this talk we establish a connection between certain notions of algorithmic stability and differential privacy. The main thesis is that a stable algorithm (under suitable notions of stability) can be transformed into a differentially private algorithm with good utility guarantees. In particular, we discuss two notions of stability: i) perturbation stability, and ii) sub-sampling stability. Based on these notions, we provide two generic approaches for designing differentially private algorithms.
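To make the sub-sampling-stability approach concrete, here is a minimal Python sketch in that spirit: run a non-private selection procedure on many random subsamples, and release the plurality output only if its noisy vote margin clears a threshold. (The perturbation-stability approach follows a similar test-then-release pattern, testing stability at the given input before releasing an exact answer.) The function name, threshold, and noise calibration below are illustrative assumptions, not the exact mechanism from the talk.

```python
import numpy as np

def subsample_stable_release(data, select, q=0.1, m=60, epsilon=1.0, rng=None):
    """Hypothetical sketch of a sub-sampling-stability based release.

    Runs the (non-private) procedure `select` on m random subsamples of
    the data, counts how often each output occurs, and releases the most
    frequent output only if its noisy margin over the runner-up is large.
    The threshold and noise scale are placeholders; the real calibration
    (and the privacy proof) depend on q, m, and epsilon.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    votes = {}
    for _ in range(m):
        idx = rng.choice(n, size=max(1, int(q * n)), replace=False)
        out = select([data[i] for i in idx])   # output must be hashable, e.g. a frozenset
        votes[out] = votes.get(out, 0) + 1
    ranked = sorted(votes.values(), reverse=True) + [0]
    # Noisy margin between the top two vote counts (Laplace mechanism).
    margin = ranked[0] - ranked[1] + rng.laplace(scale=2.0 / epsilon)
    if margin > 6.0 / epsilon:                 # placeholder stability threshold
        return max(votes, key=votes.get)
    return None                                # fail closed when not stable enough
```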

In the second part of the talk, we apply the generic approaches from the first part to the problem of sparse linear regression in high dimensions. We show that one can design differentially private model (feature) selection algorithms for this problem. Moreover, these algorithms have (nearly) optimal sample complexity. We use the celebrated LASSO estimator as our basic building block.
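As an illustration of how LASSO could serve as the base (non-private) selector in such a scheme, here is a hypothetical sketch that returns the support of a LASSO fit; the regularization parameter and the wrapper call are assumptions for illustration, not the tuning from the talk.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_support(sample, alpha=0.1):
    """Non-private base selector: the support (selected feature set) of a
    LASSO fit. The choice of `alpha` is an arbitrary placeholder; in
    practice it governs the sparsity and the stability of the support."""
    X = np.array([x for x, _ in sample])
    y = np.array([t for _, t in sample])
    coef = Lasso(alpha=alpha, max_iter=10000).fit(X, y).coef_
    return frozenset(np.flatnonzero(np.abs(coef) > 1e-8))

# Plugging the selector into the sub-sampling wrapper sketched earlier,
# where the data is a list of (feature_vector, response) pairs:
# private_support = subsample_stable_release(list(zip(X, y)), lasso_support)
```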

Based on joint work with Adam Smith.
