Privacy amplification by sampling reduces the noise needed to achieve a target privacy guarantee when training a model with differentially private stochastic gradient descent (DP-SGD), by compounding the randomness in forming batches with the randomness of noise addition in DP-SGD. Historically, the literature on DP-SGD has assumed batches are formed via Poisson sampling, but in practice shuffling-like methods were used, so the amplification gains established in the literature could not be realized in practice. Recently, infrastructure has caught up and privacy amplification by sampling is now feasible in practice, but with restrictions that introduce new technical challenges. This talk will cover recent work on these challenges, including (i) handling the need for fixed-size batches due to JIT compilation, (ii) privacy amplification for correlated noise mechanisms, and (iii) privacy guarantees that are robust to side-channel information.
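To make the Poisson-sampling assumption concrete, here is a minimal illustrative sketch of one DP-SGD step with Poisson-sampled batches, clipped per-example gradients, and Gaussian noise. This is not the speaker's implementation; all function names and parameter values are hypothetical, and per-example gradients are precomputed for simplicity.

```python
import numpy as np

def poisson_sample(n, q, rng):
    """Each of the n examples joins the batch independently with
    probability q, so the batch size is random (Binomial(n, q))."""
    return np.nonzero(rng.random(n) < q)[0]

def dp_sgd_step(grads, batch_idx, clip_norm, noise_mult, q, n, rng):
    """One DP-SGD step on a Poisson-sampled batch.
    grads: (n, d) array of per-example gradients (hypothetical setup)."""
    g = grads[batch_idx]
    # Clip each per-example gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)
    # Sum and add Gaussian noise scaled to the clipping norm.
    noisy_sum = g.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=grads.shape[1])
    # Normalize by the expected batch size q * n.
    return noisy_sum / (q * n)

rng = np.random.default_rng(0)
n, d, q = 1000, 5, 0.01
grads = rng.normal(size=(n, d))
batch = poisson_sample(n, q, rng)  # note: variable-size batch
step = dp_sgd_step(grads, batch, clip_norm=1.0, noise_mult=1.1,
                   q=q, n=n, rng=rng)
```

The variable batch size produced by `poisson_sample` is exactly what clashes with JIT-compiled training loops, which expect fixed tensor shapes; this is challenge (i) in the abstract.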
Modern machine-learning and AI systems are tremendously useful, but they bring with them an array of new privacy, security, and trust concerns. Complicating the situation even further is that many learning systems today operate in decentralized settings...