Federated learning must simultaneously handle statistical heterogeneity, adversarial participants, and privacy against an honest-but-curious server. While each of these challenges is well understood in isolation, their interaction fundamentally changes what distributed learning algorithms can achieve.
I will present a tight characterization of the privacy–robustness–utility trade-off in distributed learning. We show that any algorithm that is both robust to a fraction of adversarial participants and locally differentially private must incur an unavoidable excess error. Beyond the separate costs of privacy and robustness, there is a coupling penalty governed by the corruption fraction and the privacy level. This phenomenon is orthogonal to statistical heterogeneity: even under homogeneous data, local privacy injects randomness that acts as artificial heterogeneity, which adversaries can exploit. I will conclude by discussing two structural responses to these limits: weakening the trust model through shared randomness, and relaxing full collaboration through personalization.
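The coupling between privacy and robustness can be illustrated with a small simulation. The sketch below is only a toy illustration of the qualitative phenomenon, not the construction from the papers: all parameter values (number of participants, corruption fraction, noise scale) are arbitrary assumptions, and Gaussian perturbation stands in for a generic local privacy mechanism. With homogeneous data and no privacy noise, a coordinate-wise median aggregator recovers the true gradient exactly despite the adversaries; once privacy noise spreads the honest reports out, adversaries hiding inside that spread can shift the aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)

n, f, d = 100, 20, 10       # participants, adversaries, dimension (illustrative)
true_grad = np.ones(d)      # homogeneous data: every honest gradient is identical
eps = 0.5                   # nominal local privacy budget (illustrative)
sigma = 1.0 / eps           # noise scale growing as privacy tightens (assumption)

# Honest reports, without and with local privacy noise.
honest_clean = np.tile(true_grad, (n - f, 1))
honest_priv = honest_clean + rng.normal(0.0, sigma, size=(n - f, d))

# Adversaries hide inside the privacy noise: each reports the true gradient
# shifted by sigma in every coordinate, plausible among noisy honest reports.
byz = np.tile(true_grad + sigma, (f, 1))

def cwise_median(points):
    """Coordinate-wise median, a standard robust aggregator."""
    return np.median(points, axis=0)

agg_clean = cwise_median(np.vstack([honest_clean, byz]))
agg_priv = cwise_median(np.vstack([honest_priv, byz]))

err_clean = np.linalg.norm(agg_clean - true_grad)
err_priv = np.linalg.norm(agg_priv - true_grad)
print(f"aggregation error without privacy noise: {err_clean:.3f}")
print(f"aggregation error with privacy noise:    {err_priv:.3f}")
```

Without noise the honest majority pins every coordinate's median to the true value, so the error is zero; with noise the same attack drags the median toward the adversaries, and the gap widens as the noise scale (i.e., the privacy level) grows.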
This talk is based on the papers:
[AGS ICML'25] Towards Trustworthy Federated Learning with Untrusted Participants
[AGGPS ICML'23] On the Privacy-Robustness-Utility Trilemma in Distributed Learning