
Utility navigation

  • Calendar
  • Contact
  • Login
  • MAKE A GIFT
Berkeley University of California
Home

Main navigation

  • Programs & Events
    • Research Programs
    • Workshops & Symposia
    • Public Lectures
    • Research Pods
    • Internal Program Activities
    • Algorithms, Society, and the Law
  • Participate
    • Apply to Participate
    • Propose a Program
    • Postdoctoral Research Fellowships
    • Law and Society Fellowships
    • Science Communicator in Residence Program
    • Circles
    • Breakthroughs Workshops and Goldwasser Exploratory Workshops
  • People
    • Scientific Leadership
    • Staff
    • Current Long-Term Visitors
    • Research Fellows
    • Postdoctoral Researchers
    • Scientific Advisory Board
    • Governance Board
    • Affiliated Faculty
    • Science Communicators in Residence
    • Law and Society Fellows
    • Chancellor's Professors
  • News & Videos
    • News
    • Videos
  • Support for the Institute
    • Annual Fund
    • All Funders
    • Institutional Partnerships
  • For Visitors
    • Visitor Guide
    • Plan Your Visit
    • Location & Directions
    • Accessibility
    • Building Access
    • IT Guide
  • About

Results 51–60 of 23,714

People

  • Shivaji Sondhi (Princeton University)
  • Dominik Hahn (University of Oxford)
  • Vedika Khemani
  • Ehud Altman (UC Berkeley)
  • Umesh Vazirani (Simons Institute, UC Berkeley)
Workshop Talk | Mar. 20, 2026

Personalized Federated Training of Diffusion Models with Privacy Guarantees

Diffusion models produce high-fidelity samples and have recently become the de facto approach for synthetic image generation. However, prior work shows that these models are strongly vulnerable to privacy attacks, including reconstruction and membership inference (e.g., Carlini et al.), which makes adoption difficult in sensitive domains such as healthcare. Unfortunately, existing approaches that apply differential privacy during training often fail to preserve the high fidelity that makes diffusion models effective. In this talk, I will present a new approach for training diffusion models in the federated setting, where clients hold non-IID data and seek formal privacy guarantees. The key idea in our approach is personalization, which helps alleviate the tension between privacy and utility in federated learning. Our method exploits the coarse-to-fine refinement structure that characterizes diffusion models: a shared diffusion model learns the coarse structure common across clients, while client-specific models perform the finer refinements that encode client-level information. This design lets clients benefit from collaboration while preventing the shared model from reproducing any individual client's data, since the shared model observes only noisy, privatized versions of that data. The method provides formal local differential privacy guarantees for each client while empirically preserving the high fidelity of diffusion models, allowing each client to release its personalized model publicly without compromising the privacy of other clients. We also show, in a toy Gaussian mixture model, that collaboration in this framework improves sample quality relative to private non-collaborative training.
Extensive experiments on CIFAR-10, Colorized MNIST, and CelebA support these results: the framework generates high-fidelity samples, improves performance on minority and underrepresented classes, and maintains strong protection against membership inference, memorization, and reconstruction attacks.
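A central ingredient above is that the shared model never sees raw records, only locally privatized ones. As a minimal sketch of that idea (assuming a Gaussian local-DP mechanism with illustrative clipping and noise parameters; this is not the paper's training procedure, and the function names are hypothetical):

```python
import numpy as np

def privatize(x, clip_norm=1.0, sigma=2.0, rng=None):
    """Locally privatize one client record with the Gaussian mechanism:
    clip its L2 norm to clip_norm, then add isotropic Gaussian noise
    whose scale is proportional to that sensitivity bound."""
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(x)
    x_clipped = x * min(1.0, clip_norm / max(norm, 1e-12))
    return x_clipped + rng.normal(0.0, sigma * clip_norm, size=x.shape)

# Each client privatizes its records before anything reaches the shared model.
client_data = np.random.default_rng(0).normal(size=(16, 8))  # 16 records, dim 8
released = np.array([privatize(x, rng=i) for i, x in enumerate(client_data)])
```

Calibrating `sigma` to a target (epsilon, delta) local-DP guarantee is omitted here; the point is only that the shared model's training inputs are the `released` array, never `client_data` itself.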

The talk is based on joint work with Bingqing Jiang, A F M Mahfuzul Kabir, Weitong Zhang, Difan Zou, and Lingxiao Wang, and will appear at CVPR 2026.

Workshop Talk | Mar. 20, 2026

Private Geometric Median

In this talk, I will discuss differentially private algorithms for computing the geometric median, a basic and robust estimation problem. Standard private optimization methods, such as DP gradient descent, require an a priori bound on a ball of radius R containing the data, and their error scales linearly with this worst-case radius. For the geometric median, this can be overly pessimistic: a small number of outliers may make R very large even when most datapoints lie in a much smaller region. I will show how to go beyond this worst-case dependence by designing private algorithms whose error depends instead on the effective diameter of most of the data. 
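For context, the non-private geometric median itself is classically computed by the Weiszfeld iteration. The sketch below is that standard baseline only, illustrating the estimator's robustness to the outliers the abstract mentions; it is not the private, effective-diameter-dependent algorithm of the talk:

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration for the (non-private) geometric median: the point
    minimizing the sum of Euclidean distances to the data points."""
    z = points.mean(axis=0)                  # initialize at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - z, axis=1)
        d = np.maximum(d, eps)               # guard against division by zero
        w = 1.0 / d
        z = (points * w[:, None]).sum(axis=0) / w.sum()
    return z

# One far outlier barely moves the geometric median, unlike the mean:
pts = np.vstack([np.zeros((9, 2)), [[1000.0, 0.0]]])
med = geometric_median(pts)  # stays near the origin; the mean is at (100, 0)
```

This is exactly the phenomenon behind the abstract's point: a single outlier can inflate the worst-case radius R to 1000 here, while the effective diameter of most of the data is 0.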

 

Workshop Talk | Mar. 20, 2026

Training generative models from locally privatized data via entropic optimal transport

Local differential privacy is a powerful method for privacy-preserving data collection. In this work, we develop a framework for training Generative Adversarial Networks (GANs) on locally privatized data. We show that entropic regularization of optimal transport, a popular regularization technique often leveraged in the literature for its computational benefits, enables the generator to learn the raw (unprivatized) data distribution even though it only has access to privatized samples. We prove that, at the same time, this leads to fast statistical convergence at the parametric rate. Entropic regularization of optimal transport thus uniquely enables the mitigation of both the effects of privatization noise and the curse of dimensionality in statistical convergence. We provide experimental evidence supporting the efficacy of our framework in practice.
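The object at the center of this abstract, entropic-regularized optimal transport, is computed in practice by Sinkhorn iterations. As a rough illustration (not the paper's training framework), the sketch below evaluates the Sinkhorn cost between generator samples and samples privatized with an illustrative Laplace mechanism; all names and noise parameters are assumptions:

```python
import numpy as np

def sinkhorn_cost(a, b, C, reg=0.1, iters=200):
    """Entropic-regularized OT via Sinkhorn iterations: alternately rescale
    the Gibbs kernel K = exp(-C/reg) until the plan P matches the marginals
    a and b, then return the transport cost <P, C>."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]          # entropic transport plan
    return float((P * C).sum())

rng = np.random.default_rng(0)
gen = rng.normal(size=(32, 2))                        # generator samples
raw = rng.normal(size=(32, 2))                        # raw client data
priv = raw + rng.laplace(scale=0.5, size=(32, 2))     # locally privatized release
C = ((gen[:, None, :] - priv[None, :, :]) ** 2).sum(-1)
C /= C.max()                                          # normalize for stability
a = np.full(32, 1 / 32)
b = np.full(32, 1 / 32)
cost = sinkhorn_cost(a, b, C)
```

In a GAN-style loop, a cost like this (between generated and privatized samples) would serve as the generator's training signal; the paper's result is that the entropic term lets this signal target the raw data distribution despite the noise.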

Home
The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

Footer

  • Programs & Events
  • Participate
  • Workshops & Symposia
  • Contact Us
  • Calendar
  • Accessibility

Footer social media

  • Twitter
  • Facebook
  • Youtube
© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.