Results 61–70 of 23,714

Workshop Talk | Mar. 20, 2026

Local Pan-Privacy for Federated Analytics

Pan-privacy was proposed by Dwork et al. as an approach to designing a private analytics system that retains its privacy properties in the face of intrusions that expose the system's internal state. Motivated by federated telemetry applications, in this talk we will define local pan-privacy, where privacy should be retained under repeated unannounced intrusions on the local state. We will consider the problem of monitoring the count of an event in a federated system, where event occurrences on a local device should be hidden even from an intruder on that device. We will show that under reasonable constraints, the goal of providing information-theoretic differential privacy under intrusion is incompatible with collecting telemetry information. Finally, we will discuss how this problem can be solved in a scalable way using standard cryptographic primitives. Joint work with Vitaly Feldman, Guy Rothblum, and Kunal Talwar.
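
For background (this is not the cryptographic construction from the talk), the classical non-cryptographic way to report an event bit under local differential privacy is randomized response, which an intruder on the device learns nothing extra from, since the device stores only the already-randomized bit. A minimal sketch:

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report a private bit under epsilon-local DP: tell the truth with
    probability e^eps / (e^eps + 1), otherwise flip the bit."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def estimate_count(reports: list[int], epsilon: float) -> float:
    """Debias the sum of noisy reports to estimate the true count of 1s."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    n = len(reports)
    return (sum(reports) - n * (1 - p)) / (2 * p - 1)
```

With many devices the debiased estimate concentrates around the true event count, at the cost of noise that the talk's cryptographic approach aims to avoid.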

Workshop Talk | Mar. 20, 2026

Personalized Federated Diffusion models and online learning with connections to privacy

Statistical heterogeneity of data in federated learning (FL) has motivated the design of personalized learning, where individual (personalized) models are trained through collaboration. We build on a statistical framework to propose adaptive methods, called ADEPT, which balance local information and collaboration. Through this lens, we examine personalized unsupervised learning tasks, including diffusion-based generative models. We also develop a different methodology for personalized diffusion models, called SPIRE, which we show arises from Gaussian mixture model heterogeneity. This also allows lightweight adaptation for new users who did not participate in the collaboration, directly supporting privacy through data minimization. Finally, we focus on online learning, where we first present privacy for multi-armed bandit problems. We then present an instantiation of personalized online learning through multi-agent multi-armed bandit problems, where we give a complete characterization of regret for heterogeneous stochastic linear bandits.

Parts of this work are joint with Kaan Ozkara, Ruida Zhou, Bruce Huang and Antonious Girgis.

Workshop Talk | Mar. 20, 2026

Hardening Confidential Federated Computations against Side-Channel Attacks (Virtual Talk)

In this work, we identify a set of side-channels in our Confidential Federated Compute platform that a hypothetical insider could exploit to circumvent differential privacy (DP) guarantees. We show how DP can mitigate two of the side-channels; one of these mitigations has been implemented in our open-source library.

Workshop Talk | Mar. 20, 2026

Verifiable Data Science

When and how can we guarantee that the conclusions arrived at by a complicated and expensive data analysis are correct? A sequence of recent works explores the possibility of constructing interactive proof systems that can verify the conclusions using less data and computation than would be needed to replicate the analysis. I will survey this line of work, highlighting positive and negative results.

Based on joint works with Tal Herman.

Video | Mar. 20, 2026

Personalized Federated Training of Diffusion Models with Privacy Guarantees

Workshop Talk | Mar. 19, 2026

AgentCrypt: Advancing Privacy and (Secure) Computation in AI Agent Collaboration

LLM-based agents are inherently probabilistic and ill-suited for security-critical tasks, especially in applications handling sensitive data where privacy risks often arise after access is granted. We present AgentCrypt, a framework that prioritizes privacy over correctness by addressing post-access leakage through tool calls, memory, and derived outputs. AgentCrypt introduces a three-tier architecture for fine-grained, privacy-preserving multi-agent workflows and provides formal security guarantees for tagged data. It integrates seamlessly with existing platforms, demonstrated through implementations with LangGraph and Google ADK, while remaining platform-agnostic.

Workshop Talk | Mar. 19, 2026

Talk by

Workshop Talk | Mar. 19, 2026

Splitting Secrets for Encrypted Backups

An end-to-end encrypted application needs a mechanism for backing up secret keys. Existing deployed systems create a single point of privacy failure: by compromising one secure hardware device, an attacker can recover many users’ secrets. In this talk, I will describe two architectures for encrypted backups that split secrets across different system components. Both architectures are motivated by deployment constraints. First, I will present one system that splits secrets across different types of enclaves run by different cloud providers (SVR3, OSDI’24). Then, I will discuss another system that splits secrets across application clients and offloads work, but not secrets, to the application server (Chorus, IEEE S&P’26). This talk is based on joint work with Graeme Connell, Vivian Fang, Allison Li, Raluca Ada Popa, Deevashwer Rathee, and Rolfe Schmidt.
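
The basic primitive behind splitting secrets across components, independent of the SVR3 and Chorus system designs, is n-of-n secret sharing: each share alone is uniformly random, so compromising any single component reveals nothing. An XOR-based sketch (illustrative, not the systems' actual sharing scheme):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(key: bytes, n: int = 2) -> list[bytes]:
    """n-of-n XOR sharing: pick n-1 random shares; the last share is the
    XOR of the key with all of them. Any n-1 shares look uniformly random."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out
```

An attacker must compromise every share-holding component (e.g. each enclave or the client and server together) to recover the backup key.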

Workshop Talk | Mar. 19, 2026

Distributed Aggregation Protocol: A Standard for Privacy-Preserving Aggregation in MPC

The Distributed Aggregation Protocol (DAP) is a standard, being developed at the Internet Engineering Task Force, for privately computing statistical aggregations over measurements in multi-party computation. In this talk, we'll cover what DAP is, how the network protocol layer interacts with the cryptography layer, some unexpected use cases that have emerged, as well as challenges in implementing and deploying this technology. I submitted this abstract at the request of Henry Corrigan-Gibbs.
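
Stripped of DAP's verifiability machinery and actual wire format, the core cryptographic idea is additive secret sharing: each client splits its measurement between two non-colluding aggregators, each of which sees only uniformly random shares. A toy sketch (the modulus and helper names are illustrative, not DAP parameters):

```python
import secrets

MOD = 2**64  # illustrative ring size, not DAP's actual field

def share_measurement(x: int) -> tuple[int, int]:
    """Split a measurement into two additive shares, one per aggregator.
    Each share alone is uniformly random and reveals nothing about x."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def aggregate(shares: list[int]) -> int:
    """Each aggregator locally sums the shares it holds, mod MOD."""
    return sum(shares) % MOD

# Clients split measurements; aggregators combine their partial sums.
measurements = [3, 1, 4, 1, 5]
a_shares, b_shares = zip(*(share_measurement(m) for m in measurements))
total = (aggregate(list(a_shares)) + aggregate(list(b_shares))) % MOD
```

The real protocol layers proofs of input validity on top of this, so a malicious client cannot poison the aggregate with an out-of-range share.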

Workshop Talk | Mar. 19, 2026

Secure Aggregation with Lightweight Committees

Secure Aggregation allows an untrusted server to compute aggregate statistics over large populations of users, without ever learning individual-level data. State-of-the-art secure aggregation protocols allow the vast majority of clients to send a single message, by either splitting the computation between two or more servers, or by outsourcing trust to a small committee of clients. This talk will cover the Willow secure aggregation protocol (Crypto 2025), its recent improvement WillowFold (ePrint 2026/264), and practical considerations when deploying secure aggregation at scale.
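
This is not the Willow protocol itself, but the classic single-server building block it improves on is pairwise masking: each pair of clients agrees on a shared seed, one adds the derived mask and the other subtracts it, so the masks cancel in the server's sum while each individual submission stays hidden. A sketch with fixed illustrative seeds standing in for the key agreement:

```python
import random

def masked_inputs(values, seeds, mod=2**32):
    """Each client i masks its value with a PRG output per other client j:
    +mask if i < j, -mask if i > j. All masks cancel in the total."""
    n = len(values)
    out = []
    for i in range(n):
        masked = values[i]
        for j in range(n):
            if i == j:
                continue
            # Both endpoints of the pair derive the same mask from the seed.
            mask = random.Random(seeds[frozenset((i, j))]).randrange(mod)
            masked = (masked + mask if i < j else masked - mask) % mod
        out.append(masked)
    return out

n = 4
seeds = {frozenset((i, j)): 1000 * i + j
         for i in range(n) for j in range(i + 1, n)}
values = [10, 20, 30, 40]
masked = masked_inputs(values, seeds)
total = sum(masked) % 2**32  # masks cancel pairwise
```

Committee-based protocols like the ones in the talk exist largely to handle what this sketch ignores: dropouts, whose unmatched masks would otherwise corrupt the sum.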

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.