Pan-privacy was proposed by Dwork et al. as an approach to designing a private analytics system that retains its privacy properties in the face of intrusions that expose the system's internal state. Motivated by federated telemetry applications, in this talk we will define local pan-privacy, where privacy should be retained under repeated unannounced intrusions on the local state. We will consider the problem of monitoring the count of an event in a federated system, where event occurrences on a local device should be hidden even from an intruder on that device. We'll show that, under reasonable constraints, the goal of providing information-theoretic differential privacy under intrusion is incompatible with collecting telemetry information. Finally, we'll discuss how this problem can be solved in a scalable way using standard cryptographic primitives. Joint work with Vitaly Feldman, Guy Rothblum and Kunal Talwar.
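For background, the classic locally private way to monitor an event count is randomized response: each device flips its one-bit report with a calibrated probability, and the server debiases the aggregate. This is a minimal sketch of that baseline (the function names and the choice of epsilon are illustrative, not the talk's construction — the talk's point is that this kind of in-the-clear local state does not survive intrusion, motivating the cryptographic approach):

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report `bit` truthfully with probability e^eps / (e^eps + 1), else flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_truth else 1 - bit

def debiased_count(reports: list[int], epsilon: float) -> float:
    """Unbiased estimate of the true count from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    n = len(reports)
    # E[sum] = n*(1-p) + C*(2p-1), so solve for C.
    return (sum(reports) - n * (1 - p)) / (2 * p - 1)
```

Note that an intruder who reads the device's true bit before randomization learns everything; the randomization protects only the report in transit, which is exactly the gap local pan-privacy is about.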
Statistical heterogeneity of data in FL has motivated the design of personalized learning, where individual (personalized) models are trained through collaboration. We build on a statistical framework to propose adaptive methods, called ADEPT, that balance local information and collaboration. Through this lens, we examine personalized unsupervised learning tasks, including diffusion-based generative models. We also develop a different methodology for personalized diffusion models, called SPIRE, which we show arises from a Gaussian mixture model of heterogeneity. This also allows lightweight adaptation for new users who did not participate in the collaboration, directly supporting privacy through data minimization. Finally, we turn to online learning, where we first present privacy for multi-armed bandit problems. We then present an instantiation of personalized online learning through multi-agent multi-armed bandit problems, where we give a complete characterization of the regret of heterogeneous stochastic linear bandits.
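The "balance local information and collaboration" idea can be illustrated with the simplest statistical instance: estimating a user's mean by shrinking the local average toward a collaboratively learned global one, with a weight set by how reliable the local data is. This is a hypothetical toy illustration of the general principle, not the ADEPT algorithm itself; `sigma2` (within-user noise) and `tau2` (across-user heterogeneity) are assumed to be known here:

```python
import statistics

def personalized_mean(local_data: list[float], global_mean: float,
                      sigma2: float, tau2: float) -> float:
    """Shrinkage estimator: weight the local mean by its reliability.

    With n local samples, the local mean has variance sigma2/n, so the
    optimal (Gaussian) weight on local data is tau2 / (tau2 + sigma2/n):
    more local data, or more heterogeneity across users, means trusting
    the local estimate more.
    """
    n = len(local_data)
    lam = tau2 / (tau2 + sigma2 / n)
    return lam * statistics.mean(local_data) + (1 - lam) * global_mean
```

A new user who never joined the collaboration can still be served by plugging their few local samples into the same formula against the published global mean, which is the flavor of lightweight adaptation the abstract mentions.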
Parts of this work are joint with Kaan Ozkara, Ruida Zhou, Bruce Huang and Antonious Girgis.
In this work, we identify a set of side-channels in our Confidential Federated Compute platform that a hypothetical insider could exploit to circumvent differential privacy (DP) guarantees. We show how DP can mitigate two of the side-channels; one of these mitigations has been implemented in our open-source library.
When and how can we guarantee that the conclusions arrived at by a complicated and expensive data analysis are correct? A sequence of recent works explores the possibility of constructing interactive proof systems that can verify the conclusions using less data and computation than would be needed to replicate the analysis. I will survey this line of work, highlighting positive and negative results.
Based on joint works with Tal Herman.
LLM-based agents are inherently probabilistic and ill-suited for security-critical tasks, especially in applications handling sensitive data where privacy risks often arise after access is granted. We present AgentCrypt, a framework that prioritizes privacy over correctness by addressing post-access leakage through tool calls, memory, and derived outputs. AgentCrypt introduces a three-tier architecture for fine-grained, privacy-preserving multi-agent workflows and provides formal security guarantees for tagged data. It integrates seamlessly with existing platforms, demonstrated through implementations with LangGraph and Google ADK, while remaining platform-agnostic.
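The notion of formal guarantees for tagged data can be pictured as a gate between tagged values and the tools an agent may call: data only flows to a tool whose clearance covers the tag. The sketch below is a hypothetical illustration of that pattern (the `Tagged` type, sensitivity levels, and `release` function are invented for exposition; they are not AgentCrypt's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    """A value carrying a sensitivity tag (higher = more sensitive)."""
    value: str
    sensitivity: int

def release(item: Tagged, tool_clearance: int) -> str:
    """Forward tagged data to a tool only if the tool's clearance covers the tag."""
    if item.sensitivity > tool_clearance:
        raise PermissionError("tool is not cleared for this sensitivity level")
    return item.value
```

The point of enforcing this at the framework layer, rather than trusting the agent's reasoning, is exactly the abstract's premise: the probabilistic LLM never gets the chance to leak what the tagging layer withholds.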
An end-to-end encrypted application needs a mechanism for backing up secret keys. Existing deployed systems create a single point of privacy failure: by compromising one secure hardware device, an attacker can recover many users’ secrets. In this talk, I will describe two architectures for encrypted backups that split secrets across different system components. Both architectures are motivated by deployment constraints. First, I will present one system that splits secrets across different types of enclaves run by different cloud providers (SVR3, OSDI’24). Then, I will discuss another system that splits secrets across application clients and offloads work, but not secrets, to the application server (Chorus, IEEE S&P’26). This talk is based on joint work with Graeme Connell, Vivian Fang, Allison Li, Raluca Ada Popa, Deevashwer Rathee, and Rolfe Schmidt.
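The core idea of splitting a secret so that no single component is a point of failure can be shown with two-out-of-two XOR secret sharing: each share alone is uniformly random, and only their combination recovers the key. A minimal sketch (this illustrates the splitting principle only, not the enclave or client/server machinery of SVR3 or Chorus):

```python
import os

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split `secret` into two shares; either share alone is uniformly random."""
    mask = os.urandom(len(secret))
    other = bytes(a ^ b for a, b in zip(secret, mask))
    return mask, other

def recover_secret(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to reconstruct the secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

In a deployment, the two shares would live in different trust domains (e.g. different enclave types, or client and server), so an attacker must compromise both to recover any user's key.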
The Distributed Aggregation Protocol (DAP) is a standard, being developed at the Internet Engineering Task Force, for privately computing statistical aggregations over measurements using multi-party computation. In this talk, we'll cover what DAP is, how the network protocol layer interacts with the cryptography layer, some unexpected use cases that have emerged, as well as challenges in implementing and deploying this technology. I submitted this abstract at the request of Henry Corrigan-Gibbs.
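At its cryptographic core, DAP-style aggregation has each client additively secret-share its measurement between two non-colluding aggregators, each of which sums the shares it receives; only the combined sums reveal the total. This is a minimal sketch of that additive-sharing idea (the field modulus and function names are illustrative; real DAP uses VDAFs with verifiability on top):

```python
import random

MODULUS = 2**31 - 1  # a prime field; an illustrative parameter choice

def share_measurement(value: int) -> tuple[int, int]:
    """Client splits its measurement into two additive shares mod p.

    Each share alone is uniformly random and reveals nothing.
    """
    r = random.randrange(MODULUS)
    return r, (value - r) % MODULUS

def aggregate(shares: list[int]) -> int:
    """Each aggregator independently sums the shares it received."""
    return sum(shares) % MODULUS

def unshard(agg1: int, agg2: int) -> int:
    """Combining the two aggregate shares yields only the total."""
    return (agg1 + agg2) % MODULUS
```

Neither aggregator ever sees an individual measurement in the clear, which is the property the network-protocol layer is built around.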
Secure Aggregation allows an untrusted server to compute aggregate statistics over large populations of users, without ever learning individual-level data. State-of-the-art secure aggregation protocols allow the vast majority of clients to send a single message, either by splitting the computation between two or more servers, or by outsourcing trust to a small committee of clients. This talk will cover the Willow secure aggregation protocol (Crypto 2025), its recent improvement WillowFold (ePrint 2026/264), and practical considerations when deploying secure aggregation at scale.
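The classic mechanism behind secure aggregation is pairwise masking: every pair of clients agrees on a random mask that one adds and the other subtracts, so each submitted value looks random while the masks cancel in the server's sum. A minimal sketch of that cancellation property (illustrative only — Willow's single-message protocols avoid exactly this all-pairs coordination, and real deployments handle dropouts and derive masks from key agreement):

```python
import itertools
import random

def masked_inputs(values: list[int], modulus: int, rng: random.Random) -> list[int]:
    """Mask each client's value with pairwise blinding terms.

    For each pair (i, j) with i < j, a shared random mask is added to
    client i's value and subtracted from client j's, so individual
    submissions are blinded but all masks cancel in the sum.
    """
    masked = list(values)
    for i, j in itertools.combinations(range(len(values)), 2):
        m = rng.randrange(modulus)
        masked[i] = (masked[i] + m) % modulus
        masked[j] = (masked[j] - m) % modulus
    return masked
```

The quadratic pairwise setup and its fragility under client dropouts are what motivate the server-splitting and committee-based designs the abstract describes.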