The Distributed Aggregation Protocol (DAP) is a standard, being developed at the Internet Engineering Task Force, for privately computing statistical aggregates over measurements using multi-party computation. In this talk, we'll cover what DAP is, how the network protocol layer interacts with the cryptography layer, some unexpected use cases that have emerged, as well as challenges in implementing and deploying this technology. I submitted this abstract at the request of Henry Corrigan-Gibbs.
Secure aggregation allows an untrusted server to compute aggregate statistics over large populations of users without ever learning individual-level data. State-of-the-art secure aggregation protocols allow the vast majority of clients to send a single message, either by splitting the computation between two or more servers or by outsourcing trust to a small committee of clients. This talk will cover the Willow secure aggregation protocol (Crypto 2025), its recent improvement WillowFold (ePrint 2026/264), and practical considerations when deploying secure aggregation at scale.
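The core trick behind server-split designs like the ones above can be illustrated with plain additive secret sharing. This is a minimal sketch, not the Willow or DAP protocol: each client splits its measurement into two random shares, sends one to each server, and only the combined partial sums reveal anything (the modulus and share format here are illustrative choices).

```python
import secrets

MODULUS = 2**64  # illustrative share space; real protocols fix this per aggregation

def share(value: int) -> tuple[int, int]:
    """Split a measurement into two additive shares, one per server."""
    r = secrets.randbelow(MODULUS)
    return r, (value - r) % MODULUS

# Each client sends one share to each of the two servers.
measurements = [3, 7, 1, 9]
shares = [share(m) for m in measurements]

# Each server sums only the shares it received; each share alone is
# uniformly random, so neither server learns any individual input.
sum_a = sum(s[0] for s in shares) % MODULUS
sum_b = sum(s[1] for s in shares) % MODULUS

# Combining the two partial sums reveals only the aggregate.
aggregate = (sum_a + sum_b) % MODULUS
assert aggregate == sum(measurements)
```

Privacy here rests entirely on the two servers not colluding, which is exactly the deployment assumption that committee-based and client-decentralized designs try to relax.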
Large-scale online services increasingly rely on aggregating sensitive user data, for example to support analytics or federated learning. Private aggregation enables such computation without revealing any individual client’s input, but many established designs rely on distributing trust among a small set of non-colluding servers or auxiliary infrastructure, which complicates deployment. In this talk, I present a line of work that rethinks private aggregation through client-decentralized trust. The key insight is that the large population of participating clients can itself be leveraged as a resource to decentralize trust, enabling aggregation with a single untrusted server. This design choice, however, introduces a central challenge: how to keep the clients lightweight enough for practical deployment on resource-constrained devices. I will describe two systems that embody this approach and address this challenge. Flamingo is a fast multi-round private aggregation system tailored to the federated learning setting. Armadillo extends this line of work with robustness against disruptive clients.
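One classic way to use the client population itself as the trust anchor is pairwise masking: every pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the server's sum. The sketch below is a toy illustration of that cancellation only, not the Flamingo or Armadillo protocol (those additionally handle dropouts, malicious clients, and key agreement); the seeded RNG stands in for a pairwise key exchange.

```python
import random

def masked_inputs(values: list[int], seed: int = 0) -> list[int]:
    """Each pair of clients (i, j) shares a random mask: client i adds it,
    client j subtracts it, so all masks cancel in the server-side sum."""
    rng = random.Random(seed)  # stand-in for pairwise agreed randomness
    masked = list(values)
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(1 << 32)
            masked[i] += m
            masked[j] -= m
    return masked

values = [5, 2, 8]
server_view = masked_inputs(values)

# The single server sees only masked values, yet their sum is the true aggregate.
assert sum(server_view) == sum(values)
```

The lightweight-client challenge mentioned above shows up immediately in this naive version: every client touches a mask for every other client, which is why practical systems replace the all-pairs structure with something cheaper.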
Private-aggregation systems allow a company to collect valuable telemetry data from its users without ever having to collect sensitive disaggregated user data in the clear. The past few years have seen a flurry of activity around private aggregation: practical constructions, draft IETF standards, and proof-of-concept deployments by Apple, Google, and Mozilla. In spite of this progress, few of the apps we use today actually collect telemetry data using private aggregation. This talk will try to answer two questions: Why is the real-world use of private-aggregation systems so limited? And what can we do about it? To do so, we will draw on our experience designing private-aggregation systems and on conversations with engineers at Apple, Cloudflare, Google, Mozilla, and ISRG who have deployed them.
WIP abstract: The recent revolution in advanced data analytics and machine learning has made it possible to extract unprecedented value from user data. However, this comes at the cost of user privacy in many application workflows. In this talk, I will discuss some ideas around building privacy-preserving inference systems via a co-design of systems and cryptography. In the first part of the talk, I will present Bolt (IEEE S&P 2024), a new system for privacy-preserving two-party inference for large language models like BERT using secure multiparty computation (MPC). With our system, a user can safely outsource prediction to a third party without revealing their sensitive data or learning the third party’s proprietary model parameters. In the second part, I will talk about enabling the development of programmable privacy-preserving inference systems. In Rotom (USENIX Security 2026), we develop a compilation framework that autovectorizes tensor programs into optimized homomorphic encryption (HE) programs. Rotom systematically explores a wide range of layout assignments, applies state-of-the-art optimizations, and automatically generates an equivalent, efficient HE program.
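To give a flavor of the MPC machinery behind two-party inference, the sketch below shows a Beaver-triple multiplication, a standard building block for multiplying secret-shared values: neither party sees the other's input, yet together they obtain shares of the product. This is an illustrative textbook construction, not Bolt's actual protocol (which layers many optimizations for transformer workloads on top of such primitives); the field modulus and helper names are our own choices.

```python
import secrets

P = 2**61 - 1  # illustrative prime field for the shares

def share(x: int) -> tuple[int, int]:
    """Additively secret-share x between two parties."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def beaver_mul(x_sh, y_sh, triple):
    """Multiply secret-shared x and y using a preprocessed triple (a, b, c = a*b)."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    # The parties open d = x - a and e = y - b; since a and b are uniformly
    # random, these openings reveal nothing about x or y.
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P
    # Each party computes its share of x*y locally from the public d, e.
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # only one party adds the d*e term
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

# Preprocessing phase: a random multiplication triple, secret-shared.
a, b = secrets.randbelow(P), secrets.randbelow(P)
triple = (share(a), share(b), share(a * b % P))

x, y = 6, 7  # e.g. a model weight and a private activation
z0, z1 = beaver_mul(share(x), share(y), triple)
assert (z0 + z1) % P == x * y
```

Linear layers only need additions on shares, which are free of interaction; it is multiplications like this one (and the non-linear activations built from them) that dominate the communication cost systems like Bolt work to reduce.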