Statistical heterogeneity of data in FL has motivated the design of personalized learning, where individual (personalized) models are trained through collaboration. We build on a statistical framework to propose adaptive methods, called ADEPT, that balance local information and collaboration (a toy sketch of this trade-off appears below). Through this lens, we examine personalized unsupervised learning tasks, including diffusion-based generative models. We also develop a different methodology for personalized diffusion models, called SPIRE, which we show arises from a Gaussian mixture model of heterogeneity. SPIRE also allows lightweight adaptation for new users who did not participate in the collaboration, directly supporting privacy through data minimization. We finally turn to online learning, where we first address privacy for multi-armed bandit problems. We then present an instantiation of personalized online learning through multi-agent multi-armed bandit problems, for which we give a complete characterization of the regret of heterogeneous stochastic linear bandits.
Parts of this work are joint with Kaan Ozkara, Ruida Zhou, Bruce Huang and Antonious Girgis.
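The abstract does not spell out the ADEPT or SPIRE constructions, so the following is only a toy sketch, assuming a standard hierarchical Gaussian model of heterogeneity (my assumption, not the talk's method): each user's personalized estimate shrinks its local mean toward a collaboratively estimated global mean, with a weight that balances local sample size against cross-user heterogeneity.

```python
import numpy as np

# Toy illustration (not the ADEPT/SPIRE methods): user i's data x ~ N(theta_i, sigma^2),
# and the personalized parameters theta_i ~ N(mu, tau^2) are drawn around a shared
# global mean mu that collaboration estimates. The posterior-mean personalized
# estimate is a convex combination of the local average and the global mean.

def personalized_estimate(local_data, global_mean, sigma2, tau2):
    n = len(local_data)
    local_mean = np.mean(local_data)
    # Weight on local information grows with n and with heterogeneity tau2.
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
    return w * local_mean + (1.0 - w) * global_mean

rng = np.random.default_rng(0)
global_mean, tau2, sigma2 = 0.0, 1.0, 4.0
theta_i = rng.normal(global_mean, np.sqrt(tau2))            # user's true parameter
local_data = rng.normal(theta_i, np.sqrt(sigma2), size=5)   # only a few local samples
print(personalized_estimate(local_data, global_mean, sigma2, tau2))
```

With few local samples or low heterogeneity the estimate leans on the collaborative global mean; with abundant local data it leans on the local mean, which is the balance the abstract refers to.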
Aligning AI systems with human values remains a fundamental challenge, but does our inability to create perfectly aligned models preclude obtaining the benefits of alignment? I will present a strategic setting where a human user interacts with multiple differently misaligned AI agents, none of which are individually well-aligned. Nonetheless, when the user's utility lies approximately within the convex hull of the agents' utilities (a condition that becomes easier to satisfy as model diversity increases), strategic competition can yield outcomes comparable to interacting with a perfectly aligned model. I will then move to a setting with multiple heterogeneous users and discuss the role of model personalization in emergent pluralistic alignment.
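As a rough numerical illustration of the convex-hull condition, assuming utilities over a finite set of outcomes can be written as vectors (the function below is hypothetical and not from the talk), one can check how far a user's utility vector is from the convex hull of the agents' utility vectors:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical check of the condition in the abstract: is the user's utility
# vector u (approximately) a convex combination of the agents' utility vectors?

def distance_to_convex_hull(u, V):
    """Min Euclidean distance from u to the convex hull of the rows of V."""
    k = V.shape[0]
    objective = lambda w: np.sum((V.T @ w - u) ** 2)
    constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * k
    w0 = np.full(k, 1.0 / k)
    res = minimize(objective, w0, bounds=bounds, constraints=constraints)
    return float(np.sqrt(res.fun))

u = np.array([0.5, 0.5])                 # user's utility over two outcomes
V = np.array([[1.0, 0.0], [0.0, 1.0]])   # two differently misaligned agents
print(distance_to_convex_hull(u, V))     # ~0: neither agent matches u, but a mix does
```

The example mirrors the abstract's point: each individual agent is misaligned with the user, yet the user's utility sits inside the agents' convex hull, and adding more diverse agents only enlarges that hull.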
In this work, we identify a set of side-channels in our Confidential Federated Compute platform that a hypothetical insider could exploit to circumvent differential privacy (DP) guarantees. We show how DP can mitigate two of these side-channels; one of the mitigations has been implemented in our open-source library.
When and how can we guarantee that the conclusions arrived at by a complicated and expensive data analysis are correct? A sequence of recent works explores the possibility of constructing interactive proof systems that can verify the conclusions using less data and computation than would be needed to replicate the analysis. I will survey this line of work, highlighting positive and negative results.
Based on joint works with Tal Herman.
LLM-based agents are inherently probabilistic and ill-suited for security-critical tasks, especially in applications handling sensitive data where privacy risks often arise after access is granted. We present AgentCrypt, a framework that prioritizes privacy over correctness by addressing post-access leakage through tool calls, memory, and derived outputs. AgentCrypt introduces a three-tier architecture for fine-grained, privacy-preserving multi-agent workflows and provides formal security guarantees for tagged data. It integrates seamlessly with existing platforms, demonstrated through implementations with LangGraph and Google ADK, while remaining platform-agnostic.
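The abstract does not describe AgentCrypt's tagging mechanism, so the following is only a generic taint-propagation sketch of what guarantees for tagged data could look like; the Tagged, derive, and release names are hypothetical and not part of AgentCrypt:

```python
from dataclasses import dataclass, field

# Generic taint-propagation sketch (assumed here, not the AgentCrypt API):
# values carry privacy tags, any output derived from tagged inputs inherits
# the union of their tags, and releases to tool calls, memory, or users are
# checked against the sink's clearance before data leaves the workflow.

@dataclass(frozen=True)
class Tagged:
    value: str
    tags: frozenset = field(default_factory=frozenset)

def derive(fn, *inputs: Tagged) -> Tagged:
    """Apply fn to the raw values; the result inherits all input tags."""
    out = fn(*(i.value for i in inputs))
    tags = frozenset().union(*(i.tags for i in inputs))
    return Tagged(out, tags)

def release(x: Tagged, allowed: frozenset) -> str:
    if not x.tags <= allowed:
        raise PermissionError(f"blocked: requires clearance for {set(x.tags - allowed)}")
    return x.value

ssn = Tagged("123-45-6789", frozenset({"pii"}))
note = derive(lambda s: f"last4={s[-4:]}", ssn)    # derived output keeps the tag
print(release(note, allowed=frozenset({"pii"})))   # allowed for a pii-cleared sink
# release(note, allowed=frozenset())               # would raise PermissionError
```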
An end-to-end encrypted application needs a mechanism for backing up secret keys. Existing deployed systems create a single point of privacy failure: by compromising one secure hardware device, an attacker can recover many users’ secrets. In this talk, I will describe two architectures for encrypted backups that split secrets across different system components. Both architectures are motivated by deployment constraints. First, I will present one system that splits secrets across different types of enclaves run by different cloud providers (SVR3, OSDI’24). Then, I will discuss another system that splits secrets across application clients and offloads work, but not secrets, to the application server (Chorus, IEEE S&P’26). This talk is based on joint work with Graeme Connell, Vivian Fang, Allison Li, Raluca Ada Popa, Deevashwer Rathee, and Rolfe Schmidt.
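Neither SVR3 nor Chorus is specified in the abstract beyond splitting secrets across system components, so as a minimal sketch of the underlying idea (my illustration, not the deployed protocols), a 2-of-2 XOR split shows why compromising a single component reveals nothing about the backed-up key:

```python
import secrets

# Toy 2-of-2 XOR secret split: each share alone is uniformly random and
# independent of the key, so an attacker who compromises only one component
# (e.g. one enclave or one cloud provider) learns nothing; both shares are
# needed to reconstruct the backup key.

def split_secret(key: bytes) -> tuple[bytes, bytes]:
    share_a = secrets.token_bytes(len(key))                # uniformly random share
    share_b = bytes(a ^ k for a, k in zip(share_a, key))   # share_a XOR key
    return share_a, share_b

def recover_secret(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)
a, b = split_secret(key)         # e.g. store one share per component
assert recover_secret(a, b) == key
```

The deployed systems in the talk split secrets across enclaves from different cloud providers (SVR3) or between application clients and servers (Chorus), with additional machinery this toy example omits.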