Results 121–130 of 23,736

News | Mar. 18, 2026

Letter from the Director, March 2026

Greetings from Berkeley, where spring break is just around the corner, though we have been enjoying spring-like weather already.

News | Mar. 18, 2026

Alignment Problems in AI Governance

In an event comprising short talks and dialogue, Simons Institute Law and Society fellows Rui-Jie Yew and Greg Demirchyan explored key challenges of alignment in AI governance.

News | Mar. 18, 2026

Reconciling Biological and Social Research in Autism

In a distinguished lecture given as part of the Simons Institute’s recent workshop on Theory of Computing and Healthcare, Holden Thorp (editor in chief, Science) explored how public disagreements in the autism world belie a growing convergence between two views: that autism is a biological condition subject to intervention, and that it is a social difference that requires only acceptance.

News | Mar. 18, 2026

Toward Provably Private Federated Learning

In this talk from the Federated and Collaborative Learning Boot Camp, Daniel Ramage (Google) provided an overview of pioneering work at Google on federated learning, and discussed important open problems in the space.

Workshop Talk | Mar. 17, 2026

Privacy in Practice: Architecting Differential Privacy into Web Advertising Standards

I’ll present the privacy architecture we designed for W3C’s Attribution API, the privacy-preserving ad-measurement standard being developed with participation from all major browsers. The API replaces backend tracking with on-device measurement and differentially private aggregation, grounded in individual differential privacy (IDP), which enables strong per-user guarantees and critical data-dependent optimizations. I’ll describe two systems that make this architecture practical: Cookie Monster, which introduced efficient on-device IDP budgeting and formed the basis of the W3C draft; and Big Bird, which extends it with principled defenses against adversarial privacy-budget depletion and is now being incorporated into the standard. I’ll conclude with open research challenges and why increased engagement from the research community is essential for these standards to deliver real-world privacy.

Workshop Talk | Mar. 17, 2026

Synthetic Data as an Enabler for Learning from Decentralized, Private Data

Today, data sharing is the cornerstone of many modern applications. A common concern in such data sharing pipelines is privacy: organizations are responsible for protecting the privacy of their data, whether it represents user data or enterprise trade secrets. In this talk, we will discuss emerging challenges related to learning large machine learning models from private, federated data. Existing approaches (namely, differentially private federated learning, DP-FL) involve training models on client devices and are difficult to scale to large models. We will explore the feasibility of replacing DP-FL with centralized training over differentially private synthetic data. We will show that finetuning a model on DP synthetic data can perform similarly to DP-FL in downstream model performance, with order(s)-of-magnitude lower communication and computation. We will also demonstrate conditions under which synthetic data is theoretically guaranteed to approach the underlying private data distribution.

Workshop Talk | Mar. 17, 2026

Private Insights into AI Use

Understanding real-world usage is critical for improving Generative AI, but analyzing this data risks exposing sensitive user inputs. While platforms use "privacy-aware" heuristics like PII redaction and clustering to mitigate this, are these protections actually secure? First, we put these claims to the test by introducing CLIOPATRA, the first successful privacy attack against Anthropic's CLIO. We demonstrate how an adversary can insert malicious chats to systematically bypass layered protections and leak sensitive data. Evaluated against synthetic medical chats, CLIOPATRA proves that knowing just basic demographics and a single symptom allows an attacker to extract a target’s full medical history up to 100% of the time—showing that ad-hoc, heuristic mitigations are fundamentally unreliable. If heuristics fail, how can developers safely extract insights? To answer this, we introduce Provably Private Insights (PPI), a novel framework that abandons heuristics in favor of mathematically guaranteed privacy. PPI bridges the gap between raw data and analytics by integrating Trusted Execution Environments (TEEs) for external verifiability, "Data Expert" LLMs operating within secure enclaves, and Differential Privacy (DP) for anonymous aggregation. By walking through PPI’s open-source architecture and its real-world deployment in Google's Android Recorder app, this talk demonstrates the practicality of provably private AI analytics at scale.

Workshop Talk | Mar. 17, 2026

Contextual Privacy in the Agentic Era (Virtual Talk)

We examine how privacy must evolve as AI systems become increasingly agentic and operate across diverse tasks, tools, and information flows. I will highlight two research directions: using reinforcement learning to instill contextual integrity in agents, and using agentic systems to synthesize new privacy-preserving algorithms on the fly. Together, these point toward a scalable view of privacy that is both context-sensitive at deployment time and adaptive at the level of mechanism design.

Workshop Talk | Mar. 17, 2026

VaultGemma: A Differentially Private Gemma Model

We introduce VaultGemma 1B, a 1-billion-parameter model within the Gemma family, fully trained with differential privacy. Pretrained on the identical data mixture used for the Gemma 2 series, VaultGemma 1B represents a significant step forward in privacy-preserving large language models. We openly release this model to the community.

Workshop Talk | Mar. 17, 2026

2026 Is the New 2016, but Make It Privacy: On Federated Memory, Contextual Privacy, and Personalized Agents

As LLMs evolve into persistent personal agents—managing calendars, emails, and health records across sessions—they accumulate rich user memories that enable powerful personalization but create new privacy risks. The recent explosion of tools like OpenClaw, where tens of thousands of always-on AI agents were deployed with full access to users' messages, credentials, and conversation histories, makes these risks concrete and urgent. What should an agent remember, and who should it tell? In this talk, we explore both sides of this question. First, through CIMemories [ICLR 2026], we introduce a compositional benchmark for evaluating whether LLMs respect contextual integrity when drawing on persistent memory. Our evaluation reveals that frontier models exhibit up to 69% attribute-level violations, leaking sensitive information in inappropriate contexts, and that these violations accumulate unpredictably across tasks and runs—exposing fundamental instability in how models reason about context-dependent disclosure. We then ask: can we architect systems that avoid this trade-off entirely? In PPMI, we present a hybrid framework that decomposes tasks between a powerful but untrusted remote LLM and a trusted local model, using Socratic chain-of-thought reasoning and homomorphically encrypted vector search over private data. Our approach, pairing GPT-4o with a local Llama-3.2-1B, outperforms GPT-4o alone on long-context QA—demonstrating that privacy and utility need not be at odds. We conclude by arguing that these failures are not bugs that scale will fix: they reflect a missing notion of contextual norms in model training and architecture. As agents gain persistent memory and autonomy, the line between personalization and surveillance thins—making principled privacy reasoning not just a feature, but a prerequisite for trustworthy AI.

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.