
Berkeley University of California
Home

Main navigation

  • Programs & Events
    • Research Programs
    • Workshops & Symposia
    • Public Lectures
    • Research Pods
    • Internal Program Activities
    • Algorithms, Society, and the Law
  • Participate
    • Apply to Participate
    • Propose a Program
    • Postdoctoral Research Fellowships
    • Law and Society Fellowships
    • Science Communicator in Residence Program
    • Circles
    • Breakthroughs Workshops and Goldwasser Exploratory Workshops
  • People
    • Scientific Leadership
    • Staff
    • Current Long-Term Visitors
    • Research Fellows
    • Postdoctoral Researchers
    • Scientific Advisory Board
    • Governance Board
    • Affiliated Faculty
    • Science Communicators in Residence
    • Law and Society Fellows
    • Chancellor's Professors
  • News & Videos
    • News
    • Videos
  • Support for the Institute
    • Annual Fund
    • All Funders
    • Institutional Partnerships
  • For Visitors
    • Visitor Guide
    • Plan Your Visit
    • Location & Directions
    • Accessibility
    • Building Access
    • IT Guide
  • About

Results 11–20 of 23,765

People

Steven Luo

Governance at the Technological Frontier: Translating Research into Policy for AI Oversight



Thursday, April 30, 2026
10:00 a.m. – 5:00 p.m.

Calvin Lab Auditorium & livestream


Registration is required to attend in person. Please fill out a separate registration for each attendee.

Questions? Contact Simons Events at simonsevents@berkeley.edu


Video
|
Apr. 10, 2026
Talk by Elad Hazan (Princeton University)
Workshop
|
April 13, 2026, 9:00 am - April 17, 2026, 5:00 pm
Agency in Collaborative Learning

This workshop will consider developing and facilitating collaborative learning systems to reflect desired social and economic principles. Analyzing and addressing such concerns is paramount for the ongoing success of collaborative learning and for the...

Workshop Talk
|
Apr. 13, 2026

Efficient Retrieval of Influential LLM Training Examples

Attributing LLM outputs to the training examples that causally influence their behavior can give us visibility into LLMs' opaque reasoning and help us understand subtle persona changes. Unfortunately, training data attribution algorithms that are both accurate and scalable have remained elusive. I argue for separately studying an Estimation Problem (accurately estimating the causal effect of a training example) and a Retrieval Problem (efficiently finding the highest-scoring training examples). I then present a generic retrieval method for influential sequences that can be paired with a wide range of influence estimators (including EKFAC) and for which one can obtain high confidence about recall. I discuss how causal training data attribution can be used as a tool to assure LLM alignment.
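The estimation/retrieval split described in this abstract can be pictured as a generic two-stage pattern; the sketch below is an illustrative stand-in (all function names and scoring heuristics are invented toys, not the speaker's actual method): a cheap proxy shortlists candidates, and an expensive estimator re-scores only the shortlist.

```python
# Hypothetical sketch of separating the Retrieval Problem from the
# Estimation Problem. Both scorers are toy stand-ins (assumptions),
# not real influence estimators.

def cheap_proxy_score(example: str) -> int:
    # Fast, approximate relevance signal (in practice this might be an
    # embedding similarity); here: vocabulary size of the example.
    return len(set(example.split()))

def expensive_influence(example: str) -> int:
    # Accurate but slow causal-influence estimate (in practice, an
    # EKFAC-style estimator); here: a deterministic toy score.
    return len(example)

def retrieve_influential(corpus, k=2, shortlist_size=4):
    # Retrieval Problem: cheaply shortlist likely-influential examples.
    shortlist = sorted(corpus, key=cheap_proxy_score, reverse=True)[:shortlist_size]
    # Estimation Problem: carefully score only the shortlist.
    return sorted(shortlist, key=expensive_influence, reverse=True)[:k]

corpus = ["a b c d", "a a a", "x y z w v", "p q", "m n o"]
top = retrieve_influential(corpus, k=2)
```

The design point is that the expensive estimator is only ever applied to the shortlist, so its cost scales with `shortlist_size` rather than with the corpus.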

Workshop Talk
|
Apr. 13, 2026

The Agentic Data Science Multiverse

Empirical conclusions depend not only on data but on analytic decisions made throughout the research process. Many-analyst studies have quantified this: independent teams testing the same hypothesis on the same dataset often reach conflicting conclusions. But such studies require costly coordination and are rarely conducted. We show that fully autonomous AI analysts built on large language models (LLMs) can cheaply and at scale replicate this structured analytic diversity. In our framework, each AI analyst executes a complete analysis pipeline on a fixed dataset and hypothesis, while a separate AI auditor screens runs for methodological validity. Across three datasets, AI-generated analyses exhibit substantial dispersion in effect sizes, p-values, and conclusions, driven by systematic differences in preprocessing, model specification, and inference across LLMs and personas. Critically, outcomes are steerable: changing the analyst persona or model shifts the distribution of results even among valid analyses.
These findings highlight a central challenge for AI-automated empirical science: when defensible analyses are cheap, evidence becomes abundant and vulnerable to selective reporting. But the same capability suggests a solution: treating results as distributions makes analytic uncertainty visible, and deploying AI analysts on a fixed specification can reveal disagreement from underspecified choices. We therefore argue for new transparency norms: multiverse-style reporting and prompt disclosure, alongside code and data.

Joint work with Martin Bertran and Riccardo Fogliato
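As a toy illustration of the multiverse idea (not the authors' actual LLM-based pipeline; the analyst choices below are invented stand-ins for LLM-driven analytic decisions), one can run many defensible analysis pipelines on a fixed dataset and report the resulting distribution of effect estimates rather than a single number:

```python
import random
import statistics

# Toy multiverse: several "analysts," each a different but defensible
# preprocessing choice, all estimating the same effect on fixed data.
random.seed(0)
data = [random.gauss(0.3, 1.0) for _ in range(200)]

def analyst(data, trim_frac=0.0, min_abs=0.0):
    # Analytic degrees of freedom: symmetric outlier trimming and a
    # minimum-magnitude filter, standing in for preprocessing choices.
    xs = sorted(data)
    k = int(len(xs) * trim_frac)
    kept = xs[k:len(xs) - k] if k else xs
    kept = [x for x in kept if abs(x) >= min_abs]
    return statistics.mean(kept)  # the reported "effect size"

# The multiverse: one estimate per combination of analytic choices.
multiverse = [analyst(data, trim_frac=t, min_abs=m)
              for t in (0.0, 0.05, 0.10)
              for m in (0.0, 0.1)]

# Multiverse-style reporting: a distribution, not a point estimate.
summary = (min(multiverse), statistics.median(multiverse), max(multiverse))
```

Even this tiny multiverse yields a spread of estimates from equally defensible pipelines, which is the dispersion the abstract argues should be made visible.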

Workshop Talk
|
Apr. 9, 2026

Talk By

Abstract not available.

Workshop Talk
|
Apr. 13, 2026

Power and Limitations of Aggregation in Compound AI Systems

When designing compound AI systems, a common approach is to query multiple copies of the same model and aggregate the responses to produce a synthesized output. Given the homogeneity of these models, this raises the question of whether aggregation unlocks access to a greater set of outputs than querying a single model. In this talk, we investigate the power and limitations of aggregation within a stylized principal-agent framework. This framework models how the system designer can partially steer each agent's output through its reward function specification, but still faces limitations due to prompt engineering ability and model capabilities. Our analysis uncovers three natural mechanisms (feasibility expansion, support expansion, and binding set contraction) through which aggregation expands the set of outputs that are elicitable by the system designer. We prove that any aggregation operation must implement one of these mechanisms in order to be elicitability-expanding, and that strengthened versions of these mechanisms provide necessary and sufficient conditions that fully characterize elicitability-expansion. Finally, we provide an empirical illustration of our findings for LLMs deployed in a toy reference-generation task. Altogether, our results take a step towards characterizing when compound AI systems can overcome limitations in model capabilities and in prompt engineering.

Based on joint work with Nivasini Ananthakrishnan.
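A toy illustration of one mechanism named in this abstract, feasibility expansion (the stochastic agent below is an invented stand-in, not the paper's principal-agent framework): aggregating several queries of the same agent can yield outputs that no single query can produce.

```python
import random

# Toy agent: any single query returns only 0.0 or 1.0.
random.seed(1)

def agent() -> float:
    return 1.0 if random.random() < 0.5 else 0.0

def mean_aggregate(n: int = 10) -> float:
    # Mean aggregation over homogeneous copies: values like 0.5
    # become feasible even though no single agent emits them.
    return sum(agent() for _ in range(n)) / n

single = agent()           # always in {0.0, 1.0}
pooled = mean_aggregate()  # can lie strictly between 0 and 1
```

Here aggregation enlarges the feasible output set from {0, 1} to the grid of sample means, which is the flavor of expansion the formal mechanisms characterize.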

Workshop Talk
|
Apr. 9, 2026

Talk By

Abstract not available.

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.
