Results 201–210 of 23,737

Research Program | Fall 2026
Spectral Theory Beyond Graphs

People: Harry Zhou

Simons Institute Law and Society Fellowship Application

Personal Information

Academic Information

Application Materials

Questions? Contact Simons Institute Visitor Services at simonsvisitorservices@berkeley.edu.

Recommenders

Provide the names and email addresses of two recommenders. The Simons Institute will contact them via email for a letter of recommendation.

Recommender #1

Recommender #2

Long-Term Visitor Application

Contact Details

Workshop Talk | Feb. 27, 2026

Tight analyses of first-order methods with error feedback, in homogeneous and heterogeneous setups

Communication between agents often constitutes a major bottleneck in distributed learning. One of the most common mitigation strategies is to compress the information exchanged, thereby reducing communication overhead. To counteract the degradation in convergence associated with compressed communication, error feedback schemes, most notably EF and EF21, were introduced. In a series of works, we provide tight analyses of both methods. Specifically, we find the Lyapunov function that yields the best possible convergence rate for each method, with matching lower bounds. This principled approach yields sharp performance guarantees and enables a rigorous, apples-to-apples comparison between EF, EF21, and compressed gradient descent. Our analysis is carried out in a variety of representative settings, which allows for clean theoretical insights and a fair comparison of the underlying mechanisms.
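As a concrete illustration of the error feedback mechanism the abstract refers to (not the talk's analysis), here is a minimal sketch of the classic EF scheme applied to compressed gradient descent with a top-k sparsifier; the function names, step size, and toy quadratic are my own.

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsifier: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_gd(grad, x0, lr=0.1, k=1, steps=200):
    """Error-feedback compressed gradient descent (classic EF scheme):
    compress the corrected update (gradient step plus error memory),
    then feed the compression error back into the next step."""
    x = x0.astype(float).copy()
    e = np.zeros_like(x)          # error memory
    for _ in range(steps):
        g = grad(x)
        c = top_k(lr * g + e, k)  # compress the corrected update
        e = lr * g + e - c        # store what the compressor dropped
        x -= c
    return x

# Toy quadratic f(x) = 0.5 * ||x - b||^2 with gradient x - b:
# despite transmitting only one coordinate per step, EF converges to b.
b = np.array([1.0, -2.0, 3.0])
x_star = ef_gd(lambda x: x - b, np.zeros(3), lr=0.5, k=1, steps=300)
```

Without the error memory `e`, top-1 compression of the raw gradient can stall on the dropped coordinates; feeding the residual back is what restores convergence.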

Workshop Talk | Feb. 27, 2026

Are We Measuring the Right Thing? Distribution Shift Lessons for Federated Learning

Federated and collaborative learning methods proliferate, yet our understanding of when and why they work lags behind empirical results. A central challenge is heterogeneity: how do we characterize it, measure it, and design algorithms that handle it? Drawing on recent work in distribution shift, I argue that the field's treatment of "heterogeneity" as monolithic obscures critical distinctions between interpolation, adaptation, and generalization scenarios—each requiring different theoretical and algorithmic approaches.

I will present a taxonomy of data and algorithmic interventions for distribution shifts, and translate its implications for federated settings: When does model averaging perform safe interpolation vs. risky extrapolation across clients? What is the fundamental tradeoff between personalization and generalization, and are we optimizing for the wrong objective? Do current benchmarks measure worst-case heterogeneity, or just benign shifts? I will close with open problems at the intersection of measurement science and federated learning: how do we design benchmarks with construct validity and adapt evaluation frameworks to match real-world collaborative learning scenarios?
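The question of when model averaging interpolates across clients can be made concrete with a toy FedAvg-style sketch (my own simplification, not the speaker's formulation): with two clients whose quadratic objectives have equal curvature but different optima, averaging lands at the midpoint of the client optima rather than fitting either client.

```python
import numpy as np

def local_steps(x, grad, lr=0.1, steps=5):
    """Run a few local gradient steps on one client's objective."""
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def fed_avg(client_grads, x0, rounds=50, lr=0.1, steps=5):
    """Minimal FedAvg-style loop: each round, every client refines the
    global model locally, then the server averages the local models."""
    x = x0.astype(float).copy()
    for _ in range(rounds):
        x = np.mean([local_steps(x.copy(), g, lr, steps)
                     for g in client_grads], axis=0)
    return x

# Two clients with equal-curvature quadratics centered at (1, 0) and (-1, 2):
# averaging converges to the midpoint (0, 1) -- safe interpolation here,
# but a model that serves neither client's optimum exactly.
g1 = lambda x: x - np.array([1.0, 0.0])
g2 = lambda x: x - np.array([-1.0, 2.0])
x_avg = fed_avg([g1, g2], np.zeros(2))
```

When client optima are far apart or curvatures differ, this same averaging step can become the "risky extrapolation" the abstract warns about.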

Surbhi Goel (University of Pennsylvania)

Workshop Talk | Feb. 26, 2026

Jane Street Estimathon

Please RSVP here: https://docs.google.com/forms/d/e/1FAIpQLSdykNQoztvAZvoCTfkno1FCQvE1EQsHZ-d32-pqpO5bZjPx8w/viewform


Do you know how many computers were connected to the Internet on January 1, 1989? Or how many YouTube videos have more than 1 billion views? What about the bandwidth beneath the Atlantic Ocean?

If you want to solve problems like these, join us at our upcoming Estimathon — a team-based contest that combines trivia, game theory, and mathematical thinking. Participants will be placed in teams and will be tasked with solving 13 estimation problems in just 30 minutes.

After the Estimathon concludes, stick around for food and casual conversation with one of our Streeters to learn more about the firm’s work and opportunities.

Workshop Talk | Feb. 26, 2026

The Statistical Fairness-Accuracy Frontier

Machine learning models must balance accuracy and fairness, but these goals often conflict, particularly when data come from multiple demographic groups. A useful tool for understanding this trade-off is the fairness-accuracy (FA) frontier, which characterizes the set of models that cannot be simultaneously improved in both fairness and accuracy. Prior analyses of the FA frontier provide a full characterization under the assumption of complete knowledge of population distributions, an unrealistic ideal. We study the FA frontier in the finite-sample regime, showing how it deviates from its population counterpart and quantifying the worst-case gap between them. In particular, we derive minimax-optimal estimators that depend on the designer's knowledge of the covariate distribution. For each estimator, we characterize how finite-sample effects asymmetrically impact each group's risk, and identify optimal sample allocation strategies. Our results transform the FA frontier from a theoretical construct into a practical tool for policymakers and practitioners who must often design algorithms with limited data.
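The frontier notion in the abstract can be illustrated with a small sketch; this is an illustration of mine, not the authors' construction. Given candidate models scored by (error, unfairness), the empirical frontier is the set of models that cannot be simultaneously improved in both coordinates.

```python
def pareto_frontier(points):
    """Keep the points not dominated in both coordinates (lower is better
    for both error and unfairness): an empirical fairness-accuracy frontier."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# Hypothetical candidate models scored as (error, fairness gap):
models = [(0.10, 0.50), (0.20, 0.30), (0.30, 0.10), (0.25, 0.40)]
frontier = pareto_frontier(models)  # (0.25, 0.40) is dominated by (0.20, 0.30)
```

With finite samples these scores are themselves estimates, which is exactly the gap between the empirical and population frontiers that the talk quantifies.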

Workshop Talk | Feb. 26, 2026

Personalized Collaborative Learning with Affinity-Based Variance Reduction

Multi-agent learning faces a fundamental tension: leveraging distributed collaboration without sacrificing the personalization needed for diverse agents. This tension intensifies when aiming for full personalization while adapting to unknown heterogeneity levels—gaining collaborative speedup when agents are similar, without performance degradation when they are different. Embracing the challenge, we propose personalized collaborative learning (PCL), a novel framework for heterogeneous agents to collaboratively learn personalized solutions with seamless adaptivity. Through carefully designed bias correction and importance correction mechanisms, our method AffPCL robustly handles both environment and objective heterogeneity. We prove that AffPCL reduces sample complexity over independent learning by a factor of $\max\{n^{-1}, \delta\}$, where $n$ is the number of agents and $\delta\in[0,1]$ measures their heterogeneity. This affinity-based acceleration automatically interpolates between the linear speedup of federated learning in homogeneous settings and the baseline of independent learning, without requiring prior knowledge of the system. Our analysis further reveals that an agent may obtain linear speedup even by collaborating with arbitrarily dissimilar agents, unveiling new insights into personalization and collaboration in the high heterogeneity regime.
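The claimed reduction factor $\max\{n^{-1}, \delta\}$ is easy to sanity-check numerically; the function below (the name and assertions are my own, not the paper's) simply evaluates the two regimes the abstract describes.

```python
def speedup_factor(n, delta):
    """Sample-complexity reduction factor max{1/n, delta} from the abstract:
    delta = 0 (homogeneous agents) gives the 1/n linear speedup of federated
    learning; delta = 1 (fully heterogeneous) recovers independent learning."""
    assert n >= 1 and 0.0 <= delta <= 1.0
    return max(1.0 / n, delta)

# With n = 10 agents: homogeneous -> 0.1 (10x fewer samples); delta = 0.5
# -> 0.5; fully heterogeneous -> 1.0 (no gain over independent learning).
factors = [speedup_factor(10, d) for d in (0.0, 0.5, 1.0)]
```

Note the kink at delta = 1/n: below that heterogeneity level, adding agents still helps linearly; above it, heterogeneity caps the achievable speedup.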

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.