

Results 221–230 of 23,737

Video | Feb. 25, 2026
Garbage In, Medicine Out: Building and Evaluating AI Systems for Clinical Care

Video | Feb. 25, 2026
Personalized Collaborative Learning with Affinity-Based Variance Reduction

Video | Feb. 25, 2026
Learning from multiple modalities, Predicting on unseen tasks

Video | Feb. 25, 2026
A Complex Picture of Multi-task Learning
Workshop Talk | Feb. 25, 2026
FlexOlmo: Open Language Models for Flexible Data Use

Large language models are often limited by data, especially when valuable datasets are distributed across institutions or cannot be shared. We introduce FlexOlmo, a new class of Mixture-of-Experts (MoE) models designed for flexible, modular data use. In FlexOlmo, expert modules are trained independently on separate datasets and later merged seamlessly into a single model. This enables distributed training without data sharing, supports the use of closed datasets, and allows data to be opt-in or opt-out at inference time. We scale FlexOlmo to 37B parameters (20B active) and evaluate on 31 diverse downstream tasks. FlexOlmo significantly outperforms models trained on public data only and approaches the performance of an upper-bound model trained on all datasets. By enabling modular integration of closed data while respecting data ownership and control, FlexOlmo offers a practical path toward collaborative, continuous model development.
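The core idea of the abstract — experts trained independently on separate datasets, merged into one Mixture-of-Experts model whose router can opt experts in or out at inference — can be illustrated with a toy sketch. This is not the FlexOlmo implementation; the `Expert` class, routing scheme, and all shapes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model dimension (illustrative)

class Expert:
    """A tiny feed-forward module standing in for an independently trained expert."""
    def __init__(self, rng):
        self.W = rng.standard_normal((d, d)) / np.sqrt(d)

    def __call__(self, x):
        return np.maximum(x @ self.W, 0.0)  # ReLU MLP stand-in

def moe_forward(x, experts, router_W, active):
    """Route input x over the *active* experts only (opt-in/opt-out at inference)."""
    logits = x @ router_W                       # one routing logit per expert
    logits = np.where(active, logits, -np.inf)  # opted-out experts get zero weight
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * e(x) for wi, e in zip(w, experts))

# Merge three separately trained experts into a single model.
experts = [Expert(rng) for _ in range(3)]
router_W = rng.standard_normal((d, 3)) / np.sqrt(d)
x = rng.standard_normal(d)

y_all = moe_forward(x, experts, router_W, active=np.array([True, True, True]))
y_opt_out = moe_forward(x, experts, router_W, active=np.array([True, False, True]))
```

Because each expert's weights are trained in isolation, the opt-out case needs no retraining: masking an expert's routing logit removes its dataset's contribution from the forward pass.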

Workshop Talk | Feb. 25, 2026
Federated Learning in the Generative AI Era

Large language models (LLMs) have not yet effectively leveraged the vast amounts of data available on edge devices. Federated learning (FL) offers a promising way to collaboratively fine-tune LLMs without transferring private edge data to the cloud. To work within the computation and communication constraints of edge devices, recent research on federated fine-tuning of LLMs uses low-rank adaptation (LoRA) and similar parameter-efficient methods. However, LoRA-based methods suffer from accuracy loss in FL settings, primarily due to data and computational heterogeneity across clients. In this talk, I will first discuss an adaptive multi-head LoRA method that balances parameter efficiency and model expressivity by reparameterizing weight updates as the sum of multiple LoRA heads. In the second part of my talk, I will discuss other ways to leverage edge data, such as one-shot merging of locally trained models or training query routers personalized to each client's edge data.
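The reparameterization mentioned in the abstract — writing the weight update as a sum of several low-rank heads rather than a single product — can be sketched as follows. All dimensions, ranks, and the initialization scale are illustrative assumptions, not the speaker's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, n_heads = 16, 16, 2, 4

W0 = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

# Standard LoRA: delta_W = B @ A with one rank-r pair.
# Multi-head variant: delta_W = sum_k B_k @ A_k over several low-rank heads.
heads = [(rng.standard_normal((d_out, r)) * 0.01,
          rng.standard_normal((r, d_in))) for _ in range(n_heads)]

delta_W = sum(B @ A for B, A in heads)  # rank at most n_heads * r
W = W0 + delta_W                        # effective adapted weight

# Same trainable-parameter budget as a single rank-(n_heads * r) LoRA, but
# each head is a separate unit that can be trained or aggregated on its own,
# which is the property exploited under client heterogeneity in FL.
lora_params = n_heads * r * (d_out + d_in)
```

The design point is that total rank (expressivity) and per-head granularity (what each client trains) become independent knobs.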

Workshop Talk | Feb. 25, 2026
Exploiting Similarity in Federated Learning

We provide a brief introduction to local update methods developed for federated optimization and discuss their worst-case complexity. Surprisingly, these methods often perform much better in practice than predicted by theoretical analyses using classical assumptions. Recent years have revealed that their performance can be better described using refined notions that capture the similarity among client objectives. In this talk, we introduce a generic framework based on a distributed proximal point algorithm, which consolidates many of our insights and allows for the adaptation of arbitrary centralized optimization algorithms to the convex federated setting, including accelerated variants. Our theoretical analysis shows that the derived methods enjoy faster convergence when the degree of similarity among clients is high. 
Based on joint work with Xiaowen Jiang and Anton Rodomanov.
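The distributed proximal point iteration underlying such frameworks can be shown in miniature: each client solves a proximal subproblem around the current server model, and the server averages the solutions. The quadratic client objectives below (chosen so the prox step has a closed form) are an illustrative assumption, not the speakers' general setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, eta = 5, 4, 10.0

# Client i minimizes f_i(x) = 0.5 * ||x - b_i||^2. Its proximal step around
# the server iterate x_t,
#   argmin_x f_i(x) + ||x - x_t||^2 / (2 * eta),
# has the closed form (eta * b_i + x_t) / (eta + 1).
b = rng.standard_normal((n_clients, d))
b_mean = b.mean(axis=0)  # global minimizer of the averaged objective

x = np.zeros(d)
for _ in range(50):
    local = (eta * b + x) / (eta + 1.0)  # each client's prox solution
    x = local.mean(axis=0)               # server aggregation

# With identical curvature across clients (maximal similarity), the error
# contracts by a factor 1 / (1 + eta) per round, so x approaches b_mean.
```

In this maximally similar instance the contraction factor per round is 1/(1 + eta); heterogeneous curvatures would slow this down, which is the sense in which convergence speed tracks client similarity.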

Cliff Stein (Columbia University)
Kirk Pruhs (University of Pittsburgh)
Pierre-Emmanuel Gaillardon (University of Utah)

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.