Results 511–520 of 23,763

Workshop Talk
|
Jan. 20, 2026

Training Dynamics of Softmax Self-Attention: Global Convergence and Neural Scaling Laws

We study the training dynamics of gradient descent in a softmax self-attention layer trained to perform linear regression and obtain the first mathematically rigorous derivation of a neural scaling law for softmax self-attention. Our analysis proceeds in two steps. First, we show that in the infinite-data limit the regression problem solved by the self-attention layer is equivalent to a matrix factorization problem. Second, we exploit this connection to design a tuned variant of gradient descent which efficiently optimizes the original finite-data regression objective. Our new optimization algorithm features several innovations over standard gradient descent, including a preconditioner and regularizer which help avoid spurious stationary points, and a specific data-dependent initialization point which lies near the manifold of global minima with high probability. We show that when our algorithm is run on the empirical loss, it identifies parameters which are globally optimal for the population loss, up to a small additive error which quickly tends to zero as more data and compute are used to train the model. Remarkably, we show that self-attention is able to match the minimax-optimal statistical rate achieved by the ordinary least-squares estimator, despite the nonconvexity of the loss in the model parameters. Additionally, our new algorithm attains a fast geometric convergence rate instead of the slow power law rate which is empirically observed using standard gradient descent with random initialization.
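The setup the abstract describes — a softmax self-attention layer performing linear regression over in-context examples — can be made concrete with a minimal sketch. All names here, and the single merged weight matrix `W`, are illustrative assumptions for exposition, not the authors' construction:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_regression(X, y, x_query, W):
    """One softmax attention head reading in-context pairs (x_i, y_i)
    and predicting the label of x_query. The labels y_i play the role
    of the values; W is a merged query-key weight matrix."""
    scores = X @ W @ x_query        # attention logits, one per context example
    weights = softmax(scores)       # softmax over the context
    return weights @ y              # convex combination of observed labels
```

Because the prediction is a softmax-weighted average, it always lies between the smallest and largest context label; training shapes `W` so that the weighting approximates the regression solution.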

Workshop Talk
|
Jan. 20, 2026

Data Uniformity Improves Training Efficiency and More, with a Convergence Framework Beyond the NTK Regime

Data selection plays a crucial role in data-driven decision-making, including in large language models (LLMs), and is typically task-dependent. Properties such as data quality and diversity have been extensively studied and are known to enhance model performance. However, it remains unclear whether there exist other quantitative and general principles of data selection that can consistently improve performance, especially for complicated tasks. In this paper, we demonstrate that selecting more uniformly distributed data can improve training efficiency while enhancing performance. Specifically, we establish that more uniform (less biased) distribution leads to a larger minimum pairwise distance between data points, denoted by $h_{\min}$, and prove that a smaller $h_{\min}$ can slow down the training dynamics of gradient descent (GD). Moreover, we theoretically show that the approximation error of neural networks decreases as $h_{\min}$ increases. Our analysis introduces a convergence framework for GD beyond the Neural Tangent Kernel (NTK) regime, applicable to a broad class of architectures, including transformers, without requiring Lipschitz smoothness. This framework further provides theoretical justification for the use of residual connection and function composition in deep neural architectures. In the end, we conduct comprehensive experiments for supervised fine-tuning across various settings, including different optimization strategies, model sizes, and training datasets. The results consistently demonstrate that selecting data by maximizing pairwise distance significantly accelerates training and achieves comparable or better performance in LLMs across diverse datasets.
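To illustrate the quantity $h_{\min}$ and selection by maximizing pairwise distance, here is a small sketch. Greedy farthest-point sampling is one standard heuristic for the max-min objective; it is an assumption here, not necessarily the paper's procedure:

```python
import numpy as np

def min_pairwise_distance(X):
    """h_min: the smallest Euclidean distance between any two distinct rows."""
    diffs = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)     # ignore zero self-distances
    return d.min()

def greedy_maxmin_select(X, k, seed=0):
    """Farthest-point sampling: greedily pick k rows of X, each time
    choosing the point farthest from everything already selected."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[chosen[0]], axis=1)  # distance to selected set
    for _ in range(k - 1):
        nxt = int(d.argmax())
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)
```

Note that any subset of a dataset has $h_{\min}$ at least as large as the full set's, and the greedy rule tries to push it higher still, which is the property the paper connects to faster gradient-descent dynamics.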

Workshop Talk
|
Jan. 20, 2026

Talk By

Abstract not available.

Workshop Talk
|
Jan. 20, 2026

Learning from the Right Teacher in Knowledge Distillation

Knowledge distillation, where a "student" model learns from a "teacher" model, is a powerful technique for maximizing performance under limited resources. This talk presents our recent work on understanding the benefits of distilling from teacher models and how to select an appropriate one. In the first part, we show that learning from intermediate teacher checkpoints—a procedure we term “progressive distillation”—provably improves the student’s sample complexity, in contrast to prior work that has largely focused on generalization. We illustrate this through a case study on sparse parity, complemented by empirical results on PCFG and natural language tasks. In the second part, we present a score, "GRACE", for selecting an effective teacher from a pool of candidates. Experiments on GSM8K and MATH demonstrate that GRACE reliably identifies a highly compatible teacher for a given student, providing actionable insights for fine-grained distillation design choices. This talk is based on joint work with Abhishek Panigrahi, Sadhika Malladi, Andrej Risteski, Sham Kakade, and Surbhi Goel.
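The distillation objective underlying both parts of the talk can be sketched as a generic temperature-scaled KL loss in the style of Hinton et al.; the progressive-distillation loop over intermediate checkpoints shown in the comment is an illustrative assumption about the setup, not the authors' exact recipe:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-softened KL(teacher || student), scaled by T^2 so that
    gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float((T * T) * (p * (np.log(p) - np.log(q))).sum(axis=-1).mean())

# Progressive distillation (sketch): instead of distilling only from the
# final teacher, train the student against a sequence of intermediate
# teacher checkpoints, e.g.:
#   for ckpt in teacher_checkpoints:
#       minimize distillation_loss(student(x), ckpt(x)) over a data pass
```

The loss is zero exactly when the two logit vectors induce the same softened distribution, and strictly positive otherwise.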

Workshop Talk
|
Jan. 20, 2026

Talk By

Abstract not available.

Workshop Talk
|
Jan. 20, 2026

Transformers Can Learn Compositional Functions

Abstract not available.

Workshop
|
January 20, 2026, 9:00 am - January 23, 2026, 5:00 pm
Modern Paradigms in Generalization Reunion

This reunion workshop is for long-term participants in the program "Modern Paradigms in Generalization," held in the Fall 2024 semester. It will provide an opportunity to meet old and new friends. Moreover, we hope that it will give everyone a chance to...

Alignment Problems in AI Governance

Rui-Jie Yew and Greg Demirchyan (Law and Society fellows, Simons Institute)

Thursday, February 5, 2026

3:30 p.m. – 4:30 p.m. PT

Calvin Lab Auditorium & livestream


Registration is required to attend in person. Please submit a registration for each attendee.
Questions? Contact Simons Events at simonsevents@berkeley.edu


Event
|
Feb. 5, 2026
Alignment Problems in AI Governance

In an event comprising short talks and dialogue, Simons Institute Law and Society Fellows Rui-Jie Yew and Greg Demirchyan will explore two challenges of alignment in AI governance. First, we currently lack a thorough understanding of AI models, making it...

Visitor Guide

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.
