We study the training dynamics of gradient descent in a softmax self-attention layer trained to perform linear regression and obtain the first mathematically rigorous derivation of a neural scaling law for softmax self-attention. Our analysis proceeds in two steps. First, we show that in the infinite-data limit the regression problem solved by the self-attention layer is equivalent to a matrix factorization problem. Second, we exploit this connection to design a tuned variant of gradient descent which efficiently optimizes the original finite-data regression objective. Our new optimization algorithm features several innovations over standard gradient descent, including a preconditioner and regularizer which help avoid spurious stationary points, and a specific data-dependent initialization point which lies near the manifold of global minima with high probability. We show that when our algorithm is run on the empirical loss, it identifies parameters which are globally optimal for the population loss, up to a small additive error which quickly tends to zero as more data and compute are used to train the model. Remarkably, we show that self-attention is able to match the minimax-optimal statistical rate achieved by the ordinary least-squares estimator, despite the nonconvexity of the loss in the model parameters. Additionally, our new algorithm attains a fast geometric convergence rate instead of the slow power law rate which is empirically observed using standard gradient descent with random initialization.
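The abstract does not spell out the tuned optimizer, but its three named ingredients (a preconditioner, a regularizer, and a data-dependent initialization near the global minima) can be illustrated on a toy matrix factorization problem, in the spirit of the equivalence described above. Everything below — the objective, the spectral warm start, the Gram-matrix preconditioner, and the step size — is an assumption for illustration, not the paper's actual algorithm.

```python
import numpy as np

# Toy objective (illustrative, not the paper's): a regularized rank-r
# matrix factorization min_{U,V} ||U V^T - M||_F^2 + lam(||U||_F^2 + ||V||_F^2).
rng = np.random.default_rng(0)
d, r = 8, 2
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))  # rank-r target

# Data-dependent initialization: truncated-SVD warm start near the
# manifold of global minima, perturbed by small noise.
U0, s, Vt = np.linalg.svd(M)
U = U0[:, :r] * np.sqrt(s[:r]) + 0.1 * rng.standard_normal((d, r))
V = Vt[:r, :].T * np.sqrt(s[:r]) + 0.1 * rng.standard_normal((d, r))

lam, eta, eps = 1e-3, 0.25, 1e-6
for _ in range(300):
    R = U @ V.T - M
    # Gradients of the regularized objective.
    gU = 2 * R @ V + 2 * lam * U
    gV = 2 * R.T @ U + 2 * lam * V
    # Simple preconditioners: inverse Gram matrix of the opposite factor,
    # which rescales the step uniformly across directions.
    PU = np.linalg.inv(V.T @ V + eps * np.eye(r))
    PV = np.linalg.inv(U.T @ U + eps * np.eye(r))
    U = U - eta * gU @ PU
    V = V - eta * gV @ PV

err = float(np.linalg.norm(U @ V.T - M))
print(round(err, 4))  # small reconstruction error after geometric convergence
```

On this toy problem, the preconditioned iteration contracts the residual geometrically from the warm start, in contrast to the slow power-law decay one typically sees with plain gradient descent from random initialization.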
Data selection plays a crucial role in data-driven decision-making, including in large language models (LLMs), and is typically task-dependent. Properties such as data quality and diversity have been extensively studied and are known to enhance model performance. However, it remains unclear whether there exist other quantitative and general principles of data selection that can consistently improve performance, especially for complicated tasks. In this paper, we demonstrate that selecting more uniformly distributed data can improve training efficiency while enhancing performance. Specifically, we establish that a more uniform (less biased) distribution leads to a larger minimum pairwise distance between data points, denoted by $h_{\min}$, and prove that a smaller $h_{\min}$ can slow down the training dynamics of gradient descent (GD). Moreover, we theoretically show that the approximation error of neural networks decreases as $h_{\min}$ increases. Our analysis introduces a convergence framework for GD beyond the Neural Tangent Kernel (NTK) regime, applicable to a broad class of architectures, including transformers, without requiring Lipschitz smoothness. This framework further provides theoretical justification for the use of residual connections and function composition in deep neural architectures. Finally, we conduct comprehensive experiments on supervised fine-tuning across various settings, including different optimization strategies, model sizes, and training datasets. The results consistently demonstrate that selecting data by maximizing pairwise distance significantly accelerates training and achieves comparable or better performance in LLMs across diverse datasets.
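The selection rule above — pick data maximizing pairwise distance — admits a simple greedy sketch. The farthest-point heuristic and the function names below are illustrative assumptions, not the paper's exact procedure; greedy farthest-point selection is a standard way to obtain a subset whose minimum pairwise distance $h_{\min}$ is provably within a factor of two of the best achievable.

```python
import numpy as np

def farthest_point_subset(X, k, seed=0):
    """Greedily select k rows of X, each time taking the point farthest
    from the points already chosen (a standard 2-approximation for
    maximizing the subset's minimum pairwise distance h_min)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    chosen = [int(rng.integers(n))]
    # dist[i] = distance from point i to its nearest already-chosen point
    dist = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

def min_pairwise_distance(X):
    """h_min: smallest distance between any two distinct rows of X."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    return float(D.min())

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 16))
sel = farthest_point_subset(X, 50)
h_greedy = min_pairwise_distance(X[sel])
h_random = min_pairwise_distance(X[rng.choice(500, 50, replace=False)])
print(round(h_greedy, 3), round(h_random, 3))  # greedy h_min is typically larger
```

A larger $h_{\min}$ for the greedy subset is exactly the quantity the analysis links to faster GD training dynamics.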
Knowledge distillation, where a "student" model learns from a "teacher" model, is a powerful technique for maximizing performance under limited resources. This talk presents our recent work on understanding the benefits of distilling from teacher models and how to select an appropriate one. In the first part, we show that learning from intermediate teacher checkpoints, a procedure we term "progressive distillation," provably improves the student's sample complexity, in contrast to prior work that has largely focused on generalization. We illustrate this through a case study on sparse parity, complemented by empirical results on PCFG and natural language tasks. In the second part, we present a score, "GRACE", for selecting an effective teacher from a pool of candidates. Experiments on GSM8K and MATH demonstrate that GRACE reliably identifies a highly compatible teacher for a given student, providing actionable insights for fine-grained distillation design choices. This talk is based on joint work with Abhishek Panigrahi, Sadhika Malladi, Andrej Risteski, Sham Kakade, and Surbhi Goel.
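The mechanics of "progressive distillation" — a student learning from a sequence of intermediate teacher checkpoints rather than only the final one — can be sketched on a toy task. The logistic-regression setup, the soft-label loss, and all hyperparameters below are assumptions for illustration; the talk's actual case studies are sparse parity, PCFGs, and natural language.

```python
import numpy as np

# Toy setting (illustrative assumption, not the talk's setup): teacher and
# student are logistic regressors on linearly separable data.
rng = np.random.default_rng(0)
n, d = 400, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, targets, steps, lr=0.5):
    """Gradient descent on cross-entropy against (possibly soft) targets."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - targets) / n
    return w

# Teacher checkpoints: snapshots of the teacher at increasing training times.
w_teacher = np.zeros(d)
checkpoints = []
for _ in range(3):
    w_teacher = train(w_teacher, y, steps=100)
    checkpoints.append(w_teacher.copy())

# Progressive distillation: the student matches each checkpoint's soft
# labels in order, ending with the strongest teacher.
w_student = np.zeros(d)
for w_ckpt in checkpoints:
    soft = sigmoid(X @ w_ckpt)
    w_student = train(w_student, soft, steps=100)

acc = float(np.mean((sigmoid(X @ w_student) > 0.5) == (y > 0.5)))
print(acc)
```

The intermediate checkpoints give the student progressively sharper soft labels; the talk's result is that this staging provably helps sample complexity, which this sketch does not attempt to demonstrate.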
This reunion workshop is for long-term participants in the program "Modern Paradigms in Generalization," held in the fall 2024 semester. It will provide an opportunity to meet old and new friends. Moreover, we hope that it will give everyone a chance to...
In an event comprising short talks and dialogue, Simons Institute Law and Society Fellows Rui-Jie Yew and Greg Demirchyan will explore two challenges of alignment in AI governance. First, we currently lack a thorough understanding of AI models, making it...