News

News archive

We’re delighted to share that Miller fellow and Simons Institute Quantum Pod postdoc Ewin Tang has been awarded the 2025 Maryam Mirzakhani New Frontiers Prize for “developing classical analogs of quantum algorithms for machine learning and linear algebra, and for advances in quantum machine learning on quantum data.”

This month, we held a joint workshop with SLMath on AI for Mathematics and Theoretical Computer Science. It was unlike any other Simons Institute workshop I have been to. Over half the participants were mathematicians. But what really set it apart was its afternoons of hands-on tinkering. After lunch on the first three days, participants received a worksheet from the organizers. We opened up our laptops in the Calvin Lab auditorium and did the exercises side by side, with a fleet of TAs among us.

Greetings from the Simons Institute, where we are in the final week of a yearlong research program on Large Language Models and Transformers. 

On April 10, Simons Institute Science Communicator in Residence Anil Ananthaswamy sat down with Sasha Rush, an associate professor at Cornell Tech working on natural language processing and machine learning, with a focus on deep learning for text generation, language modeling, and structured prediction. This episode of Polylogues explores a significant shift over the last year in how large language models are trained and used.

The leading AI companies are increasingly focused on building generalist AI agents — systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. In his Richard M. Karp Distinguished Lecture this month, Yoshua Bengio (IVADO / Mila / Université de Montréal) discussed how these risks arise from current AI training methods.

In March, the Simons Institute hosted a Workshop on Quantum Memories. This specialized workshop explored recent progress around robust quantum information storage in physical systems. We’re delighted to share one of our favorite talks from the workshop: “A Local Automaton for the 2D Toric Code,” presented by Shankar Balasubramanian (MIT).

In this month’s newsletter, we’re highlighting a 2015 talk by Chris Umans on some of the then-state-of-the-art approaches to bounding the matrix multiplication exponent, an evergreen fundamental topic that will be the focus of one of our upcoming program workshops in October 2025.

Greetings from Berkeley! We are gearing up for a busy wrap-up of the spring semester, with five back-to-back workshop weeks at the Simons Institute. And after a brief breather during which we will execute a planned upgrade of our auditorium’s A/V system, we will resume in mid-May for a bustling summer featuring a Cryptography program and a Quantum Computing summer cluster.

Irit Dinur's journey through mathematics and computer science led her to become the first woman professor in the Institute for Advanced Study's School of Mathematics.

In this early February talk, Sasha Rush (Cornell) delves into the transformative impact of DeepSeek on the landscape of large language models (LLMs).