News

In this deliberately provocative two-part talk from the recent workshop on Theoretical Aspects of Trustworthy AI, Somesh Jha (University of Wisconsin) makes a case for applying a security and cryptography mindset to evaluating the trustworthiness of machine learning systems, particularly in adversarial and privacy-sensitive contexts.

AI for mathematics (AI4Math) is intellectually intriguing and crucial for AI-driven system design and verification. Formal mathematical reasoning is grounded in formal systems such as Lean, which can verify the correctness of reasoning and provide automatic feedback. This talk by Kaiyu Yang (Meta), from the joint Simons Institute and SLMath workshop on AI for Mathematics and Theoretical Computer Science, introduces the basics of formal mathematical reasoning, focusing on two central tasks: theorem proving (generating formal proofs for given theorem statements) and autoformalization (translating informal mathematics into formal statements and proofs).
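To make these two tasks concrete, here is a small illustrative sketch in Lean 4 (ours, not from the talk). The theorem names are invented for the example; the only library fact used is Nat.add_comm from Lean's core library.

    -- Autoformalization: the informal claim "addition of natural numbers
    -- is commutative" is translated into a precise formal statement.
    theorem addition_commutes (a b : Nat) : a + b = b + a := by
      -- Theorem proving: supply a proof that Lean's kernel checks; here
      -- we reuse the core library lemma Nat.add_comm.
      exact Nat.add_comm a b

    -- A proof can also be constructed from scratch, e.g. by induction:
    theorem zero_add_from_scratch (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl                        -- 0 + 0 reduces to 0 by definition
      | succ k ih => rw [Nat.add_succ, ih] -- rewrite using the inductive hypothesis

Lean accepts a file only if every proof checks, which is the kind of automatic feedback the talk refers to.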

Decision problems about infinite groups are typically undecidable, but many are semidecidable given an oracle for the word problem. One such problem is whether a group is a counterexample to the Kaplansky unit conjecture for group rings. In this talk from the workshop on AI for Mathematics and Theoretical Computer Science, Giles Gardam (University of Bonn) presents the mathematical context and content of the unit conjecture, and explains how viewing the problem as an instance of the Boolean satisfiability problem (SAT) and applying SAT solvers shows that it is solvable not just in theory but also in practice.
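The general workflow (encode a finite search problem as Boolean clauses, hand them to an off-the-shelf solver, and decode any satisfying assignment) can be sketched as below. This is a toy constraint system rather than Gardam's actual encoding of the unit conjecture, and it assumes the open-source PySAT package with its bundled Glucose3 solver.

    # Toy sketch of the SAT workflow: encode constraints as CNF clauses and
    # let a solver search for a satisfying assignment. Requires PySAT
    # (pip install python-sat); this is not the unit-conjecture encoding.
    from pysat.solvers import Glucose3

    solver = Glucose3()
    # Variables are positive integers; a negative literal means "not".
    solver.add_clause([1, 2, 3])   # x1 OR x2 OR x3   (at least one holds)
    solver.add_clause([-1, -2])    # NOT x1 OR NOT x2 (x1 and x2 exclude each other)
    solver.add_clause([-3, 1])     # x3 implies x1

    if solver.solve():
        # A model is a list of literals, e.g. [1, -2, -3]: x1 true, x2 and x3 false.
        print("satisfiable:", solver.get_model())
    else:
        print("unsatisfiable")
    solver.delete()

The encoding used for the unit conjecture is of course far larger, but the pipeline of translating constraints into clauses and handing the search to a modern solver is the same in spirit.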

Greetings from Berkeley, where we’ve recently welcomed a merry band of cryptographers for what promises to be an outstanding summer program on cryptography. Building on the success of the Simons Institute’s 2015 summer crypto program, legendary for its influence on the field and on participants’ careers, Cryptography 10 Years Later: Obfuscation, Proof Systems, and Secure Computation aims to be even bigger and better.

We’re delighted to share that Miller fellow and Simons Institute Quantum Pod postdoc Ewin Tang has been awarded the 2025 Maryam Mirzakhani New Frontiers Prize for “developing classical analogs of quantum algorithms for machine learning and linear algebra, and for advances in quantum machine learning on quantum data.”

This month, we held a joint workshop with SLMath on AI for Mathematics and Theoretical Computer Science. It was unlike any other Simons Institute workshop I have been to. Over half the participants were mathematicians. But what really set it apart was its afternoons of hands-on tinkering. After lunch on the first three days, participants received a worksheet from the organizers. We opened up our laptops in the Calvin Lab auditorium and did the exercises side by side, with a fleet of TAs among us.

Greetings from the Simons Institute, where we are in the final week of a yearlong research program on Large Language Models and Transformers. 

On April 10, Simons Institute Science Communicator in Residence Anil Ananthaswamy sat down with Sasha Rush, an associate professor at Cornell Tech working on natural language processing and machine learning, with a focus on deep learning for text generation, language modeling, and structured prediction. This episode of Polylogues explores a significant shift over the past year in how large language models are trained and used.

The leading AI companies are increasingly focused on building generalist AI agents — systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. In his Richard M. Karp Distinguished Lecture this month, Yoshua Bengio (IVADO / Mila / Université de Montréal) discussed how these risks arise from current AI training methods.

In March, the Simons Institute hosted a Workshop on Quantum Memories. This specialized workshop explored recent progress on robust quantum information storage in physical systems. We’re delighted to share one of our favorite talks from the workshop: “A Local Automaton for the 2D Toric Code,” presented by Shankar Balasubramanian (MIT).