News
[Image: a close-up of a pair of eyes and the bridge of a nose, composed of rows of magenta ones and zeros of various sizes.]

Recall November 6, 2024 — the day after the U.S. election. I was driving back to my home in Washington, DC, from Ohio with colleagues. I was...

[Photo: Venkat Guruswami]

Greetings from Berkeley, where after a very busy summer of crypto and quantum fun, we’ve just kicked off our Fall 2025 programs on Complexity and...

News archive

Irit Dinur's journey through mathematics and computer science led her to become the first woman professor at the Institute for Advanced Study School of Mathematics.

In this early February talk, Sasha Rush (Cornell) delves into the transformative impact of DeepSeek on the landscape of large language models (LLMs).

Expanders are sparse and yet highly connected graphs. They appear in many areas of theory: pseudorandomness, error-correcting codes, graph algorithms, Markov chain mixing, and more. I want to tell you about two results, proven in the last few months, that constitute a phase transition in our understanding of expanders.
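To make "sparse yet highly connected" concrete, here is a minimal sketch, not drawn from the results mentioned above, that checks the spectral gap of a random 3-regular graph. It assumes the networkx and numpy packages; the degree and graph size are arbitrary illustrative choices.

    # Illustrative only: a random d-regular graph is, with high probability,
    # a near-optimal expander. Its nontrivial adjacency eigenvalues stay close
    # to the Ramanujan bound 2*sqrt(d-1), well below the degree d.
    import networkx as nx
    import numpy as np

    d, n = 3, 1000                             # arbitrary illustrative parameters
    G = nx.random_regular_graph(d, n, seed=0)  # sparse: only n*d/2 = 1500 edges

    eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    lam = max(abs(eigs[-2]), abs(eigs[0]))     # largest nontrivial eigenvalue (in absolute value)

    print(f"degree d = {d}")
    print(f"second eigenvalue ~ {lam:.3f}")
    print(f"Ramanujan bound 2*sqrt(d-1) ~ {2 * np.sqrt(d - 1):.3f}")
    # A second eigenvalue far below d is a standard certificate of expansion:
    # random walks on G mix rapidly even though G is very sparse.

A spectral gap like this is what powers the applications named above: it yields quantitative edge-expansion and rapid-mixing guarantees via the standard Cheeger and spectral mixing inequalities.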

In this inaugural lecture of the Berkeley Neuro-AI Resilience Center, Daniel Jackson (MIT EECS), renowned author of Portraits of Resilience, shared his exploration of how resilience shapes human experience. 

In this episode of our Polylogues web series, Science Communicator in Residence Anil Ananthaswamy talks with machine learning researcher Andrew Gordon Wilson (NYU). The two discuss Wilson’s work connecting transformers, Kolmogorov complexity, the no free lunch theorem, and universal learners.

Greetings from Berkeley, where we are nearly halfway through the second semester of our yearlong research program on large language models. In early February, the program hosted a highly interdisciplinary workshop with talks covering a host of topics, including one by program organizer Sasha Rush on the then-just-released DeepSeek, which we’re featuring in our SimonsTV corner this month. In broader theory news, the 2025 STOC conference announced its list of accepted papers in early February; based on both the record number of accepted papers (more than 200) and the significant leaps made across the spectrum of theoretical computer science, I’m delighted and proud to report that the field is alive and kicking!

Ten years ago, researchers proved that adding full memory can theoretically aid computation. They’re just now beginning to understand the implications.

In this episode of our Polylogues web series, Spring 2024 Science Communicator in Residence Ben Brubaker sits down with cryptographer and machine learning researcher Boaz Barak (Harvard and OpenAI). The two discuss Boaz’s path to theoretical computer science, the relationship between theory and practice in cryptography and AI, and specific topics in AI, including generality, alignment, worst- vs. average-case performance, and watermarking.

Greetings from Berkeley, where the spring semester is just getting underway. This term, we are hosting the second half of the Special Year on Large Language Models and Transformers. This program explores LLMs from a broad perspective, asking questions about their power, scalability, trustworthiness, safety, implications for research in the sciences, and the legal frameworks in which LLMs operate, as well as their impacts on society at large.

As part of the Simons Institute’s recent workshop on Unknown Futures of Generalization, we hosted a debate probing the unknown generalization limits of current LLMs. The debate was between Sébastien Bubeck (OpenAI), a coauthor of “Sparks of Artificial General Intelligence,” and Tom McCoy (Yale), a coauthor of “Embers of Autoregression,” and it addressed the motion “Current LLM scaling methodology is sufficient to generate new proof techniques needed to resolve major open mathematical conjectures such as P ≠ NP.”