News
[Image: Ewin Tang]

We’re delighted to share that Miller Fellow and Simons Institute Quantum Pod postdoc Ewin Tang has been awarded the 2025 Maryam Mirzakhani New Frontiers Prize.

[Image: Venkat]

Greetings from the Simons Institute, where we are in the final week of a yearlong research program on Large Language Models and Transformers. 


This month, we held a joint workshop with SLMath on AI for Mathematics and Theoretical Computer Science. It was unlike any other Simons Institute...

News archive

In this presentation from the Large Language Models and Transformers, Part 1 Boot Camp, Aditi Raghunathan (CMU) addresses the root causes of numerous safety concerns and wide-ranging attacks on current large language models.

In this talk from the Modern Paradigms in Generalization Boot Camp, John Duchi (Stanford) provides an overview of the history of robust optimization and its connections to modern machine learning through different notions of robustness.

Some results mark the end of a long quest, whereas others open up new worlds for exploration. Then there are works that change our perspective and make the familiar unfamiliar once more. I am delighted to tell you about some recent developments of all three kinds.

In this episode of our Polylogues web series, Summer 2023 science communicator in residence Lakshmi Chandrasekaran interviews former Simons Institute Scientific Advisory Board member and program organizer Irit Dinur. Their wide-ranging conversation touches on Irit’s career and research, the trajectory from basic science to practice, upcoming directions for the field, and gender distribution and climate in computer science.

As part of the recent workshop on Extroverted Sublinear Algorithms, Ronitt Rubinfeld (MIT) surveyed two directions in which sublinear-time algorithms are impacting the design and use of learning algorithms.

Can machines prove theorems? Can they have mathematical ideas? In this talk from our Theoretically Speaking public lecture series, Jordan Ellenberg (University of Wisconsin–Madison) spoke about his joint work with researchers from DeepMind, which used novel machine learning techniques to make progress on a problem in combinatorics, and charted some near-term ways that machine learning may affect mathematical practice.

This will be my final letter to you as director of the Simons Institute for the Theory of Computing, as my six-and-a-half-year term ends at the end of this month. These were years well lived. Both the Institute and I experienced challenge, growth, and ultimately a leap into the future. I am proud to have served.

On July 1, Sampath Kannan became the Simons Institute’s new associate director. A UC Berkeley alumnus (PhD 1989), Sampath is the Henry Salvatori Professor in the Department of Computer and Information Science at the University of Pennsylvania. We sat down with him to discuss his research interests, vision for his new role, and perspectives on how developments in AI are transforming the field of theoretical computer science.

For some time, I’ve argued that a common conception of AI is misguided. This is the idea that AI systems like large language and vision models are individual intelligent agents, analogous to human agents. Instead, I’ve argued that these models are “cultural technologies” like writing, print, pictures, libraries, internet search engines, and Wikipedia.

We are heartbroken by the loss of Luca Trevisan, who served as senior scientist at the Institute from 2014 to 2019.