
Greetings from Berkeley! We are gearing up for a busy wrap-up of the spring semester, with five back-to-back workshop weeks at the Simons Institute...

In this month’s newsletter, we’re highlighting a 2015 talk by Chris Umans on some of the then-state-of-the-art approaches to bound the matrix...

Ten years ago, researchers proved that a full memory can theoretically aid computation. They’re just now beginning to understand the implications...

The Simons Institute will be partnering with Canadian research consortium IVADO to enhance and expand the Spring 2025 research program, Special Year on Large Language Models and Transformers, Part 2.

Three researchers have figured out how to craft a proof that spreads out information while keeping it perfectly secret.

The search for the next director of the Simons Institute for the Theory of Computing opened on October 1, 2024. Applications are due November 11.

Greetings from Berkeley! I’m delighted to be writing my first update to you as interim director since Shafi stepped down in August after six-and-a-half years of visionary service.

In this presentation from the Large Language Models and Transformers, Part 1 Boot Camp, Aditi Raghunathan (CMU) addresses the root causes of numerous safety concerns and wide-ranging attacks on current large language models.

In this talk from the Modern Paradigms in Generalization Boot Camp, John Duchi (Stanford) provides an overview of some of the history behind robust optimization, including modern machine learning via connections with different types of robustness.

Some results mark the end of a long quest, whereas others open up new worlds for exploration. Then there are works that change our perspective and make the familiar unfamiliar once more. I am delighted to tell you about some recent developments of all three kinds.

In this episode of our Polylogues web series, Summer 2023 science communicator in residence Lakshmi Chandrasekaran interviews former Simons Institute Scientific Advisory Board member and program organizer Irit Dinur. Their wide-ranging conversation touches on Irit’s career and research, the trajectory from basic science to practice, upcoming directions for the field, and gender distribution and climate in computer science.

As part of the recent workshop on Extroverted Sublinear Algorithms, Ronitt Rubinfeld (MIT) surveyed two directions in which sublinear-time algorithms are impacting the design and use of learning algorithms.

Can machines prove theorems? Can they have mathematical ideas? In this talk from our Theoretically Speaking public lecture series, Jordan Ellenberg (University of Wisconsin–Madison) spoke about his joint work with researchers from DeepMind (which used novel techniques in machine learning to make progress on a problem in combinatorics) and charted some near-term ways that machine learning may affect mathematical practice.