News
[Image: a close-up of a pair of eyes and the nose bridge between them, composed of rows of magenta ones and zeros of various sizes.]

Recall November 6, 2024 — the day after the U.S. election. I was driving back to my home in Washington, DC, from Ohio with colleagues. I was...

[Image: CETI whale]

This July, the Simons Institute co-hosted, in collaboration with Project CETI (Cetacean Translation Initiative) and Oceankind, the fourth annual...


Let me start with a confession. For many years, I was afraid of quantum and crypto. Quantum scared me because I didn’t know how to think about tensor products, and crypto because I couldn’t keep track of the quantifiers involved in interactive protocols. Yet I can’t resist telling you about the compressed oracle method (and its generalization, the path-recording oracle), a beautiful linear algebraic technique that has led to fundamental discoveries in quantum cryptography and complexity over the past year and a half, apparently right outside my office in Calvin Lab.

Greetings from Berkeley, where after a very busy summer of crypto and quantum fun, we’ve just kicked off our Fall 2025 programs on Complexity and Linear Algebra, and on Algorithmic Foundations for Emerging Computing Technologies.

Sum-of-squares spectral amplification (SOSSA) is a new method for compiling efficient block encodings that exploits the low energy of the initial state and relies on sum-of-squares optimization. This talk by Caltech graduate student Robbie King describes the ideas behind the new technique, and in particular how sum-of-squares optimization connects to Hamiltonian simulation and phase estimation.

SP1 Hypercube is a new multilinear-based proof system for proving the correctness of programs written in a high-level programming language. In his recent talk at the Summer 2025 Cryptography program’s workshop on Proofs, Ron Rothblum (Succinct) gave an overview of how such real-world proof systems work, while focusing on a key novel component in Hypercube: the jagged polynomial commitment scheme.

In Spring 2025, the Simons Institute hosted a workshop on LLMs, Cognitive Science, Linguistics, and Neuroscience. In this episode of Polylogues, Spring 2025 Science Communicator in Residence Christoph Drösser sits down with one of the workshop’s organizers and presenters, psychology and neuroscience professor Steven Piantadosi (UC Berkeley).

During her talk at the Simons Institute’s workshop on The Future of Language Models and Transformers, Azalia Mirhoseini of Stanford University and Google DeepMind suggested that even small LLMs might “know” more than is obvious at first and can be made to answer questions correctly given enough compute. This theme — about LLMs and the knowledge they contain — played out in other talks in the same workshop, with speakers arguing that LLMs not only know, but also know that they know — an ability that can loosely be called metacognition.

Flip a coin — you get a 0 or 1. Flip 50 coins — you get 50 random bits. Flip 50 coins 50 times — you get … an error-correcting code. Or so said Claude Shannon, who came up with the concept in his seminal 1948 paper, “A Mathematical Theory of Communication.” An error-correcting code is a way of encoding data so that errors introduced in transmission can be detected and corrected. Before Shannon, scientists assumed that the problem of noise could never be overcome in an unreliable communication channel. Shannon showed that this assumption was wrong.
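Shannon’s random-coding construction is beyond a short snippet, but the simplest error-correcting code — a threefold repetition code — already shows the detect-and-correct idea in a few lines (a minimal Python sketch, not from the article):

```python
# Threefold repetition code: send each bit three times; the receiver
# takes a majority vote over each triple, so any single flipped bit
# per triple is corrected.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    # Majority vote over each group of three bits.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
codeword = encode(message)   # 12 bits on the wire
codeword[4] ^= 1             # the channel flips one bit
assert decode(codeword) == message
```

Repetition is wasteful — it triples the message length to fix one error per triple. Shannon’s insight was that far more efficient codes must exist, even when chosen essentially at random.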

Greetings from Berkeley, where we are in the final weeks of an exciting summer at the Simons Institute. Our Summer Cluster on Quantum Computing wound down a few weeks back after a period of intense activity. And our summer program on Cryptography has been continually abuzz.

In this deliberately provocative two-part talk from the recent workshop on Theoretical Aspects of Trustworthy AI, Somesh Jha (University of Wisconsin) makes a case for applying a security and cryptography mindset to evaluating the trustworthiness of machine learning systems, particularly in adversarial and privacy-sensitive contexts.

AI for mathematics (AI4Math) is intellectually intriguing and crucial for AI-driven system design and verification. Formal mathematical reasoning is grounded in formal systems such as Lean, which can verify the correctness of reasoning and provide automatic feedback. This talk by Kaiyu Yang (Meta) from the Simons Institute and SLMath joint workshop on AI for Math and TCS introduces the basics of formal mathematical reasoning, focusing on two central tasks: theorem proving (generating formal proofs given theorem statements) and autoformalization (translating from informal to formal).
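As a toy illustration of what “verify the correctness of reasoning” means in practice (not from the talk), here is a formal theorem statement and machine-checked proof in Lean 4 — Lean accepts the proof only if every step checks, and an incomplete or wrong proof is rejected:

```lean
-- A statement about natural numbers, proved by induction.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                      -- base case: 0 + 0 = 0 by definition
  | succ k ih => rw [Nat.add_succ, ih]  -- step: rewrite with the hypothesis
```

Theorem proving asks a model to produce proofs like this one given the statement; autoformalization asks it to produce the formal statement itself from informal mathematical prose.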