News
[Image: Venkat Guruswami]

Happy New Year from Berkeley, where the magnolias are already in bud and we have just welcomed the participants in our Spring 2026 research program on...

[Image: Logo for the Spring 2025 program "Special Year on Large Language Models and Transformers, Part 2": a semicircular half of a brain paired with a semicircular circuit board, forming a circle on a lavender background.]

Large language models (LLMs) gain their encyclopedic knowledge and conversational tact by learning from an entire internet’s worth of human-generated...

[Image: Venkat Guruswami]

Season’s greetings from Berkeley, where we have donned our light jackets and the campus squirrels are newly plump for winter. We just concluded a...

News archive

Greetings from Berkeley, where after a very busy summer of crypto and quantum fun, we’ve just kicked off our Fall 2025 programs on Complexity and Linear Algebra, and on Algorithmic Foundations for Emerging Computing Technologies.

Sum-of-squares spectral amplification (SOSSA) is a new method for compiling efficient block encodings that exploits the low energy of the initial state and relies on sum-of-squares optimization. This talk by Caltech graduate student Robbie King describes the ideas behind the new technique, and in particular how sum-of-squares optimization connects to Hamiltonian simulation and phase estimation.

SP1 Hypercube is a new multilinear-based proof system for proving the correctness of programs written in a high-level programming language. In his recent talk at the Summer 2025 Cryptography program's workshop on Proofs, Ron Rothblum (Succinct) gave an overview of how such real-world proof systems work, focusing on a key novel component in Hypercube: the jagged polynomial commitment scheme.

In Spring 2025, the Simons Institute hosted a workshop on LLMs, Cognitive Science, Linguistics, and Neuroscience. In this episode of Polylogues, Spring 2025 Science Communicator in Residence Christoph Drösser sits down with one of the workshop’s organizers and presenters, psychology and neuroscience professor Steven Piantadosi (UC Berkeley).

During her talk at the Simons Institute’s workshop on The Future of Language Models and Transformers, Azalia Mirhoseini of Stanford University and Google DeepMind suggested that even small LLMs might “know” more than is obvious at first and can be made to answer questions correctly given enough compute. This theme — about LLMs and the knowledge they contain — played out in other talks in the same workshop, with speakers arguing that LLMs not only know, but also know that they know — an ability that can loosely be called metacognition.

Flip a coin — you get a 0 or 1. Flip 50 coins — you get 50 random bits. Flip 50 coins 50 times — you get … an error-correcting code. Or so said Claude Shannon, who came up with the concept in his seminal 1948 paper, "A Mathematical Theory of Communication." An error-correcting code is a scheme for encoding data so that errors introduced in transmission can be detected and corrected. Before Shannon, scientists assumed that the problem of noise in an unreliable communication channel could never be overcome. Shannon showed that this assumption was wrong.
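Shannon's argument relies on random codes, but the simplest concrete illustration of error correction is a repetition code: send each bit several times and decode by majority vote. The sketch below (function names are ours, for illustration only) shows how a single flipped bit per block is corrected:

```python
def encode(bits, r=3):
    """Repetition code: transmit each bit r times."""
    return [b for b in bits for _ in range(r)]

def decode(received, r=3):
    """Recover each bit by majority vote over its block of r copies."""
    return [int(sum(received[i:i + r]) > r // 2)
            for i in range(0, len(received), r)]

msg = [1, 0, 1, 1]
sent = encode(msg)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                 # noise flips one bit in transit
assert decode(sent) == msg  # the single error in its block is corrected
```

Repetition codes are wasteful — the rate shrinks as reliability grows — and Shannon's insight was that far more efficient codes exist that approach a channel's capacity.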

Greetings from Berkeley, where we are in the final weeks of an exciting summer at the Simons Institute. Our Summer Cluster on Quantum Computing wound down a few weeks back after a period of intense activity, and our summer program on Cryptography has been continually abuzz.

In this deliberately provocative two-part talk from the recent workshop on Theoretical Aspects of Trustworthy AI, Somesh Jha (University of Wisconsin) makes a case for applying a security and cryptography mindset to evaluating the trustworthiness of machine learning systems, particularly in adversarial and privacy-sensitive contexts.

AI for mathematics (AI4Math) is intellectually intriguing and crucial for AI-driven system design and verification. Formal mathematical reasoning is grounded in formal systems such as Lean, which can verify the correctness of reasoning and provide automatic feedback. This talk by Kaiyu Yang (Meta), from the Simons Institute and SLMath joint workshop on AI for Math and TCS, introduces the basics of formal mathematical reasoning, focusing on two central tasks: theorem proving (generating formal proofs for given theorem statements) and autoformalization (translating informal mathematics into formal statements).

Decision problems about infinite groups are typically undecidable, but many are semidecidable given an oracle for the word problem. One such problem is whether a group is a counterexample to the Kaplansky unit conjecture for group rings. In this talk from the workshop AI for Mathematics and Theoretical Computer Science, Giles Gardam (University of Bonn) presents the mathematical context and content of the unit conjecture, and explains how viewing the problem as an instance of the Boolean satisfiability problem (SAT) and applying SAT solvers shows that it is solvable not just in theory but also in practice.
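As a toy illustration of the SAT workflow the talk describes — encode a question as a Boolean formula, then ask a solver whether it is satisfiable — here is a minimal brute-force satisfiability check in Python. Gardam's actual encoding is vastly larger and uses industrial SAT solvers; this sketch, with a made-up example formula, only shows the basic shape of the problem:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively search for a satisfying assignment of a CNF formula.

    Clauses follow the DIMACS convention: the integer k means
    variable k is true, and -k means variable k is false.
    """
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None  # unsatisfiable

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
assert brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3) == (False, True, True)
# (x1) and (not x1) has no satisfying assignment
assert brute_force_sat([[1], [-1]], 1) is None
```

Real solvers replace this exponential search with conflict-driven clause learning, which is what makes encodings with millions of clauses — like the one behind the unit-conjecture counterexample — tractable in practice.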