As part of the Algorithmic Foundations for Emerging Computing Technologies Boot Camp, David Patterson (UC Berkeley) reviews the drivers of computer architecture (Moore’s law, Dennard scaling, domain-specific architectures, the roofline performance model), as well as upcoming critical challenges (the deceleration of memory bandwidth and capacity, power, carbon footprint) and opportunities (chiplets, high-bandwidth memory, high-bandwidth flash).
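For readers who haven't encountered the roofline model, the core idea fits in a few lines: attainable performance is the minimum of a machine's peak compute rate and the product of its memory bandwidth with a kernel's arithmetic intensity (FLOPs per byte moved). Here is a minimal Python sketch; the machine numbers are illustrative, not taken from the talk.

```python
def attainable_gflops(peak_gflops: float, bandwidth_gb_s: float,
                      arithmetic_intensity: float) -> float:
    """Roofline model: performance is capped either by the compute
    'roof' or by the memory-bandwidth slope, whichever is lower.
    arithmetic_intensity is in FLOPs per byte moved from memory."""
    return min(peak_gflops, bandwidth_gb_s * arithmetic_intensity)

# Hypothetical machine: 1,000 GFLOP/s peak compute, 100 GB/s DRAM bandwidth.
# Kernels below 10 FLOPs/byte are memory-bound; above that, compute-bound.
for ai in [0.5, 2.0, 10.0, 50.0]:
    print(f"AI = {ai:5.1f} FLOPs/byte -> "
          f"{attainable_gflops(1000, 100, ai):7.1f} GFLOP/s")
```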
Recall November 6, 2024 — the day after the U.S. election. I was driving back to my home in Washington, DC, from Ohio with colleagues. I was heartbroken not because of the rebuke to my political party, but because of the accompanying rebuke to scientists and expertise in government. Just hours earlier, I had imagined a very different future. As an AI researcher and policymaker, I had dreamed about landing my AI policy priorities in legislation.
Warm greetings from Berkeley, where our Fall 2025 research programs on Complexity and Linear Algebra, and on Algorithmic Foundations for Emerging Computing Technologies, are in full swing. There is a seminar talk or reading group meeting pretty much every day, and the two programs are also discovering interesting synergies and exploring the possibility of a joint seminar series.
In his presentation in the Complexity and Linear Algebra Boot Camp, Senior Scientist Nikhil Srivastava defines the problem of approximately diagonalizing a given dense matrix, and explains two phenomena that impede the convergence of diagonalization algorithms and complicate their analysis.
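The two phenomena themselves are best heard in the talk, but one classic obstruction in this territory is easy to demonstrate: for highly non-normal matrices, the computed eigenvector basis can be severely ill-conditioned, so the similarity transform it defines leaves a residual far above machine precision. A minimal NumPy sketch (illustrative only; not the algorithms or analysis from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12

# A highly non-normal matrix: a nilpotent upper-triangular part plus a
# tiny perturbation. Diagonalization is numerically delicate here.
A = np.triu(np.ones((n, n)), k=1) + 1e-8 * rng.standard_normal((n, n))

vals, V = np.linalg.eig(A)

# How far is the similarity transform V^{-1} A V from diagonal?
D = np.linalg.solve(V, A @ V)
off_diagonal = D - np.diag(np.diag(D))

print("off-diagonal norm:", np.linalg.norm(off_diagonal))  # far above machine eps
print("cond(V):          ", np.linalg.cond(V))             # ill-conditioned basis
```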
In this episode of Polylogues, Science Communicator in Residence Lakshmi Chandrasekaran sits down with two of the senior participants in our Summer 2025 Cryptography program, Yael Tauman Kalai (MIT) and Daniele Micciancio (UC San Diego).
This July, the Simons Institute co-hosted, in collaboration with Project CETI (Cetacean Translation Initiative) and Oceankind, the fourth annual workshop on Decoding Communication in Nonhuman Species. This series of workshops brings together researchers in machine learning, signal processing, data science, linguistics, robotics, and bioacoustics to explore the challenges and current state of the art in the study of nonhuman species communication.
In his contribution to the workshop on Decoding Communication in Nonhuman Species IV, Markus Freitag (Google) surveyed the rise of LLM-driven translation and its near-human performance in high-resource languages. He emphasized, however, that the “end of the language barrier” will require more than textual training data, especially for low-resource or nonhuman languages.
In this talk from the recent workshop on Decoding Communication in Nonhuman Species IV, Adam Kalai (OpenAI) explores how to evaluate machine translation systems in the absence of ground-truth reference translations, focusing on the extreme case where only acoustic outputs are available, without contextual or visual grounding.
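To make the setting concrete, here is one standard reference-free idea — round-trip consistency — as a hedged toy sketch in Python. The word-level "translator" is a purely illustrative stand-in for a black-box MT system, and this is not the approach presented in the talk.

```python
from difflib import SequenceMatcher

# Toy word-level "translator" standing in for a black-box MT system.
# (Purely illustrative; plug a real system into translate().)
EN_TO_XX = {"the": "le", "cat": "chat", "sleeps": "dort"}
XX_TO_EN = {v: k for k, v in EN_TO_XX.items()}

def translate(text: str, table: dict) -> str:
    return " ".join(table.get(word, word) for word in text.split())

def round_trip_score(sentence: str) -> float:
    """Reference-free signal: translate out and back, then compare the
    round trip to the original. Self-consistency is necessary but not
    sufficient for quality (e.g., the identity map scores perfectly)."""
    back = translate(translate(sentence, EN_TO_XX), XX_TO_EN)
    return SequenceMatcher(None, sentence, back).ratio()

print(round_trip_score("the cat sleeps"))  # 1.0 for this toy system
```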
On August 1, 2025, Simons Institute Science Communicator in Residence Lakshmi Chandrasekaran sat down with Moni Naor, one of the participants in this summer’s research program on cryptography, for a wide-ranging discussion of Naor’s path in the field, intersections of cryptography and complexity, the cryptographic technology behind CAPTCHA, and highlights of his own research.