The New York Public Library sells a magnet printed with words by the American author Fran Lebowitz: “Think before you speak. Read before you think.”...
Greetings from the land of the Ohlone. As the year draws to a close, I am grateful for all of you who make up our brilliant, innovative, open-minded...
Three researchers have figured out how to craft a proof that spreads out information while keeping it perfectly secret.
Greetings from Berkeley! I’m delighted to be writing my first update to you as interim director since Shafi stepped down in August after six and a half years of visionary service.
In her presentation in the Large Language Models and Transformers, Part 1 Boot Camp, Aditi Raghunathan (CMU) addresses the root causes of numerous safety concerns and wide-ranging attacks on current large language models.
In this talk from the Modern Paradigms in Generalization Boot Camp, John Duchi (Stanford) provides an overview of the history of robust optimization, including its connections to modern machine learning through different types of robustness.
Some results mark the end of a long quest, whereas others open up new worlds for exploration. Then there are works that change our perspective and make the familiar unfamiliar once more. I am delighted to tell you about some recent developments of all three kinds.
In this episode of our Polylogues web series, Summer 2023 science communicator in residence Lakshmi Chandrasekaran interviews former Simons Institute Scientific Advisory Board member and program organizer Irit Dinur. Their wide-ranging conversation touches on Irit’s career and research, the trajectory from basic science to practice, upcoming directions for the field, and gender distribution and climate in computer science.
As part of the recent workshop on Extroverted Sublinear Algorithms, Ronitt Rubinfeld (MIT) surveyed two directions in which sublinear-time algorithms are impacting the design and use of learning algorithms.
Can machines prove theorems? Can they have mathematical ideas? In this talk from our Theoretically Speaking public lecture series, Jordan Ellenberg (University of Wisconsin–Madison) spoke about his joint work with researchers from DeepMind (which used novel techniques in machine learning to make progress on a problem in combinatorics) and charted some near-term ways that machine learning may affect mathematical practice.
This will be my final letter to you as director of the Simons Institute for the Theory of Computing, as my six-and-a-half-year term ends at the end of this month. These were years well lived. Both the Institute and I experienced challenge, growth, and ultimately a leap into the future. I am proud to have served.
On July 1, Sampath Kannan became the Simons Institute’s new associate director. A UC Berkeley alumnus (PhD 1989), Sampath is the Henry Salvatori Professor in the Department of Computer and Information Science at the University of Pennsylvania. We sat down with him to discuss his research interests, vision for his new role, and perspectives on how developments in AI are transforming the field of theoretical computer science.
For some time, I’ve argued that a common conception of AI is misguided. This is the idea that AI systems like large language and vision models are individual intelligent agents, analogous to human agents. Instead, I’ve argued that these models are “cultural technologies” like writing, print, pictures, libraries, internet search engines, and Wikipedia.