
In this month’s newsletter, we’re highlighting a 2015 talk by Chris Umans on some of the then-state-of-the-art approaches to bounding the matrix...

Greetings from Berkeley! We are gearing up for a busy wrap-up of the spring semester, with five back-to-back workshop weeks at the Simons Institute...

Ten years ago, researchers proved that adding full memory can theoretically aid computation. They’re just now beginning to understand the implications...

Simons Foundation International (SFI) has awarded a $25 million matching pledge to the Simons Institute for the Theory of Computing at the University of California, Berkeley, to build an ongoing stream of philanthropic revenue that will support the mission and research of the Institute.

Dear friends,
Greetings from Berkeley. At the Simons Institute, we are halfway through a vibrant semester of research and discovery. At the same time, like many of you, I am troubled and deeply saddened this week by the news of the enormous suffering and loss of life in Israel and Gaza. The series of devastating earthquakes in Afghanistan is also heart-wrenching. I’m pleased to share news from the Institute with you, but must also acknowledge that all this is happening amid a great deal of turmoil elsewhere in the world.

Is ingesting in-copyright works posted on the open internet as training data for large language models copyright infringement, or not? For this nascent industry and for researchers alike, the stakes in resolving this question could not be greater. Presented by Pamela Samuelson (Berkeley Law) as part of the Simons Institute’s workshop on Large Language Models and Transformers.

On the first day of the workshop on Large Language Models and Transformers, Alexei Efros (UC Berkeley) moderated a panel that addressed a range of topics, including the future of LLMs, memorization vs. generalization, and novelty and creativity. Featuring Sanjeev Arora (Princeton University), Chris Manning (Stanford), Yejin Choi (University of Washington), Ilya Sutskever (OpenAI), and Yin Tat Lee (University of Washington and Microsoft Research).

This month, we are highlighting three presentations from our August 2023 workshop on Large Language Models and Transformers. In this talk, OpenAI co-founder and chief scientist Ilya Sutskever presents a theory of unsupervised learning.

In July 2023, Nikhil Srivastava joined the Simons Institute as interim senior scientist. And with Shafi Goldwasser on sabbatical for the fall semester, Venkatesan Guruswami is serving as acting interim director through December 2023. The pair sat down last month to discuss their new roles, Nikhil’s research, and the intersections of math and theoretical computer science.

“The workshop atmosphere was thick with expectation and excitement,” said UC Berkeley’s Alexei Efros, comparing it to what might have been the mood at another epochal moment in scientific history — the development of quantum physics in the early 1900s. “I imagine that a gathering of physicists at the dawn of the 20th century might have felt similar — everyone sensed that something big was coming, but it wasn’t quite clear what.”

Peter Bartlett has received the UC Berkeley Chancellor’s Distinguished Service Award, in recognition of his exceptional contributions to the Simons Institute for the Theory of Computing at UC Berkeley, where he served as associate director from 2017 to 2022.

Three years ago, in August 2020, the Simons Institute co-hosted a workshop on Decoding Communication in Nonhuman Species. Looking back at the talk titles, none used the AI buzzwords that have become household terms over the last several months: ChatGPT, large language models, generative AI, chatbots. Yet the technology these terms refer to is central to the task at hand. The idea of the workshop was to apply cutting-edge methods from natural language processing, especially large language models, to animal communication. This past June, the Institute co-hosted a follow-up workshop, Decoding Communication in Nonhuman Species II. With the technology now advancing at a breathtaking pace, this second workshop gave participants an opportunity to assess how these efforts have progressed over the last three years — and whether the field has produced new tools that can help researchers understand what animals are talking about.