News
 Anil's Winds of Change article_website wide

The New York Public Library sells a magnet printed with words by the American author Fran Lebowitz: “Think before you speak. Read before you think.”...

Venkat Wider Aspect Ratio

Greetings from the land of the Ohlone. As the year draws to a close, I am grateful for all of you who make up our brilliant, innovative, open-minded...

Three researchers have figured out how to craft a proof that spreads out information while keeping it perfectly secret.

News archive

The New York Public Library sells a magnet printed with words by the American author Fran Lebowitz: “Think before you speak. Read before you think.” OpenAI’s latest offering — the o1 suite of large language models — seems to be taking this appeal to heart. The models, according to OpenAI, are “designed to spend more time thinking before they respond.” This allows the models to be more effective when trying to solve complex problems that require them to reason.

In his Richard M. Karp Distinguished Lecture last month, Sasha Rush (Cornell Tech) introduced the literature related to test-time compute and model self-improvement, and discussed the expected implications of test-time scaling. This talk also briefly connected these research directions to current open-source efforts to build effective reasoning models.

In this recent talk from the workshop on Transformers as a Computational Model, Noam Brown (OpenAI) describes OpenAI’s new o1 model, an LLM trained via reinforcement learning to generate a hidden chain of thought before its response.

Greetings from the land of the Ohlone. As the year draws to a close, I am grateful for all of you who make up our brilliant, innovative, open-minded, and collaborative community, and for my wonderful colleagues at the Simons Institute who have made Calvin Lab a global home for our field.

Greetings from Berkeley, where we have a busy week ahead, including our flagship Industry Day on Thursday, November 7. And we’re pleased to announce three exciting new initiatives.

In this talk from the recent workshop on Alignment, Trust, Watermarking, and Copyright Issues in LLMs, Nicholas Carlini (Google DeepMind) introduces two attacks that cause ChatGPT to emit megabytes of data it was trained on from the public internet. In the first attack, they ask ChatGPT to emit the same word over and over ("Say 'poem poem poem...' forever") and find that this causes it to diverge, and that when it diverges, it frequently outputs text copied directly from the pretraining data. The second attack is much stronger, and shows how to break the model's alignment by exploiting a fine-tuning API, allowing the researchers to "undo" the safety fine-tuning.

We are delighted to announce an opportunity for researchers to propose workshops to be held at the Simons Institute, as part of two newly established workshop series: Goldwasser Exploratory Workshops and Breakthroughs Workshops. 

The Simons Institute will be partnering with Canadian research consortium IVADO to enhance and expand the Spring 2025 research program, Special Year on Large Language Models and Transformers, Part 2.

Three researchers have figured out how to craft a proof that spreads out information while keeping it perfectly secret.

The search for the next director of the Simons Institute for the Theory of Computing opened on October 1, 2024. Applications are due November 11.