Sum-of-squares spectral amplification (SOSSA) is a new method for compiling efficient block encodings that exploits the low energy of the initial state and relies on sum-of-squares optimization. In this talk, Caltech graduate student Robbie King describes the ideas behind the technique, in particular how sum-of-squares optimization connects to Hamiltonian simulation and phase estimation.
SP1 Hypercube is a new multilinear-based proof system for proving the correctness of programs written in a high-level programming language. In his recent talk at the Summer 2025 Cryptography program's workshop on Proofs, Ron Rothblum (Succinct) gave an overview of how such real-world proof systems work, focusing on a key novel component of Hypercube: the jagged polynomial commitment scheme.
In Spring 2025, the Simons Institute hosted a workshop on LLMs, Cognitive Science, Linguistics, and Neuroscience. In this episode of Polylogues, Spring 2025 Science Communicator in Residence Christoph Drösser sits down with one of the workshop’s organizers and presenters, psychology and neuroscience professor Steven Piantadosi (UC Berkeley).
During her talk at the Simons Institute's workshop on The Future of Language Models and Transformers, Azalia Mirhoseini of Stanford University and Google DeepMind suggested that even small LLMs might “know” more than is obvious at first, and can be made to answer questions correctly given enough compute. This theme of LLMs and the knowledge they contain recurred in other talks at the workshop, with speakers arguing that LLMs not only know, but also know that they know, an ability that can loosely be called metacognition.