Large language models (LLMs) gain their encyclopedic knowledge and conversational tact by learning from an entire internet’s worth of human-generated text. But learning from language alone has shown diminishing returns. While LLMs have proved themselves to be masters of producing fluent language, their capabilities in other cognitive skills, like logic and reasoning, have lagged behind. At the Simons Institute workshop on LLMs, Cognitive Science, Linguistics, and Neuroscience last year, neuroscientists, linguists, and computer scientists came together to explore why this is the case — and how a different model, the human brain, could point the way forward.
The successes of generative AI and large language models involve both powerful observable behavior and deep internal representations of the world that they construct for their own uses. How do these internal representations work, and to what extent are they similar to or different from the representations of the world that we build as humans? In this talk, Jon Kleinberg explores these questions through the lens of generative AI, drawing on examples from game-playing, geographic navigation, and other complex tasks.
In this episode of our Polylogues web series, Simons Institute Founding Associate Director Alistair Sinclair interviews newly appointed Institute Director Venkatesan Guruswami. Their wide-ranging conversation touches on the Institute’s mission and strategy, prospects for the field in the years to come, and engagement with the global research community as well as the broader public.
Happy New Year from Berkeley, where the magnolias are already in bud and we have just welcomed the participants in our Spring 2026 research program on Federated and Collaborative Learning. In addition to the periodic workshops associated with the program, we also have upcoming workshops on various topics at the nexus of theoretical computer science and machine learning, ranging from the deployment of ML models in social systems and healthcare, to deep learning theory, to the impact of techniques developed in learning theory on the theory of computing.