
Abstract
Work in cognitive semantics suggests that human language comprehension is remarkably flexible, relying heavily on hierarchically structured background knowledge and conceptual mapping ability. In this talk I will describe evidence from my lab suggesting that metrics derived from large language models predict behavioral and neural responses to some aspects of human language quite well. I will then describe research on joke comprehension that highlights important differences between meaning processing in humans and the ‘understanding’ displayed by language models trained only on text corpora.