Abstracts
Gasper Begus: LLMs as Linguists: Using Linguistic Formalism as a Window into LLMs' Metacognitive Abilities
Jia Xu: Infinite Reservoir Transformer
Fereshte Khani: Collaborative Development of NLP Models
Weijie Su: Reward Collapse in Aligning Large Language Models
Song Mei: Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection
Bob Carpenter: Soft RLHF Training with Probabilistic Rating Models