Abstract

Existing multimodal models typically have custom architectures that are designed for specific modalities (image->text, text->image, text only, etc.). In this talk, I will present our recent work on a series of early-fusion mixed-modal models trained on arbitrary mixed sequences of images and text. I will discuss and contrast two model architectures, Chameleon and Transfusion, that make very different assumptions about how to model mixed-modal data, and argue for moving from a tokenize-everything approach to newer models that are hybrids of autoregressive transformers and diffusion. I will also cover recent efforts to better understand how to more stably train such models at scale without excessive modality competition, using a mixture-of-transformers technique. Together, these advances lay a possible foundation for universal models that can understand and generate data in any modality, and I will also sketch some of the steps we still need to take to reach this goal.