Abstract
Denoising diffusion models have led to a series of breakthroughs in image and video generation. In this talk, I will explore some of the connections between diffusion models and physics. Rooted in non-equilibrium thermodynamics, diffusion models enable a variety of extensions by lifting them into augmented spaces, encompassing position, momentum, and potentially additional auxiliary variables. This viewpoint gives rise to a “complete recipe” for constructing invertible diffusion processes, as well as new samplers that significantly reduce the number of sampling steps at test time. Additionally, thermodynamic processes offer a natural playground for generative AI. I will demonstrate how video diffusion models can effectively downscale precipitation patterns to finer scales, capturing extreme event statistics and local geographical patterns.
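The augmented-space idea mentioned above can be illustrated with a toy forward process: a position–momentum (underdamped Langevin) diffusion simulated with Euler–Maruyama steps. This is a generic sketch, not the specific construction from the talk; the function name, parameter values, and damping choice are all illustrative assumptions.

```python
import numpy as np

def augmented_forward_diffusion(x0, n_steps=1000, dt=0.01, gamma=2.0, seed=0):
    """Toy Euler-Maruyama simulation of an underdamped Langevin forward
    diffusion in a (position, momentum) augmented space.

    Illustrative sketch only -- the step sizes, damping gamma, and the
    quadratic potential are assumptions, not the talk's actual recipe.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    v = np.zeros_like(x0)  # auxiliary momentum variable, started at zero
    for _ in range(n_steps):
        # Hamiltonian coupling: position is driven by momentum ...
        x_new = x + v * dt
        # ... while momentum feels a restoring force, friction, and noise
        # (noise enters only through the momentum channel).
        v = (v - x * dt - gamma * v * dt
             + np.sqrt(2.0 * gamma * dt) * rng.standard_normal(x.shape))
        x = x_new
    return x, v

# "Data": a sharply bimodal 1-D distribution.
x0 = np.concatenate([np.full(5000, -3.0), np.full(5000, 3.0)])
xT, vT = augmented_forward_diffusion(x0)
# After enough steps the joint (x, v) relaxes toward a standard Gaussian,
# the tractable reference distribution a sampler would invert from.
print(xT.mean(), xT.std(), vT.std())
```

Because the noise is injected only through the momentum, the position trajectories stay smooth, which is one reason such augmented ("critically damped") processes can admit samplers with fewer steps.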