Abstract

Learning causal representations from observational data can be viewed as the task of identifying where and how interventions were applied, which at the same time reveals information about the causal representations. Given that this learning task is a typical inverse problem, an essential issue is the establishment of identifiability results: one has to guarantee that the learned representations are consistent with the underlying causal process. Dealing with this issue generally requires appropriate assumptions. In this talk, I focus on learning latent causal variables and their causal relations, together with their relations to the measured variables, from observational data. I show which assumptions, together with instantiations of the "minimal change" principle, render the underlying causal representations identifiable across several settings. Specifically, in the i.i.d. case, identifiability benefits from appropriate parametric assumptions on the causal relations and a certain type of "minimality" assumption. Temporal dependence makes it possible to recover latent temporally causal processes from time series data without parametric assumptions, and nonstationarity further improves identifiability. I then draw the connection between recent advances in nonlinear independent component analysis and the minimal change principle. Finally, in the nonparametric setting with changing instantaneous causal relations, I show how to learn the latent variables and their changing causal relations in light of the minimal change principle.
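
As a rough illustration of the setup described above (not material from the talk itself), the following minimal Python sketch generates data of the kind at issue: latent variables with a causal relation among them, observed only through an unknown nonlinear mixing. The linear-Gaussian latent process and the particular mixing function are arbitrary stand-ins chosen for illustration; actual identifiability results rely on the stronger assumptions discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical latent causal process z1 -> z2 (linear-Gaussian here only
# for illustration; identifiability generally needs stronger assumptions,
# e.g. suitable parametric constraints or nonstationarity).
z1 = rng.normal(size=n)
z2 = 0.8 * z1 + 0.5 * rng.normal(size=n)
z = np.stack([z1, z2], axis=1)  # latent variables, unobserved

# Measured variables are a nonlinear, invertible mixing of the latents;
# this particular mixing is an assumed stand-in for the unknown function.
A = np.array([[1.0, 0.5],
              [-0.5, 1.0]])
x = np.tanh(z @ A.T) + 0.1 * (z @ A.T)  # observed data

# The inverse problem: given only x, recover z up to tolerable
# indeterminacies, together with the causal relation among the latents.
print(x.shape)  # (1000, 2)
```

The sketch only sets up the generative direction; the talk concerns the reverse direction, i.e., conditions under which z and the edge z1 -> z2 are recoverable from x alone.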
