Abstract
Learning in the presence of distribution shift, and analyzing when this is possible, remains a major challenge in machine learning. In this work, we discuss two new directions of analysis. The first considers the setting where the training data come from multiple subpopulations and the test data are a re-weighted mixture of these subpopulations. In this case, we show through a new line of analysis that, surprisingly, the tails of the subpopulation distributions matter. The second considers the setting where an Invariant Risk Minimization-like condition holds on the data; here, we characterize conditions on the source and target distributions under which learning is possible.