Abstract
We often analyze machine learning algorithms under the assumption that our data comes from a nice generative model. But the dirty secret is that these algorithms can sometimes overfit to those modeling assumptions in subtle and surprising ways. In this talk I will discuss examples illustrating how semi-random models offer new perspectives on questions of robustness and generalization, and how they lead to new algorithmic challenges.