Abstract

To build on the successes of deep learning, it is becoming increasingly important to understand the phenomena exhibited by these models, ideally through a combination of systematic experiments and theory. Central to this challenge is a better understanding of deep representations. I give an overview of Singular Vector Canonical Correlation Analysis (SVCCA), an adaptation of Canonical Correlation Analysis that directly compares latent representations across layers, training steps, and even different networks. The results highlight the difference between signal stability and noise stability in the representations, and show how networks can be representationally compressed. We also gain insights into per-layer and per-sequence-step convergence, along with differences between generalizing and memorizing networks. I next introduce a new testbed of environments for Deep Reinforcement Learning that lets us study different RL algorithms in single-agent, multi-agent, and self-play settings, and evaluate generalization in a systematic way. Finally, I show how these considerations can lead to defining more tractable prediction tasks in a healthcare setting.
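
As a concrete illustration of the representation comparison described above, the following is a minimal sketch of an SVCCA-style computation, not the thesis implementation: it assumes layer activations are stored as NumPy arrays of shape (neurons, datapoints), and the function name svcca and the 99% variance threshold are illustrative choices.

    import numpy as np

    def svcca(acts1, acts2, var_threshold=0.99):
        """Compare two activation matrices of shape (neurons, datapoints);
        returns the mean canonical correlation between their subspaces."""
        # Center each neuron's activations across datapoints.
        acts1 = acts1 - acts1.mean(axis=1, keepdims=True)
        acts2 = acts2 - acts2.mean(axis=1, keepdims=True)

        def reduce(acts):
            # SVD step: keep the top singular directions (the "signal")
            # that explain var_threshold of the variance.
            U, s, Vt = np.linalg.svd(acts, full_matrices=False)
            keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2),
                                   var_threshold) + 1
            # Activations projected onto the retained subspace.
            return np.diag(s[:keep]) @ Vt[:keep]

        red1, red2 = reduce(acts1), reduce(acts2)

        def orthonormal_rows(a):
            # Orthonormal basis for the row space of the reduced activations.
            _, _, Vt = np.linalg.svd(a, full_matrices=False)
            return Vt

        # CCA step: singular values of the product of the two orthonormal
        # bases are the canonical correlations between the subspaces.
        corrs = np.linalg.svd(orthonormal_rows(red1) @ orthonormal_rows(red2).T,
                              compute_uv=False)
        return corrs.mean()

The two stages mirror the name: an SVD pass to separate stable signal directions from noise, followed by CCA to find maximally correlated directions between the two (possibly differently sized) subspaces, making the comparison invariant to affine transformations of the neurons.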