Jascha Sohl-Dickstein (Google Brain)
The success of deep learning has hinged on learned functions dramatically outperforming hand-designed functions for many tasks. However, we still train models using hand-designed parameter update rules acting on hand-designed loss functions. I will argue that these hand-designed components are typically mismatched to the desired model behavior, and that we can expect meta-learned update rules and loss functions to perform better. I will introduce meta-learned parameter update rules targeting unsupervised representation learning, semi-supervised learning, data augmentation, and supervised classification. I will show that these update rules can be made biologically plausible, allowing us to meta-learn algorithms that might be used by biological brains. I will additionally discuss common pathologies and challenges that are encountered when meta-learning an update rule, and demonstrate solutions to some of these pathologies.
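The core idea of meta-learning an update rule can be sketched in miniature: parameterize the update, run it for several inner steps on a task, and treat the final task loss as the meta-objective. The sketch below is a toy illustration only, not the method from the talk: the quadratic inner task, the two-parameter linear update rule, and the sign-based finite-difference meta-optimizer are all simplifying assumptions chosen to keep the example self-contained.

```python
import numpy as np

def inner_loss(w):
    # Toy inner task: a quadratic bowl the learned rule must minimize.
    return float(np.sum(w ** 2))

def inner_grad(w):
    return 2.0 * w

def unroll(theta, w0, steps=20):
    """Apply the parameterized update rule w <- w - (theta[0]*g + theta[1]*w)
    for `steps` inner steps; return the final inner loss (the meta-objective)."""
    w = w0.copy()
    for _ in range(steps):
        g = inner_grad(w)
        w = w - (theta[0] * g + theta[1] * w)
    return inner_loss(w)

def meta_train(w0, meta_steps=200, meta_lr=0.005, eps=1e-5):
    """Meta-learn the update-rule parameters with sign-of-gradient descent on
    the unrolled objective, estimating meta-gradients by central finite
    differences (a crude stand-in for backprop through the unrolled loop)."""
    theta = np.array([0.01, 0.0])  # start near plain gradient descent
    for _ in range(meta_steps):
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (unroll(theta + e, w0) - unroll(theta - e, w0)) / (2 * eps)
        theta -= meta_lr * np.sign(grad)
    return theta

w0 = np.array([3.0, -2.0])
theta = meta_train(w0)
before, after = unroll(np.array([0.01, 0.0]), w0), unroll(theta, w0)
print(before, after)  # the meta-learned rule reaches a far lower final loss
```

In the real setting the update rule is a neural network applied per-parameter, the inner tasks are sampled training problems, and meta-gradients are computed by backpropagating through (truncated) unrolls, which is exactly where the pathologies mentioned in the abstract, such as exploding or biased meta-gradients from long unrolls, arise.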