Abstract

Deep learning is a powerful tool for learning to interface with open-world, unstructured environments, enabling machines to parse images and other sensory inputs, perform flexible control, and reason about complex situations. However, deep models depend on large amounts of data or experience to generalize effectively, which means that each new skill or concept takes time, and often human labeling effort, to acquire. In this talk, I will discuss how meta-learning, or learning to learn, can lift this burden by leveraging past experience on related but distinct tasks to learn new tasks quickly and efficiently. In particular, I will discuss the frontier of what meta-learning algorithms can accomplish today and the open challenges that remain to make these algorithms more practical and universally applicable. These challenges include the online meta-learning problem, in which the algorithm must become a faster learner as it learns; the problem of constructing task distributions without human supervision; and what happens when these algorithms are applied to very broad task distributions.

Video Recording