Abstract

This paper studies few-shot learning via representation learning, where one uses $T$ source tasks with $n_1$ data per task to learn a representation in order to reduce the sample complexity of a target task for which there is only $n_2 \ (\ll n_1)$ data. Specifically, we focus on the setting where there exists a good \emph{common representation} between source and target, and our goal is to understand how much of a sample size reduction is possible. First, we study the setting where this common representation is low-dimensional and provide a fast rate of $O\big(\frac{\mathcal{C}(\Phi)}{n_1 T} + \frac{k}{n_2}\big)$; here, $\Phi$ is the representation function class, $\mathcal{C}(\Phi)$ is its complexity measure, and $k$ is the dimension of the representation. When specialized to linear representation functions, this rate becomes $O\big(\frac{dk}{n_1 T} + \frac{k}{n_2}\big)$, where $d \ (\gg k)$ is the ambient input dimension; this is a substantial improvement over the rate without using representation learning, i.e., over the rate of $O\big(\frac{d}{n_2}\big)$. Second, we consider the setting where the common representation may be high-dimensional but is capacity-constrained (say, in norm); here, we again demonstrate the advantage of representation learning in both high-dimensional linear regression and neural network learning. Our results demonstrate that representation learning can fully utilize all $n_1 T$ samples from the source tasks.
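
To make the setting concrete, one natural two-phase estimator (written here for squared loss; the symbols $\hat{\phi}$, $\hat{w}_t$, and the target-task index $T+1$ are introduced for illustration and are not quoted from the body of the paper) first learns the representation by joint empirical risk minimization over the source tasks and then fits only a $k$-dimensional head on the $n_2$ target samples:
\begin{align*}
\big(\hat{\phi}, \hat{w}_1, \dots, \hat{w}_T\big) &\in \operatorname*{arg\,min}_{\phi \in \Phi,\; w_1, \dots, w_T \in \mathbb{R}^k} \ \sum_{t=1}^{T} \sum_{i=1}^{n_1} \big(y_{t,i} - w_t^\top \phi(x_{t,i})\big)^2, \\
\hat{w}_{T+1} &\in \operatorname*{arg\,min}_{w \in \mathbb{R}^k} \ \sum_{i=1}^{n_2} \big(y_{T+1,i} - w^\top \hat{\phi}(x_{T+1,i})\big)^2 .
\end{align*}
In the linear case $\phi(x) = B^\top x$ with $B \in \mathbb{R}^{d \times k}$, the first phase uses all $n_1 T$ source samples to estimate the roughly $dk$ parameters of $B$, while the second phase only needs the $n_2$ target samples to estimate a $k$-dimensional head, which is one way to read the two terms in the rate above.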