Abstract

The magnitude of the weights of a neural network is a fundamental measure of complexity that plays a crucial role in the study of implicit and explicit regularization. Recent work has shown that, in overparameterized models, gradient descent updates asymptotically lead to solutions that implicitly minimize the ℓ2 norm of the model parameters, resulting in an inductive bias that is highly architecture dependent. To investigate the properties of learned functions, it is natural to take a function space view given by the minimum ℓ2 norm of weights required to realize a given function with a given network. We call this the “induced regularizer” of the network. In this talk, I will focus on theoretically characterizing the induced regularizer, in function space, for multi-channel linear convolutional networks. I will first introduce an SDP relaxation of the induced regularizer, which we show is tight for networks with a single input channel. Using this SDP formulation, we show that the induced regularizer is independent of the number of output channels for single-input-channel networks, and for multi-input-channel networks we show that the same independence holds given sufficiently many output channels. Moreover, we show that as the kernel size varies, the induced regularizer interpolates between a basis-invariant norm and a basis-dependent norm that promotes sparse structures in Fourier space.
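
As a rough sketch of the central object (the notation below is my own and not fixed by the talk): writing h_w for the function computed by the network with weights w collected across all layers, one natural formalization of the induced regularizer of a function f is

\[
R(f) \;=\; \min_{w} \; \|w\|_2^2 \quad \text{subject to} \quad h_w = f,
\]

with R(f) taken to be infinite if no choice of weights realizes f. The results described above characterize this quantity when h_w is a multi-channel linear convolutional network, so that f ranges over linear maps.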

Video Recording