Abstract

In this talk, I will present a simple reduction that demonstrates the cryptographic hardness of learning a single periodic neuron over isotropic Gaussian distributions in the presence of noise. The proposed "hard" functions, which are well-approximated by one-layer neural networks, take the general form of a univariate periodic function applied to an affine projection of the data. These functions have appeared in previous seminal works that demonstrate their hardness against gradient-based (GD) methods (Shamir '18) and Statistical Query (SQ) algorithms (Song et al. '17). Our result shows that if (polynomially) small noise is added to the labels, the intractability of learning these functions extends to all polynomial-time algorithms, beyond gradient-based and SQ algorithms, under cryptographic assumptions on the worst-case hardness of lattice problems. Furthermore, we show that for exponentially small noise, a polynomial-time algorithm based on lattice basis reduction can bypass the SQ and GD hardness. This talk is based on joint work with Min Jae Song and Joan Bruna.
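
For concreteness, a minimal sketch of such a target function (the cosine choice and the specific symbols below are illustrative, not a verbatim statement from the paper):

\[
  x \sim \mathcal{N}(0, I_d), \qquad
  y \;=\; \varphi\big(\langle w, x \rangle\big) + \xi, \qquad
  \text{e.g. } \varphi(t) = \cos(2\pi \gamma t),
\]

where $w \in \mathbb{R}^d$ is an unknown direction, $\varphi$ is a univariate periodic function, $\gamma$ sets its frequency, and $\xi$ is the label noise: polynomially small noise yields the hardness result, while exponentially small noise admits the lattice-based algorithm.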

Video Recording