Abstract

In this short talk, I will share some recent progress on the hardness of learning shallow ReLU neural networks (ReLU-NNs) under polynomially small adversarial noise. We will present a result showing that efficiently learning a one-hidden-layer ReLU-NN under Gaussian inputs and adversarial noise is "cryptographically hard", in the sense that such a learner would imply a polynomial-time quantum algorithm for the worst-case shortest vector problem.
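To make the learning setup concrete, here is a minimal sketch of the data model referenced above: Gaussian inputs, labels from a one-hidden-layer ReLU network, and a bounded adversarial perturbation of the labels. The dimensions, the noise magnitude `eta`, and the particular noise pattern are illustrative assumptions, not parameters from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the talk).
d, k, m = 50, 5, 1000        # input dimension, hidden width, number of samples
eta = m ** (-0.5)            # stand-in for a "polynomially small" noise budget

# A fixed one-hidden-layer ReLU network, unknown to the learner.
W = rng.standard_normal((k, d)) / np.sqrt(d)   # hidden-layer weights
a = rng.standard_normal(k)                     # output-layer weights

def relu_nn(x):
    """Target function f(x) = a^T ReLU(W x)."""
    return a @ np.maximum(W @ x, 0.0)

# Gaussian inputs x_i ~ N(0, I_d) and their clean labels.
X = rng.standard_normal((m, d))
clean_labels = np.array([relu_nn(x) for x in X])

# Adversarial noise: an arbitrary bounded perturbation of the labels
# (here just a random sign pattern scaled to the budget, for illustration).
noise = eta * np.sign(rng.standard_normal(m))
y = clean_labels + noise

# The learner observes only (X, y) and must output a low-error hypothesis;
# the hardness result says no efficient algorithm can do so unless worst-case
# SVP admits a polynomial-time quantum algorithm.
```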
