Abstract
We present a provably efficient, layer-by-layer algorithm for training deep sum-product neural networks. The algorithm comes with formal polynomial-time convergence guarantees. As with other deep architectures, the resulting network can compactly represent any function on a finite training set. We also discuss provably efficient algorithms for training other types of neural networks.
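The abstract does not spell out the algorithm itself, so the following is not the authors' method. As a loose, minimal sketch of the kind of object involved, the NumPy toy below builds a network that alternates sum (linear) stages with product stages, grows it one layer at a time, and fits only a simple readout at each step. All names (`random_sum_product_layer`, `fit_readout`) and the random-feature/ridge scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sum_product_layer(X, width, rng):
    # "Sum" stage: random linear combinations of the inputs.
    W = rng.standard_normal((X.shape[1], width)) / np.sqrt(X.shape[1])
    S = X @ W
    # "Product" stage: pairwise products of adjacent sum units.
    return S[:, ::2] * S[:, 1::2]

def fit_readout(H, y, lam=1e-3):
    # Ridge-regression readout on the accumulated features; this is
    # the only per-layer "training" in this toy sketch.
    A = H.T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y)

# Toy target: a product of two inputs, which a single
# sum-product layer can already represent.
X = rng.standard_normal((200, 4))
y = X[:, 0] * X[:, 1]

feats = [X]
for depth in range(2):  # grow the network one layer at a time
    feats.append(random_sum_product_layer(feats[-1], width=32, rng=rng))
    H = np.concatenate(feats[1:], axis=1)  # readout sees all layers so far
    w = fit_readout(H, y)
    mse = np.mean((H @ w - y) ** 2)
    print(f"layer {depth + 1}: train MSE = {mse:.4f}")
```

Each product stage roughly doubles the polynomial degree the features can express, so deeper layers capture higher-order interactions; here the degree-2 target is fit well from the first layer on. The actual paper's guarantees concern its own training procedure, not this random-feature stand-in.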