Abstract

It has been known for decades that a polynomial-size training sample suffices for core tasks in supervised and unsupervised learning. Many theoretical results, however, indicate that these learning tasks are computationally intractable. Can we find new algorithms to circumvent these hardness results? As case studies, we consider two fundamental classes of learning problems: learning neural networks (supervised) and learning graphical models (unsupervised). We will explore computational/statistical tradeoffs in these areas and highlight recent algorithmic successes.
