Abstract

Over the past five years, there has been considerable progress in understanding the tradeoffs between the amount of available memory and the number of samples required for various discrete learning problems. In the domain of continuous optimization, however, far less is known. This is despite the considerable practical importance of developing optimization algorithms that both have a small memory footprint and converge quickly (requiring few samples). We will survey the few known results and outline a number of intriguing open problems in this area.
