Abstract

Recent work has demonstrated that, for machine learning problems, modern stochastic optimization techniques such as stochastic dual coordinate ascent (SDCA) have significantly better convergence behavior than traditional optimization methods. In this talk I will present a broad view of this class of methods, including some new algorithmic developments. I will also discuss algorithms and practical considerations for their parallel implementation.
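
As background (not part of the abstract itself): SDCA maintains one dual variable per training example and, at each step, maximizes the dual objective over a single randomly chosen coordinate while keeping the primal iterate w = X^T α / (λn) in sync. A minimal sketch for the ridge-regression case, following the standard formulation of Shalev-Shwartz and Zhang (2013), is shown below; the function name, parameters, and synthetic data are illustrative assumptions, not material from the talk.

```python
import numpy as np

def sdca_ridge(X, y, lam=0.1, n_epochs=20, seed=0):
    """Illustrative SDCA sketch for ridge regression:
    min_w (1/n) * sum_i 0.5 * (x_i @ w - y_i)**2 + (lam/2) * ||w||^2.
    Not the speaker's implementation; a textbook-style example only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)  # dual variables, one per training example
    w = np.zeros(d)      # primal iterate, kept equal to X.T @ alpha / (lam * n)
    sq_norms = np.einsum("ij,ij->i", X, X)  # precomputed ||x_i||^2
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            # Closed-form coordinate maximization of the dual for squared loss.
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]
    return w

# Tiny usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    w_true = rng.standard_normal(5)
    y = X @ w_true + 0.01 * rng.standard_normal(200)
    print("recovered weights:", np.round(sdca_ridge(X, y, lam=0.01), 2))
```

Because each update touches only one example and one dual coordinate, updates on different examples can in principle be run concurrently, which is what makes the parallel implementations mentioned in the abstract attractive.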
