Abstract

Fast iterative methods developed in Convex Optimization play a crucial role in a number of recent breakthroughs in the design of nearly-linear-time algorithms for fundamental combinatorial problems, such as maximum flow and graph partitioning. Multiplicative Weight Updates, Gradient Descent, and Nesterov’s Method have become mainstays in the construction and analysis of fast algorithms. The power of these approaches raises a number of questions: What is the exact relation between Multiplicative Weight Updates and Gradient Descent? Why do Multiplicative Weight Updates show up in so many settings? What is the intuition behind Nesterov’s iterative algorithm that achieves the asymptotically optimal iteration count for smooth convex functions?

In this survey talk, I will provide some answers by presenting a unified framework that reveals the power and limitations of each method and provides much-needed geometric intuition. Among other insights, I will explain how Multiplicative Weight Updates are a particular instance of a dual iterative method, known as Mirror Descent, and how Nesterov’s algorithm can be naturally derived as a combination of Mirror and Gradient Descent.
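As a minimal sketch of the first correspondence, in standard mirror-descent notation (the particular setup below is assumed for illustration and is not taken from the talk): the mirror descent step over the probability simplex,
\[
x^{(t+1)} = \arg\min_{x \in \Delta_n} \Big\{ \eta \,\langle \nabla f(x^{(t)}), x \rangle + D_\Phi\big(x, x^{(t)}\big) \Big\},
\]
instantiated with the negative-entropy mirror map \(\Phi(x) = \sum_i x_i \log x_i\) (so that \(D_\Phi\) is the relative entropy), has the closed-form solution
\[
x_i^{(t+1)} \;=\; \frac{x_i^{(t)} \exp\!\big(-\eta\, \nabla_i f(x^{(t)})\big)}{\sum_j x_j^{(t)} \exp\!\big(-\eta\, \nabla_j f(x^{(t)})\big)},
\]
which is precisely the multiplicative weight update rule with step size \(\eta\).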

As an example of the application of these insights, I will also discuss their crucial role in recent progress in the nearly-linear-time solution of packing linear programs and in the construction of linear-sized sparsifiers.

The material in this talk is joint work with Zeyuan Allen-Zhu (MIT) and Zhenyu Liao (BU).
