Description

More than half a century of research in theoretical computer science has brought us a great wealth of advanced algorithmic techniques. These techniques can be combined in a variety of ways to provide us with sophisticated, often beautifully elegant algorithms. This diversity of methods is truly stimulating and intellectually satisfying. But is it also necessary?

In this talk, I will address this question by discussing one of the most fundamental, if not the most fundamental, continuous optimization techniques: the gradient descent method. I will describe how this method can be applied, sometimes in quite a surprising manner, to a number of classical algorithmic tasks, such as the maximum flow problem, the bipartite matching problem, and the k-server problem, as well as matrix scaling and balancing. The resulting perspective provides a broad, unifying view of this diverse set of problems. It has also turned out to be key to making the first progress in decades on each of these problems.
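
As a brief refresher (not part of the abstract itself), gradient descent minimizes a differentiable function f by repeatedly stepping against its gradient: x_{k+1} = x_k - eta * grad f(x_k). The minimal Python sketch below illustrates this basic update on a convex quadratic objective; the matrix A, vector b, step size, and iteration count are illustrative assumptions, not material from the talk.

    # Illustrative gradient descent sketch (assumed example, not from the talk):
    # minimize the convex quadratic f(x) = (1/2) x^T A x - b^T x,
    # whose gradient is grad f(x) = A x - b.
    import numpy as np

    def gradient_descent(A, b, step_size=0.1, iterations=200):
        """Run fixed-step gradient descent starting from the origin."""
        x = np.zeros_like(b, dtype=float)
        for _ in range(iterations):
            grad = A @ x - b          # gradient of f at the current point
            x = x - step_size * grad  # descent step: x_{k+1} = x_k - eta * grad
        return x

    # A is positive definite, so f has a unique minimizer solving A x = b
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    print(gradient_descent(A, b))    # converges toward [0.2, 0.4]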

Light refreshments will be served before the lecture at 3:30 p.m.

