Abstract

Many applications involve non-Euclidean data, where exploiting Riemannian geometry can deliver algorithms that are computationally superior to standard nonlinear programming approaches. This observation has led to increasing interest in Riemannian methods within the optimization and machine learning communities. In this talk, we consider the problem of optimizing a function on a Riemannian manifold subject to convex constraints. We introduce Riemannian Frank-Wolfe (RFW) methods, a class of projection-free algorithms for constrained geodesically convex optimization. To understand the algorithm's efficiency, we discuss (1) its iteration complexity, (2) the complexity of computing the Riemannian gradient, and (3) the complexity of the Riemannian "linear" oracle (RLO), the subroutine at the heart of the algorithm. We complement our theoretical results with an empirical comparison of RFW against state-of-the-art Riemannian optimization methods. Joint work with Suvrit Sra (MIT).
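
As a rough sketch of the iteration scheme the abstract alludes to (the notation here is assumed, not taken from the talk: \(\mathcal{X}\) for the geodesically convex constraint set, \(\operatorname{Exp}\) for the exponential map, \(\operatorname{grad} f\) for the Riemannian gradient), one Frank-Wolfe step on a manifold follows the classical template with the Euclidean linear minimization replaced by the RLO:

\[
z_k \;=\; \operatorname*{argmin}_{z \in \mathcal{X}} \big\langle \operatorname{grad} f(x_k),\, \operatorname{Exp}_{x_k}^{-1}(z) \big\rangle
\qquad \text{(Riemannian linear oracle)}
\]
\[
x_{k+1} \;=\; \operatorname{Exp}_{x_k}\!\big( s_k\, \operatorname{Exp}_{x_k}^{-1}(z_k) \big), \qquad s_k \in [0,1],
\]

so the update moves along the geodesic from \(x_k\) toward \(z_k\), and no projection onto \(\mathcal{X}\) is ever needed, which is what makes the method projection-free.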