Spring 2018

Learning What Works Best When

Wednesday, May 2nd, 2018, 2:00 pm – 2:40 pm


At the heart of many algorithms today is an optimization engine that learns online, providing the best decision (e.g., a ranking of objects) at any time given the partial information observed so far. Near-optimal solutions are often hard to find because the inherent combinatorial structure of these problems leads to computational bottlenecks. Submodularity, a discrete analogue of convexity, is a key property often exploited in tackling combinatorial optimization problems.

In the first part of the talk, we will focus on two computational bottlenecks involving submodular functions: convex function minimization over submodular base polytopes (e.g., the permutahedron) and movement along a line inside submodular base polytopes. For the former, we give Inc-Fix, a conceptually simple and strongly polynomial algorithm that is useful for computing Bregman projections in first-order projection-based methods such as online mirror descent. For the latter, we bound the number of iterations of the discrete Newton method, improving the running time over the state of the art by a factor of at least n^6. This is joint work with Michel Goemans and Patrick Jaillet.

In the second part of the talk, we will shift focus from learning decisions (such as rankings) to learning which heuristic for Max-Cut (and the related quadratic unconstrained binary optimization problem) performs best on given data. Using a large-scale evaluation run on cloud computing and a feature space constructed to represent inputs, we show that machine learning can be used to predict which heuristic will work best on a previously unseen problem instance. This is joint work with John Silberholz and Iain Dunning.
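As a small illustration of the submodularity property central to the first part of the talk (this sketch is not from the talk itself): a function f on subsets of a ground set V is submodular if it exhibits diminishing returns, i.e., f(S ∪ {v}) − f(S) ≥ f(T ∪ {v}) − f(T) whenever S ⊆ T and v ∉ T. The cut function of a graph, which also underlies the Max-Cut problem in the second part, is a classic submodular function. The graph and helper names below are illustrative choices, not anything defined in the abstract.

```python
# Illustrative sketch: verify the diminishing-returns property for the
# cut function of a small example graph (cut functions are submodular).
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # a small example graph
V = {0, 1, 2, 3}

def cut(S):
    """Number of edges with exactly one endpoint in S."""
    return sum((u in S) != (v in S) for u, v in edges)

def is_submodular(f, ground_set):
    """Brute-force check of f(S|{v}) - f(S) >= f(T|{v}) - f(T)
    over all nested pairs S <= T and elements v outside T."""
    subsets = [set(c)
               for r in range(len(ground_set) + 1)
               for c in combinations(sorted(ground_set), r)]
    for S in subsets:
        for T in subsets:
            if S <= T:
                for v in ground_set - T:
                    if f(S | {v}) - f(S) < f(T | {v}) - f(T):
                        return False
    return True

print(is_submodular(cut, V))  # True: graph cut functions are submodular
```

Brute-force verification like this is only feasible for tiny ground sets (2^n subsets); the point of algorithms such as those discussed in the talk is to exploit submodularity without such enumeration.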

Slides: learning-what-works-best-when.pdf (3.8 MB)