Abstract

In the worst-case analysis of algorithms, the overall performance of an algorithm is summarized by its worst performance on any input.  This approach has countless success stories, but there are also important computational problems --- like linear programming, clustering, and online caching --- where the worst-case analysis framework does not provide any helpful advice on how to solve the problem.  This tutorial covers a number of modeling methods for going beyond worst-case analysis and articulating which inputs are the most relevant.
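 
In symbols (the notation here is generic and purely illustrative), worst-case analysis scores an algorithm A on inputs of size n by its performance on the single worst input of that size:

$$\mathrm{cost}_{wc}(A, n) \;=\; \max_{z \,:\, |z| = n} \mathrm{cost}(A, z),$$

so an algorithm is judged only by its worst input, no matter how rare such inputs are in practice.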
 
The first part of the tutorial focuses on deterministic models --- well-motivated conditions on inputs that explain when heuristics work well.  It explains why "clustering is hard only when it doesn't matter," and in particular covers the notions of approximation and perturbation stability, with applications to the k-median and multiway cut problems.  It also touches on the field of sparse recovery --- specifically, compressive sensing, l1-minimization, and the RIP condition --- to highlight the strong similarities with the other topics covered.
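 
As a quick reminder of the sparse recovery setup (the symbols A, b, and x below are generic placeholders, not the tutorial's own notation), the l1-minimization program in question, often called basis pursuit, is

$$\min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{subject to} \quad Ax = b,$$

and a standard result states that when the measurement matrix A satisfies the restricted isometry property (RIP), this convex program exactly recovers any sufficiently sparse signal x from the measurements b = Ax, much as stability conditions certify that natural heuristics recover the intended clustering.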
 
The second part of the tutorial covers probabilistic models, which make some type of distributional assumption. We pass over traditional average-case analysis, which assumes a known input distribution, in favor of more robust performance guarantees.  Specific topics include: planted and semi-random models for clique and bisection; smoothed analysis, with applications to linear programming and local search; a survey of additional "hybrid" analysis models that interpolate between worst- and average-case analysis; and the use of "average-case thought experiments" to generate novel deterministic input restrictions (like the RIP condition and graph classes for social networks) and performance benchmarks (as in no-regret online learning and prior-free auction design).
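 
To illustrate one of these models in symbols (again with generic, illustrative notation), the smoothed complexity of an algorithm A is its worst expected cost over randomly perturbed inputs:

$$\mathrm{cost}_{sm}(A, n, \sigma) \;=\; \max_{z \,:\, |z| = n} \;\mathbb{E}_{g}\!\left[\mathrm{cost}\big(A,\, z + \sigma g\big)\right],$$

where g is, for example, a vector of independent Gaussian perturbations whose magnitude is controlled by sigma. The flagship result of this kind is that the simplex method for linear programming, with a suitable pivot rule, has polynomial smoothed complexity despite its exponential worst-case running time.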
 

The first session of this mini-course will take place on Thursday, August 25, 2016, from 9:30 am to 10:30 am.
