Abstract
We investigate algorithms that learn to run faster: given a series of instances of a computational problem, these algorithms use characteristics of past instances to sharpen their performance on new instances. We have found such *self-improving* algorithms for sorting a list of numbers, and for some problems on planar point sets: computing Delaunay triangulations, coordinate-wise maxima, and convex hulls. Under the assumption that input instances are random variables, each algorithm begins with a training phase during which it adjusts itself to the distribution of the input instances; this is followed by a stationary regime in which the algorithm runs in its optimized version. In this setting, any algorithm must take expected time proportional to the input size *n* and to the entropy *E* of the output: the input must be touched, and the output must be communicated. Our algorithms achieve O(*n* + *E*) expected running time in the stationary regime for all the problems mentioned except convex hulls, where O(*n* log log *n* + *E*) expected time is needed.
This work was done between 2008 and 2012 by Nir Ailon, Bernard Chazelle, Ding Liu, Wolfgang Mulzer, C. Seshadhri, and the speaker.
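To give a flavor of the two-phase scheme, here is a minimal sketch of a self-improving sorter. All names are hypothetical; the actual algorithm builds a near-optimal search structure per input coordinate to reach the O(n + E) bound, whereas this sketch uses a single set of learned bucket boundaries with plain binary search.

```python
import bisect

def train(instances):
    """Training phase: learn bucket boundaries from sample instances.

    The boundaries are quantiles of the pooled training data, so that in
    the stationary regime each bucket receives O(1) elements in expectation.
    (The published algorithm also builds a per-coordinate search tree
    tuned to each input position's distribution; omitted here.)
    """
    n = len(instances[0])
    pooled = sorted(x for inst in instances for x in inst)
    step = len(pooled) / n
    # n - 1 boundaries splitting the pooled sample into n equal parts
    return [pooled[int(i * step)] for i in range(1, n)]

def self_improving_sort(xs, boundaries):
    """Stationary regime: bucket each element, then sort the small buckets."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for x in xs:
        buckets[bisect.bisect_right(boundaries, x)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # buckets are range-disjoint and tiny
    return out
```

Since the buckets partition the number line in order, concatenating the sorted buckets yields the sorted input; when the test distribution matches the training distribution, each bucket holds O(1) elements in expectation, so the bucket sorts take expected linear time overall.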