Abstract
The nearest neighbor search (NNS) problem is defined as follows: Given a set P of n points in some metric space (X, D), build a data structure that, given any query point q, returns a point in P that is (approximately) closest to q. In this tutorial, I will survey classic and more recent NNS data structures that are designed for the "high-dimensional" regime. In the first half, I will discuss the current state of affairs for NNS over the l_1 and l_2 distances (in particular, Locality-Sensitive Hashing (LSH) and its data-dependent counterpart), while in the second half, I will focus on NNS for non-Euclidean geometries (including some very recent developments, such as spectral partitioning for metric spaces).
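To make the LSH idea concrete, here is a minimal, self-contained sketch of one classic scheme, random-hyperplane hashing (SimHash) for angular distance. It is a toy illustration only, not any specific data structure from the tutorial; all function names and parameters are illustrative, and a practical index would use many independent hash tables rather than the single table shown here.

```python
import random


def make_hash(dim, num_bits, seed=0):
    """Sample num_bits random hyperplanes through the origin; a point's
    hash is the sign pattern of its dot products with them (SimHash).
    Points at a small angle collide with high probability."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_bits)]

    def h(p):
        return tuple(1 if sum(a * b for a, b in zip(w, p)) >= 0 else 0
                     for w in planes)

    return h


def build_index(points, h):
    """Group point indices into buckets keyed by their hash value."""
    buckets = {}
    for i, p in enumerate(points):
        buckets.setdefault(h(p), []).append(i)
    return buckets


def query(q, points, buckets, h):
    """Scan only the bucket q falls into and return the closest candidate
    (by squared l_2 distance). This can miss the true nearest neighbor,
    which is why real LSH schemes repeat with several hash tables."""
    candidates = buckets.get(h(q), [])
    if not candidates:
        return None
    def dist2(i):
        return sum((a - b) ** 2 for a, b in zip(points[i], q))
    return min(candidates, key=dist2)
```

A query touches only one bucket instead of all n points, which is the source of the sublinear query time, at the price of returning an approximate answer with some failure probability.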