Abstract
In this tutorial talk, we will begin by reviewing several common methods for establishing lower bounds on the estimation error of statistical learning algorithms, such as the methods of Le Cam, Assouad, and Fano. We will then discuss extensions of these methods that can be used to establish lower bounds under various constraints, e.g., privacy, robustness, and computational tractability. We will present examples where the lower bounds are tight, meaning that estimators exist which provably achieve the same rates, up to constant factors.