Abstract

We are motivated by applications that need rich model classes to represent them, such as the set of all discrete distributions over large, countably infinite supports. Such rich classes, however, may be too complex to admit estimators that converge to the truth at a rate that can be bounded uniformly over the entire model class as the sample size increases (uniform consistency). These rich classes may still allow for estimators with pointwise guarantees, whose performance can be bounded in a model-dependent way. But the pointwise angle has a drawback of its own: estimator performance depends on the very model being estimated, and is therefore itself unknown. So even if an estimator is consistent, how well it is doing may remain unclear no matter how large the sample size.
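For concreteness, the contrast can be written out for distribution estimation under a generic loss (the loss $\ell$, estimator $\hat{q}_n$, and rates below are illustrative placeholders, not notation from the talk). Uniform consistency over a class $\mathcal{P}$ asks for a single model-independent rate,
$$\sup_{p \in \mathcal{P}} \mathbb{E}_{X^n \sim p}\!\left[\ell\bigl(p, \hat{q}_n(X^n)\bigr)\right] \le r_n, \qquad r_n \to 0,$$
while pointwise consistency only requires, for each fixed $p \in \mathcal{P}$,
$$\mathbb{E}_{X^n \sim p}\!\left[\ell\bigl(p, \hat{q}_n(X^n)\bigr)\right] \le r_n(p), \qquad r_n(p) \to 0,$$
where the rate $r_n(p)$ may depend on the unknown $p$ and is therefore itself unknown to the statistician.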

Departing from the uniform/pointwise dichotomy, we consider a new framework that modifies the world of pointwise consistent estimators: it retains as much of the richness of the model class as possible, while ensuring that all information about the unknown model needed to evaluate estimator accuracy can be derived from the data. We call this "data-derived pointwise" consistency. As we delve deeper into this formulation, we will see that data-derived consistency shifts the focus from the global complexity of the model class to the local variation of properties within it.
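One plausible way to formalize this requirement (a sketch in the spirit of the abstract, not necessarily the talk's exact definition) is to ask for an estimator $\hat{q}_n$ paired with a data-derived error certificate $\hat{R}_n$:
$$\Pr_{X^n \sim p}\!\left[\ell\bigl(p, \hat{q}_n(X^n)\bigr) \le \hat{R}_n(X^n)\right] \to 1 \quad \text{and} \quad \hat{R}_n(X^n) \xrightarrow{\;P\;} 0 \qquad \text{for every } p \in \mathcal{P},$$
so that the certificate $\hat{R}_n$, computable from the sample alone, tells the statistician how accurate the estimate is, even though no uniform rate over $\mathcal{P}$ may exist.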
