Abstract

Rich model classes are desirable for handling complex problems, but they may preclude familiar uniform consistency guarantees. To handle rich probabilistic model classes in a way consistent with our expectations of what data must achieve, we outlined the \emph{data-driven} consistency framework at the Information Theory and Big Data workshop. In this talk, we summarize further developments on this topic from the viewpoint of two ubiquitous machine learning tools: regularization and cross-validation.