Abstract

Data can be corrupted in many ways: via outliers, measurement errors, failed sensors, batch effects, and so on. Standard maximum likelihood learning will either reproduce these errors or fail to converge entirely. Given this, what learning objectives should we use instead? I will present a general framework for studying robustness to different families of errors in the data, and use this framework to provide guidance on designing error-robust estimators.
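As a minimal sketch of the fragility the abstract refers to (not taken from the talk itself), the snippet below contrasts the sample mean, which is the maximum likelihood estimate of a Gaussian mean, with the median, a classic robust alternative, on data where a few sensor readings are corrupted. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean data: 95 samples from N(0, 1).
clean = rng.normal(loc=0.0, scale=1.0, size=95)

# Corruption: 5 readings from a hypothetical failed sensor stuck at 100.
corrupted = np.full(5, 100.0)

data = np.concatenate([clean, corrupted])

# The MLE under a Gaussian model (the sample mean) is dragged toward the
# corrupted values; the median barely moves.
print(f"MLE (sample mean): {data.mean():.2f}")
print(f"Robust (median):   {np.median(data):.2f}")
```

With 5% of readings stuck at 100, the mean shifts by roughly 5 while the median stays near 0, which is exactly the kind of behavior an error-robust estimator is designed to avoid.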
