Abstract

Conventional wisdom in machine learning taboos training on the test set, interpolating the training data, and optimizing to high precision. This talk will present evidence that this conventional wisdom is wrong. I will additionally highlight commonly overlooked phenomena that imperil the reliability of current learning systems: surprising sensitivity to how data is generated, and significant diminishing returns in model accuracy as compute resources increase. I will close with a discussion of new best practices to mitigate these effects, which are critical for truly robust and reliable machine learning.