Nicholas Carlini (Google Brain)
Several hundred papers over the last few years have proposed defenses against adversarial examples. Unfortunately, most of these defenses are quickly broken.
This talk surveys the ways in which defenses to adversarial examples have been broken in the past, and what lessons we can learn from these breaks. I begin with a discussion of common evaluation pitfalls that arise when performing the initial analysis, then provide recommendations for how we can perform more thorough defense evaluations. I conclude with a discussion of recent directions in adversarial robustness research and promising future directions for defenses.
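As a concrete illustration of the underlying phenomenon (not taken from the talk itself), the sketch below applies the fast gradient sign method (FGSM) to a hypothetical linear classifier: a small, gradient-aligned perturbation flips the prediction. The weights, input, and epsilon are made-up values chosen so the flip is visible.

```python
import numpy as np

# Hypothetical linear (logistic-style) classifier: predict class 1 iff w.x > 0.
w = np.array([1.0, -2.0, 3.0])

def predict(x):
    return int(w @ x > 0.0)

# A clean input that the model classifies as class 1.
x = np.array([0.1, 0.0, 0.05])

# FGSM: step in the direction that increases the loss. For logistic loss
# with label y = 1, the gradient w.r.t. x is proportional to -w, so the
# attack perturbs x by -eps * sign(w), staying in an L-inf ball of radius eps.
eps = 0.15
x_adv = x - eps * np.sign(w)

print(predict(x))                  # clean prediction: 1
print(predict(x_adv))              # adversarial prediction: 0
print(np.max(np.abs(x_adv - x)))   # perturbation size, bounded by eps
```

Evaluating a defense against only this single-step attack is exactly the kind of pitfall the talk warns about: a defense that blocks FGSM may still fall to stronger iterative or adaptive attacks.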