Let’s Stop Leaving Money on the Table | Richard M. Karp Distinguished Lecture
In many areas of machine learning, theory and practice have diverged dramatically; there is very little theory to guide our understanding of much of modern AI. Theory is useful in guiding practice when it can help explain or predict empirical performance, but even in the increasingly rare settings where substantial theory accompanies practice, that theory often makes pessimistic predictions despite demonstrated empirical success. Some of what’s called for is likely revolutionary new theory. In this Richard M. Karp Distinguished Lecture, however, Katrina Ligett (Hebrew University) explored a more conservative idea: that we sometimes approach the theory of learning in a way that leaves money on the table. In particular, she suggested that there is room to reformulate theoretical questions in machine learning in ways that help close the gaps between theory and practice, and that these reformulations can pose new mathematical challenges that are interesting in their own right.
Machine learning theory often considers worst-case computations on worst-case data distributions, but in practice we may care more about the performance of a particular computation on a particular dataset that may enjoy favorable structure. One approach to leaving less money on the table is to exploit the properties of our algorithms that lead to their success; another is instance-specific (as opposed to worst-case) analysis. Ligett illustrated these ideas with examples from her own work on generalization and on privacy in machine learning, in the hopes of surfacing perspectives that may help us all chip away at the gaps between theory and practice.
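To make the worst-case versus instance-specific contrast concrete, here is a rough illustrative sketch, not notation from the lecture itself: the algorithm $A$, sample $S$, distribution class $\mathcal{P}$, and bounds $\varepsilon$ below are all assumptions introduced for this note. A worst-case guarantee asks a single bound to hold over every distribution $D$ in the class,

$$\sup_{D \in \mathcal{P}} \; \mathbb{E}_{S \sim D^n}\big[\mathrm{err}_D(A(S))\big] \;\le\; \varepsilon(n),$$

so the bound is driven by the hardest instance in $\mathcal{P}$. An instance-specific guarantee instead lets the bound depend on the particular distribution (or dataset) actually at hand,

$$\mathbb{E}_{S \sim D^n}\big[\mathrm{err}_D(A(S))\big] \;\le\; \varepsilon(n, D) \quad \text{for each } D \in \mathcal{P},$$

so favorable structure in the instance can yield a much tighter bound than the worst case, even when the worst-case bound is pessimistic.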