Abstract

By the fundamental theorem of statistical learning, one can learn any finite hypothesis class H with O(log|H|) labeled examples. Alas, learning with so few examples requires storing the examples in memory, and this requires |X|^O(log|H|) memory states, where X is the set of all labeled examples.
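As a rough sanity check for this count (a back-of-the-envelope calculation, not part of the talk's claims): an algorithm that remembers m labeled examples, each drawn from X, has at most

  |X|^m = |X|^O(log|H|)   for m = O(log|H|)

distinct memory states, which is exactly the bound above.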
A natural question that arises is how many labeled examples are needed when the memory is bounded.
 
Previous work showed, using techniques such as linear algebra and Fourier analysis, that parities cannot be learned with bounded memory.
One might wonder whether there is a combinatorial condition characterizing which general classes H are unlearnable with bounded memory, analogous to the condition VCdim(H) = infinity for PAC unlearnability.
 
We show such a combinatorial condition by demonstrating a surprising connection between a hypothesis class H being "mixing" and its unlearnability with bounded memory.
More specifically, we show that if a hypothesis class H, viewed as a bipartite graph between the hypotheses in H and the labeled examples in X, is mixing, then it is unlearnable under a certain bound on the memory.
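For intuition, one standard way to formalize mixing (the exact definition and parameters used in the talk may differ) is via the bipartite expander mixing lemma: a bipartite graph G = (H, X, E) in which every hypothesis has degree d is lambda-mixing if for every subset S of H and every subset T of X,

  | e(S,T) - d|S||T|/|X| | <= lambda * sqrt(|S||T|),

where e(S,T) counts the edges between S and T; a small lambda means that every pair of large sets spans close to the number of edges one would expect at random.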
Note that the class of parities is mixing; roughly speaking, this is because its hypothesis-example consistency matrix is essentially a Hadamard matrix, whose flat singular-value spectrum yields near-optimal mixing. Additionally, as an immediate corollary, we get that most hypothesis classes are unlearnable with bounded memory, since a random hypothesis class is mixing with high probability (even though they are all learnable with unbounded memory).
Our proof technique is combinatorial in nature and very different from that of previous work.
 
This is joint work with Michal Moshkovitz.
