Monotonicity and agnostic learning
 
Uniform-distribution learning of simple Boolean functions, such as DNF formulas or decision trees, from random examples remains an open problem despite intensive study. Efficient and simple algorithms are known, however, for learning the monotone versions of these classes: for example, O'Donnell and Servedio (2007) gave an efficient algorithm for learning monotone decision trees. In these cases, monotonicity supplies additional structural properties that learning algorithms can exploit. Other structural properties useful in uniform-distribution learning, such as having low-degree or sparse approximators, can be leveraged even in the agnostic setting. Does the advantage conferred by monotonicity also transfer to the agnostic setting? We give a superpolynomial SQ lower bound for agnostically learning even monotone ω(1)-juntas (and hence monotone decision trees). Our question reduces to the following: how large can the correlation be between balanced Boolean functions f and g when f is monotone and g has no nonzero Fourier coefficient of degree less than d? We give a lower bound of 1 - o(1) for d = Ω̃(2^√(log n)).
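
To make the central question concrete, here is a minimal brute-force Python sketch (not part of the talk) that computes, for tiny n, the correlation between a monotone balanced function and a function with no low-degree Fourier mass. The choice of 3-bit majority and 3-bit parity is our own illustration, not a construction from the result: Maj3 is monotone and balanced, parity has no nonzero Fourier coefficient of degree below 3, and their correlation is -1/2.

# Brute-force Fourier sketch over the hypercube {-1,1}^n (illustrative only).
from itertools import combinations, product
from math import prod

def fourier_coefficient(f, S, n):
    # f^(S) = E_x[f(x) * chi_S(x)] with x uniform over {-1,1}^n,
    # where chi_S(x) = prod_{i in S} x_i.
    return sum(f(x) * prod(x[i] for i in S)
               for x in product((-1, 1), repeat=n)) / 2 ** n

def max_low_degree_coefficient(f, d, n):
    # Largest |f^(S)| over all S with |S| < d; a value of zero means f has
    # no nonzero Fourier coefficient of degree below d (and is balanced,
    # since the empty set S = {} is included).
    return max(abs(fourier_coefficient(f, S, n))
               for k in range(d) for S in combinations(range(n), k))

def correlation(f, g, n):
    # E_x[f(x) * g(x)] with x uniform over {-1,1}^n.
    return sum(f(x) * g(x) for x in product((-1, 1), repeat=n)) / 2 ** n

n = 3
maj3 = lambda x: 1 if sum(x) > 0 else -1      # monotone, balanced
parity3 = lambda x: x[0] * x[1] * x[2]        # all Fourier weight at degree 3

print(max_low_degree_coefficient(parity3, 3, n))  # 0.0
print(correlation(maj3, parity3, n))              # -0.5

The question in the abstract asks how close this correlation can get to 1 as n grows, while the resilience parameter d is also allowed to grow.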
 
Joint work with Dana Dachman-Soled, Vitaly Feldman, Li-Yang Tan, and Karl Wimmer.
