Abstract
Machine learning can benefit greatly from providing learning algorithms with pairs of contrastive training examples: typically pairs of instances that differ only slightly yet have different class labels. Intuitively, the difference between the instances serves to explain the difference in the class labels. This talk proposes a theoretical framework in which the effect of various types of contrastive examples on active learners is studied formally. The focus is on the sample complexity of learning concept classes and how it is influenced by the choice of contrastive examples. The specific concept classes we study consist either of geometric concepts or of Boolean functions. Interestingly, we reveal a connection between learning from contrastive examples and the classical model of self-directed learning.
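To make the notion concrete, here is a minimal Python sketch (an illustration of the general idea, not material from the talk): for a Boolean conjunction, a contrastive pair of instances differing in a single bit reveals a variable the target concept must depend on.

```python
# Illustrative sketch (assumption: target is a Boolean conjunction).
# A contrastive pair consists of two instances that differ in exactly
# one bit yet receive different labels; the differing bit pinpoints
# a relevant variable of the target.

def conjunction(relevant):
    """Target concept: AND of the variables indexed by `relevant`."""
    return lambda x: all(x[i] == 1 for i in relevant)

target = conjunction(relevant={0, 2})  # hypothetical target: x0 AND x2

x_pos = (1, 0, 1)  # labeled positive by the target
x_neg = (1, 0, 0)  # differs only in bit 2, labeled negative

assert target(x_pos) and not target(x_neg)

# The single differing bit "explains" the label flip: variable 2
# must be relevant to the target conjunction.
diff = [i for i in range(3) if x_pos[i] != x_neg[i]]
print(f"contrastive pair differs in variable(s) {diff}; hence relevant")
```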
(Joint work with Farnam Mansouri, Hans U. Simon, Adish Singla, and Yuxin Chen)