Abstract
Threshold classifiers are known to be impossible to learn in the adversarial sequential setting under binary loss, despite being one of the simplest cases in statistical setups. We investigate the implications of this impossibility for other loss functions, including quadratic, logarithmic, and hinge losses, and explore ways to bypass these challenges. We revisit sequential linear regression, classification, and logistic regression in scenarios where design vectors are known in advance but unordered, adapting the pathway known for the binary case to address limitations posed by threshold hardness across various losses. Using a version of the exponential weights algorithm with data-dependent (transductive) priors, we derive regret bounds that allow unbounded norms of optimal solutions. Additionally, we show that classification regret bounds depend only on dimension and the number of rounds, rather than on specific design vectors or norms—both of which are provably unattainable without prior knowledge of design vectors.