Abstract

We consider aggregation methods that take as input a collection of forecast-weight pairs and output a single aggregate forecast; linear pooling (i.e., weighted averaging) and logarithmic pooling (based on geometric rather than arithmetic means) are two canonical examples.  We study the problem of learning expert weights in an online setting, where expert forecasts and realized outcomes are chosen adversarially, and pursue computationally efficient learning algorithms with sublinear regret with respect to the total ex post loss incurred, as measured by some proper loss function such as squared or log loss.  We show that the feasibility of this learning problem depends on the choice of the loss function L and the aggregation method \chi.
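
For concreteness, here is a minimal Python sketch (an illustration, not code from the papers) of the two canonical pooling methods on binary-event forecasts; note that the logarithmic pool is renormalized so that the output is a probability.

```python
# A minimal sketch (illustration only) of the two canonical pooling
# methods for binary-event forecasts.
import numpy as np

def linear_pool(p, w):
    """Weighted arithmetic mean of the forecasts."""
    return float(np.dot(w, p))

def logarithmic_pool(p, w):
    """Weighted geometric mean, renormalized to be a probability."""
    p = np.asarray(p, dtype=float)
    num = np.prod(p ** w)               # geometric mean of event probabilities
    den = num + np.prod((1 - p) ** w)   # plus that of the complements
    return num / den

p = [0.6, 0.9]   # two experts' forecasts of the same event
w = [0.5, 0.5]   # weights summing to 1
print(linear_pool(p, w))        # 0.75
print(logarithmic_pool(p, w))   # ~0.786: more extreme than the linear pool
```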

We describe *QA pooling*, which is a general way to associate a proper loss function L with a corresponding aggregation method \chi_L: given expert forecasts p_1,...,p_n and weights w_1,...,w_n, \chi_L outputs the forecast p^* that minimizes the weighted average of the Bregman divergences (w.r.t. the Savage representation of L) between p^* and the p_i's.  We show that this correspondence enjoys several appealing mathematical properties, and in particular that, for any choice of experts' forecasts and a realized outcome, the loss (according to L, with aggregation method \chi_L) is a convex function of the experts' weights.  This implies that, for every bounded proper loss function L and QA pooling method \chi_L, online gradient descent learns expert weights while suffering only sublinear regret.
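
For binary forecasts this minimization has a closed form: writing g for the derivative of the (convex) Savage representation of L, the QA pool is the quasi-arithmetic mean g^{-1}(\sum_i w_i g(p_i)).  A minimal sketch (illustration only, using this notation) showing that quadratic loss recovers linear pooling and log loss recovers logarithmic pooling:

```python
# A minimal sketch (illustration only) of QA pooling for binary
# forecasts: with g the derivative of the Savage representation of L,
# the Bregman-minimizing aggregate is the quasi-arithmetic mean
#     chi_L(p; w) = g^{-1}( sum_i w_i * g(p_i) ).
import numpy as np

def qa_pool(p, w, g, g_inv):
    return g_inv(np.dot(w, g(np.asarray(p, dtype=float))))

# Quadratic (squared) loss: g is affine, so QA pooling is linear pooling.
identity = lambda p: p

# Log loss: g is the logit, so QA pooling is logarithmic pooling.
logit = lambda p: np.log(p / (1 - p))
sigmoid = lambda s: 1 / (1 + np.exp(-s))

p, w = [0.6, 0.9], [0.5, 0.5]
print(qa_pool(p, w, identity, identity))   # 0.75   (linear pool)
print(qa_pool(p, w, logit, sigmoid))       # ~0.786 (logarithmic pool)
```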

For the log loss function (which is not bounded) and the corresponding QA pooling method (namely, logarithmic pooling), we show that positive learning results are possible only under additional restrictions on the adversary.  We propose forcing the adversary to choose realized outcomes that are consistent with experts' forecasts---formally, we insist that every expert is (when viewed individually) a calibrated forecaster.  The adversary retains the power to choose essentially arbitrary correlations between experts' forecasts.  With calibrated experts, we show that online mirror descent with a suitable regularizer learns expert weights for the log loss function and logarithmic pooling method while suffering only sublinear regret.
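
As one concrete instance (a sketch assuming the negative-entropy regularizer, i.e., exponentiated-gradient updates; the "suitable regularizer" used in the paper may differ), online mirror descent on the weights under log loss with logarithmic pooling looks as follows:

```python
# A minimal sketch of online mirror descent on expert weights under log
# loss with logarithmic pooling.  Assumption for illustration: the
# negative-entropy regularizer (exponentiated-gradient updates); the
# regularizer in the paper may differ.
import numpy as np

def logit(p):   return np.log(p / (1 - p))
def sigmoid(s): return 1 / (1 + np.exp(-s))

def omd_log_pool(P, Y, eta=0.1):
    """P: T x n matrix of expert forecasts in (0,1); Y: T outcomes in {0,1}."""
    T, n = P.shape
    w = np.full(n, 1.0 / n)          # start from uniform weights
    total_loss = 0.0
    for p, y in zip(P, Y):
        x = logit(p)
        q = sigmoid(w @ x)           # logarithmic pool for this round
        total_loss += -np.log(q) if y == 1 else -np.log(1 - q)
        grad = (q - y) * x           # gradient of this round's log loss in w
        w = w * np.exp(-eta * grad)  # mirror step for negative entropy,
        w /= w.sum()                 # i.e., a multiplicative update on the simplex
    return w, total_loss

# Toy run: outcomes are drawn from expert 0's forecasts, so expert 0 is
# calibrated by construction; experts 1 and 2 are noise.
rng = np.random.default_rng(0)
P = rng.uniform(0.1, 0.9, size=(1000, 3))
Y = (rng.random(1000) < P[:, 0]).astype(int)
w, loss = omd_log_pool(P, Y)
print(w)   # the weights should shift toward expert 0
```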

Finally, we give a robust theoretical justification for the practice of *extremization*.  The goal here is to predict a random variable Y.  Extremization makes sense when E[Y] is known (the "known prior" setting), in which case it advocates making a prediction that is even farther (in the same direction) from E[Y] than the average of the experts' predictions.  This idea is intuitively justified when experts base their predictions on independent evidence---e.g., arriving at similar predictions but for different reasons.  We show that, with calibrated experts and under an informational-substitutes-type condition, extremization achieves a robust performance guarantee that is provably superior to that achieved by any prior-free aggregation method.
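
For illustration, here is a minimal sketch of one textbook form of extremization for a binary Y, summing the experts' log-odds deviations from the known prior as if their evidence were independent; this is not necessarily the exact aggregator analyzed in the paper.

```python
# A minimal sketch of extremization for a binary Y with known prior
# E[Y]: the classic independent-evidence aggregator in log-odds space.
# Illustration only; not necessarily the paper's exact aggregator.
import numpy as np

def logit(p):   return np.log(p / (1 - p))
def sigmoid(s): return 1 / (1 + np.exp(-s))

def extremize(forecasts, prior):
    # Sum the experts' log-odds deviations from the prior's log-odds.
    d = logit(np.asarray(forecasts, dtype=float)) - logit(prior)
    return sigmoid(logit(prior) + d.sum())

prior = 0.5                  # known prior E[Y]
p = [0.6, 0.65, 0.7]         # experts agree in direction, for different reasons
print(np.mean(p))            # 0.65: the average stays close to the prior
print(extremize(p, prior))   # ~0.87: farther from the prior, same direction
```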

Joint work with Eric Neyman.
Results are drawn from the following three papers: 
https://arxiv.org/pdf/2102.07081.pdf
https://arxiv.org/pdf/2202.11219.pdf
https://arxiv.org/pdf/2111.03153.pdf
