
Abstract
Learning-augmented algorithms aim to combine the reliability of worst-case guarantees with the efficiency gains of machine-learned predictions, offering a promising framework for decision-making under uncertainty. A central challenge is determining how much to trust the predictions: most existing approaches rely on a single global trust parameter, failing to leverage the rich, instance-specific uncertainty estimates generated by modern predictors. This talk explores calibration—the alignment between predicted probabilities and observed frequencies—as a principled and practical way to bridge this gap. We illustrate the power of calibrated advice through two case studies: ski rental and online job scheduling. For ski rental, we design an algorithm with near-optimal prediction-dependent performance and show that, in high-variance settings, calibrated advice outperforms alternative uncertainty quantification methods. For job scheduling, calibrated predictions yield substantial performance gains over prior approaches. Experiments on real-world datasets validate our theoretical results, demonstrating the practical value of calibration in learning-augmented algorithm design. This talk is based on joint work with Judy Hanwen Shen and Anders Wikum.
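
To make the notion of calibrated advice a bit more concrete, below is a minimal, self-contained sketch (not taken from the talk or the underlying papers): a toy reliability check matching the definition of calibration above, plus a naive ski-rental decision rule that consumes a calibrated probability. The threshold logic, the crude expected-cost estimates, and the function names (`ski_rental_decision`, `expected_calibration_error`) are illustrative assumptions, not the algorithm whose guarantees the abstract describes.

```python
# Illustrative sketch only: how a calibrated probability could drive a
# simple ski-rental decision, and a toy check of calibration itself.
# All modeling choices here are assumptions for illustration.

def ski_rental_decision(p_long_season: float, buy_cost: int) -> str:
    """Choose between buying immediately and renting until the break-even day.

    p_long_season: calibrated probability that the season lasts at least
                   `buy_cost` days (i.e., buying would pay off).
    buy_cost:      cost of buying skis; renting costs 1 per day.
    """
    # Buying on day 0 always costs `buy_cost`.
    cost_buy_now = buy_cost
    # Renting until the break-even day and then buying costs at most
    # 2 * buy_cost - 1 if the season turns out long, and only the rental
    # days otherwise. A crude expected-cost proxy under the calibrated estimate:
    cost_rent_then_buy = (
        p_long_season * (2 * buy_cost - 1)        # long season: worst case
        + (1 - p_long_season) * (buy_cost / 2)    # short season: rough average rent
    )
    return "buy now" if cost_buy_now <= cost_rent_then_buy else "rent, then buy at break-even"


def expected_calibration_error(probs, outcomes, n_bins=10):
    """Toy reliability check: weighted average |predicted - observed| over probability bins."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, y))
    total, err = len(probs), 0.0
    for b in bins:
        if b:
            avg_p = sum(p for p, _ in b) / len(b)   # mean predicted probability in bin
            freq = sum(y for _, y in b) / len(b)    # observed frequency in bin
            err += len(b) / total * abs(avg_p - freq)
    return err


if __name__ == "__main__":
    print(ski_rental_decision(p_long_season=0.8, buy_cost=10))  # high confidence: buy now
    print(ski_rental_decision(p_long_season=0.2, buy_cost=10))  # low confidence: rent first
    # A well-calibrated predictor drives this gap toward 0 as data accumulates.
    print(expected_calibration_error([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 1]))
```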