Abstract

Real-time decision-making is often challenging because today's decisions may negatively affect the future by steering the underlying system into an undesirable state. There are two prominent ways to plan against such undesired outcomes. On the one hand, the main machine learning and computer systems approach is to extrapolate patterns in the data and obtain useful predictions about the future. On the other hand, theoretically competitive online algorithms hedge their decisions against any possible future event to achieve robust worst-case performance guarantees. Both approaches, however, have important shortcomings: the first often lacks principled guarantees and can be prone to errors or adversarial examples, while the second is typically tailored to the worst case and, as a result, cannot meaningfully exploit existing patterns in the data.

In this talk, I will tackle this mismatch by aiming for the best of both worlds: using the predictive power of machine learning while guaranteeing the worst-case robustness of online algorithms. I will focus on the paging (or caching) problem, which exemplifies the mismatch: we have (a) many useful predictions from heuristics or deep learning that come with weak theoretical guarantees, and (b) pessimistic algorithms with robust worst-case guarantees. I will show how to slightly adapt a classical competitive online algorithm (Marker) to incorporate predictions so that it gains leverage when they are relatively accurate while retaining worst-case competitiveness. This is a surprisingly powerful technique that, as a side benefit, can robustify classical heuristics such as LRU, which tend to perform well in practice despite lacking nice worst-case guarantees.
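
To make the idea concrete, here is a minimal Python sketch of a Marker-style cache whose eviction rule is guided by a next-request predictor. The names (PredictiveMarker, predict_next) are hypothetical, and the randomization that underlies the worst-case guarantee in the actual result is omitted for brevity; this is an illustration under those assumptions, not the algorithm from the talk verbatim.

```python
import random


class PredictiveMarker:
    """Sketch of a Marker-style cache guided by next-request predictions.

    Hypothetical illustration only: the interface is an assumption, and the
    randomization that yields the worst-case guarantee in the actual result
    is omitted for brevity.
    """

    def __init__(self, capacity, predict_next=None):
        self.capacity = capacity          # cache size k
        self.cache = set()                # pages currently in the cache
        self.marked = set()               # pages marked during this phase
        self.predict_next = predict_next  # page -> predicted next request time

    def request(self, page):
        """Serve one request; return True on a hit, False on a miss."""
        if page in self.cache:
            self.marked.add(page)
            return True

        if len(self.cache) >= self.capacity:
            unmarked = self.cache - self.marked
            if not unmarked:
                # Every cached page is marked: a new phase starts,
                # so unmark everything (classical Marker phase reset).
                self.marked.clear()
                unmarked = set(self.cache)
            if self.predict_next is not None:
                # Follow the advice: evict the unmarked page whose next
                # request is predicted to be furthest in the future.
                victim = max(unmarked, key=self.predict_next)
            else:
                # No predictor: fall back to classical randomized Marker,
                # evicting a uniformly random unmarked page.
                victim = random.choice(list(unmarked))
            self.cache.discard(victim)

        self.cache.add(page)
        self.marked.add(page)
        return False
```

On its own, always trusting the predictor can be badly misled by inaccurate predictions; the robustness discussed above comes from carefully randomizing among eviction candidates, which this sketch leaves out.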