Title: Decentralized Multi-Agent Learning in Queuing Systems
Abstract: Learning in multi-agent systems often poses significant challenges due to interference between agents. In particular, unlike in classical stochastic systems, the performance of an agent's action is not drawn i.i.d. from some distribution but is directly affected by the (unobserved) actions of the other agents. This is why most collaborative multi-agent learning approaches aim to globally coordinate all agents' actions to evade this interference. In this talk, we focus on bipartite queuing networks, a standard model for two-sided service systems in which N agents request service from K servers. Prior decentralized multi-agent learning approaches have the aforementioned "global coordination" flavor and therefore suffer from significant shortcomings: they are restricted to symmetric systems, their performance degrades exponentially in the number of servers, they require communication through shared randomness and unique identifiers, and they are computationally demanding. In contrast, we provide a simple learning algorithm that, when run decentrally by each agent, avoids the shortcomings of "global coordination" and achieves efficient performance in general asymmetric bipartite queuing networks, while also enjoying additional robustness properties. Along the way, we provide the first UCB-based algorithm for the centralized version of the problem, resolving an open question of Krishnasamy, Sen, Johari, and Shakkottai (NeurIPS'16 / OR'21).
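To give a flavor of the UCB-style learning mentioned above, the sketch below shows a standard UCB1 learner choosing among K servers based on binary service-success feedback. This is a minimal illustration only, not the paper's algorithm: the `UCBAgent` class, the Bernoulli success model, and all parameter names are assumptions made for this example.

```python
import math
import random


class UCBAgent:
    """Illustrative single-agent UCB1 learner over K servers.

    A hypothetical sketch: each round the agent picks a server and
    observes whether it was served (1) or not (0). The real setting
    involves queues and many interfering agents; this toy version
    treats each server as an independent Bernoulli arm.
    """

    def __init__(self, num_servers):
        self.counts = [0] * num_servers       # times each server was tried
        self.successes = [0.0] * num_servers  # successful services per server
        self.t = 0                            # total rounds so far

    def choose(self):
        self.t += 1
        # Try each server once before applying the UCB rule.
        for k, n in enumerate(self.counts):
            if n == 0:
                return k
        # UCB1 index: empirical success rate plus exploration bonus.
        ucb = [
            self.successes[k] / self.counts[k]
            + math.sqrt(2 * math.log(self.t) / self.counts[k])
            for k in range(len(self.counts))
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, server, served):
        self.counts[server] += 1
        self.successes[server] += float(served)


# Toy run with assumed per-server success probabilities.
random.seed(0)
probs = [0.2, 0.5, 0.9]
agent = UCBAgent(len(probs))
for _ in range(2000):
    k = agent.choose()
    agent.update(k, random.random() < probs[k])

# The agent's pulls concentrate on the best server over time.
best = max(range(len(probs)), key=lambda k: agent.counts[k])
print(best)
```

In the decentralized problem the feedback is not i.i.d. per server, since whether an agent is served depends on the other agents' unobserved choices; overcoming that interference without global coordination is exactly the difficulty the talk addresses.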
Paper information: The paper on which this talk is based is joint work with Daniel Freund and Wentao Weng and can be found here: https://arxiv.org/abs/2206.