Description
Speaker: Vidya Muthukumar
Title: No-regret learning in repeated games under realization-based feedback: Time-averaged and last-iterate convergence
 
Abstract: This talk reviews recent advances in the study of two related objectives for no-regret learning in repeated finite normal-form games (focusing mostly on the zero-sum case) — fast time-averaged convergence to Nash equilibrium, and last-iterate convergence to Nash equilibrium — from the point of view of “realization-based” feedback. We note a critical distinction between the “telepathic” feedback model, in which players observe their opponents’ mixtures, and the “realization-based” feedback model, in which players observe only their opponents’ realizations. At a high level, the realization-based model introduces extra stochasticity into the system, which makes it harder to achieve either fast rates of time-averaged convergence or last-iterate convergence.
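
To make the distinction concrete, here is a minimal sketch (in Python, with a purely illustrative loss matrix and opponent mixture that are not taken from the talk) of what the row player actually observes in a zero-sum matrix game under each feedback model:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(size=(3, 3))        # row player's loss matrix (purely illustrative)
q_t = np.array([0.5, 0.3, 0.2])     # column player's mixed strategy at round t

# Telepathic feedback: the row player observes q_t, hence the exact expected loss
# of every row action.
loss_telepathic = A @ q_t

# Realization-based feedback: the row player observes only the sampled column j_t,
# giving an unbiased but noisy estimate of the same loss vector.
j_t = rng.choice(3, p=q_t)
loss_realized = A[:, j_t]

print(loss_telepathic, loss_realized)
```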
 
We first briefly review the fast time-averaged convergence guarantees to zero-sum Nash equilibrium established for optimistic algorithms by Rakhlin and Sridharan [RS13], and explain why these fast rates do not, in general, continue to hold under realization-based feedback. Next, we introduce the class of smooth games for which [FLLST16] show fast time-averaged convergence rates under realization-based feedback, and elucidate the special properties of smooth games that make this possible. Finally, we present our recent work, which provides impossibility results for last-iterate convergence of a large class of optimal no-regret algorithms for general zero-sum games, including optimistic and data-adaptive online algorithms [MPS20].
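
As a companion to the discussion above, the following sketch runs optimistic Hedge for both players of a zero-sum matrix game under telepathic feedback; the matrix, step size, and horizon are illustrative assumptions, and this is not the exact algorithm or analysis of [RS13], [FLLST16], or [MPS20]. The time-averaged strategies are the approximate-Nash candidates, while the per-round iterates are the object of last-iterate convergence:

```python
import numpy as np

def hedge_weights(cum_loss, last_loss, eta):
    # Optimistic Hedge: weight each action by exp(-eta * (cumulative loss + predicted
    # next loss)), using the most recent loss vector as the prediction.
    z = -eta * (cum_loss + last_loss)
    z -= z.max()
    w = np.exp(z)
    return w / w.sum()

rng = np.random.default_rng(0)
A = rng.uniform(size=(4, 4))        # illustrative loss matrix: row minimizes x^T A y
n, m = A.shape
eta, T = 0.1, 5000
cum_lx, cum_ly = np.zeros(n), np.zeros(m)
last_lx, last_ly = np.zeros(n), np.zeros(m)
x_bar, y_bar = np.zeros(n), np.zeros(m)

for t in range(T):
    x = hedge_weights(cum_lx, last_lx, eta)
    y = hedge_weights(cum_ly, last_ly, eta)
    x_bar += x / T                  # time-averaged iterates
    y_bar += y / T
    # Telepathic feedback: each player sees the exact expected loss vector.
    lx, ly = A @ y, -A.T @ x
    # Realization-based feedback would instead use A[:, j] with j ~ y (and -A[i, :]
    # with i ~ x), which is unbiased for the above but injects extra stochasticity.
    cum_lx += lx
    cum_ly += ly
    last_lx, last_ly = lx, ly

# Duality gap of the time-averaged strategies (small for the averages; the per-round
# iterates x, y are what "last-iterate convergence" refers to).
gap = (x_bar @ A).max() - (A @ y_bar).min()
print("duality gap of time-averaged strategies:", gap)
```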
 
(This talk will primarily use and explain tools from discrete-time adversarial online learning and regret minimization. However, related perspectives from stochastic optimization and dynamical systems theory may be mentioned when relevant.) 
 
References:
[RS13] Alexander Rakhlin and Karthik Sridharan: “Optimization, learning, and games with predictable sequences”.
[FLLST16] Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, Eva Tardos: “Learning in games: Robustness of fast convergence”.
[MPS20] Vidya Muthukumar, Soham Phade and Anant Sahai: “On the impossibility of convergence of mixed strategies arising from optimal no-regret learning”.
 
Bio: Vidya Muthukumar is an Assistant Professor in the Schools of Electrical and Computer Engineering and Industrial and Systems Engineering at the Georgia Institute of Technology. Her broad interests are in game theory and online and statistical learning. She is particularly interested in designing learning algorithms that provably adapt in strategic environments, the intersection of algorithmic game theory and reinforcement learning, and the theory of overparameterized machine learning. Vidya received her PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley. She is the recipient of the Adobe Data Science Research Award, the Simons-Berkeley Research Fellowship (for the Fall 2020 program on "Theory of Reinforcement Learning"), and a Georgia Tech Class of 1969 Teaching Fellowship for the academic year 2021-2022.
 
==============================================================
 

Speaker: Ioannis Panageas
Title: Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games

Abstract: Potential games are arguably one of the most important and widely studied classes of normal-form games. They define the archetypal setting of multi-agent coordination, as all agent utilities are perfectly aligned with each other via a common potential function. Can this intuitive framework be transplanted to the setting of Markov games? What are the similarities and differences between multi-agent coordination with and without state dependence? We present a novel definition of Markov Potential Games (MPG) that generalizes prior attempts at capturing complex stateful multi-agent coordination. Counter-intuitively, insights from normal-form potential games do not carry over, as MPGs can include settings where some state-games are zero-sum games. In the opposite direction, Markov games in which every state-game is a potential game are not necessarily MPGs. Nevertheless, MPGs showcase standard desirable properties such as the existence of deterministic Nash policies. In our main technical result, we prove fast convergence of independent policy gradient (and its stochastic variant) to Nash policies by adapting recent gradient-dominance arguments developed for single-agent MDPs to multi-agent learning settings.
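
As an illustration of the independent-policy-gradient dynamics discussed above, here is a minimal sketch on a common-reward (identical-interest) Markov game, which is one simple special case of an MPG; the game, softmax parameterization, REINFORCE-style gradient estimates, and step size are all illustrative assumptions rather than the exact algorithm analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H, N_AGENTS = 2, 2, 10, 2     # states, actions per agent, horizon, agents

# Common reward r[s, a1, a2] and transition kernel P[s, a1, a2] -> next-state distribution.
r = rng.uniform(size=(S, A, A))
P = rng.dirichlet(np.ones(S), size=(S, A, A))        # shape (S, A, A, S)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Each agent keeps its own logits theta[i][s, a]; no agent sees the others' parameters.
theta = [np.zeros((S, A)) for _ in range(N_AGENTS)]

def run_episode(theta):
    """Sample one finite-horizon episode; return per-step (state, joint action, reward)."""
    s, traj = 0, []
    for _ in range(H):
        probs = [softmax(theta[i][s]) for i in range(N_AGENTS)]
        acts = [rng.choice(A, p=probs[i]) for i in range(N_AGENTS)]
        rew = r[s, acts[0], acts[1]]
        traj.append((s, acts, rew))
        s = rng.choice(S, p=P[s, acts[0], acts[1]])
    return traj

eta, episodes = 0.1, 2000
for _ in range(episodes):
    traj = run_episode(theta)
    returns = np.cumsum([step[2] for step in traj][::-1])[::-1]   # reward-to-go
    grads = [np.zeros_like(theta[i]) for i in range(N_AGENTS)]
    for (s, acts, _), G in zip(traj, returns):
        for i in range(N_AGENTS):
            p = softmax(theta[i][s])
            score = -p
            score[acts[i]] += 1.0                                 # grad of log softmax
            grads[i][s] += G * score
    # Independent update: each agent ascends its own REINFORCE estimate, using only
    # its own actions and the commonly observed return.
    for i in range(N_AGENTS):
        theta[i] += eta * grads[i]

print("Learned policies per state:")
for i in range(N_AGENTS):
    print([softmax(theta[i][s]).round(2) for s in range(S)])
```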

Bio: Ioannis is an Assistant Professor of Computer Science at UCI. He is interested in the theory of computation, machine learning and its interface with non-convex optimization, dynamical systems, multi-agent reinforcement learning, probability, and statistics. Before joining UCI, he was an Assistant Professor at the Singapore University of Technology and Design. Prior to that, he was an MIT postdoctoral fellow. He received his PhD in Algorithms, Combinatorics and Optimization from Georgia Tech in 2016, a Diploma in EECS from the National Technical University of Athens, and an M.Sc. in Mathematics from Georgia Tech. He is the recipient of the 2019 NRF fellowship for AI.
