Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games with Bandit Feedback

Abstract

We revisit the problem of learning in two-player zero-sum Markov games, focusing on developing an algorithm that is uncoupled, convergent, and rational, with non-asymptotic convergence rates. As a warm-up, we start with the case of stateless matrix games with bandit feedback, showing an $ O(T^{-1/8}) $ last-iterate convergence rate. To the best of our knowledge, this is the first result that obtains a finite last-iterate convergence rate given access to only bandit feedback. We extend our result to the case of irreducible Markov games, providing a last-iterate convergence rate of $ O(T^{-1/(9+\varepsilon)}) $ for any $ \varepsilon > 0 $. Finally, we study Markov games without any assumptions on the dynamics, and show a path convergence rate, a new notion of convergence we define, of $ O(T^{-1/10}) $. Our algorithm removes the coordination and prior-knowledge requirements of [Wei et al., 2021], which pursued the same goals as ours for irreducible Markov games. Our algorithm is related to [Chen et al., 2021, Cen et al., 2021] and also builds on the entropy regularization technique. However, we remove their requirement of communicating entropy values, making our algorithm entirely uncoupled.
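To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of an uncoupled, entropy-regularized learning loop for a toy zero-sum matrix game with bandit feedback: each player samples an action from its own mixed strategy, observes only the realized payoff, and performs an exponential-weights update on an importance-weighted loss estimate. The payoff matrix `A`, step size `eta`, and regularization weight `tau` are hypothetical choices for illustration.

```python
# Illustrative sketch only: two uncoupled players, each seeing just its own
# sampled action's payoff (bandit feedback), run entropy-regularized
# exponential-weights updates on a toy zero-sum matrix game.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [1.0, 0.0]])   # hypothetical payoff matrix (row player's loss is A[i, j])
n, m = A.shape
x, y = np.ones(n) / n, np.ones(m) / m    # current mixed strategies
eta, tau, T = 0.05, 0.01, 20000          # step size, entropy regularization weight, horizon

for t in range(T):
    i = rng.choice(n, p=x)               # row player samples an action
    j = rng.choice(m, p=y)               # column player samples an action
    loss_x, loss_y = A[i, j], -A[i, j]   # bandit feedback: only the realized payoff is observed

    # Importance-weighted loss estimates (unbiased under the players' own sampling distributions).
    ell_x = np.zeros(n); ell_x[i] = loss_x / x[i]
    ell_y = np.zeros(m); ell_y[j] = loss_y / y[j]

    # Entropy-regularized multiplicative-weights step: the tau * log(x) term pulls
    # the iterate toward the uniform distribution, a common device for obtaining
    # last-iterate (rather than only average-iterate) guarantees.
    x = x * np.exp(-eta * (ell_x + tau * np.log(np.maximum(x, 1e-12))))
    y = y * np.exp(-eta * (ell_y + tau * np.log(np.maximum(y, 1e-12))))
    x, y = x / x.sum(), y / y.sum()

print("row strategy:", np.round(x, 3), "column strategy:", np.round(y, 3))
```

Note that neither player's update uses the other's strategy or internal quantities (e.g., entropy values); the only coupling is through the sampled payoff, which is what "uncoupled" refers to in the abstract.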

Publication
The Thirty-Sixth Annual Conference on Neural Information Processing Systems (NeurIPS)