Learning in Non-Cooperative Configurable Markov Decision Processes

Giorgia Ramponi, Alberto Maria Metelli, Alessandro Concetti, and Marcello Restelli

Advances in Neural Information Processing Systems 34 (NeurIPS), 2021.

Acceptance rate: 2344/9122 (25.7%)
CORE 2021: A*   GGS 2021: A++

Abstract
The Configurable Markov Decision Process framework includes two entities: a Reinforcement Learning agent and a configurator that can modify some environmental parameters to improve the agent's performance. This presupposes that the two actors share the same reward function. What if the configurator does not have the same intentions as the agent? This paper introduces the Non-Cooperative Configurable Markov Decision Process, a setting that allows two (possibly different) reward functions for the configurator and the agent. We then consider an online learning problem in which the configurator has to find the best among a finite set of possible configurations. We propose two learning algorithms, depending on the feedback the configurator receives from the agent, that exploit the problem's structure to minimize the configurator's expected regret. While a naive application of the UCB algorithm yields a regret that grows indefinitely over time, we show that our approach suffers only bounded regret. Finally, we empirically evaluate the performance of our algorithms in simulated domains.
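For readers unfamiliar with the baseline the abstract refers to, the following is a minimal Python sketch of a naive UCB configurator over a finite set of configurations. It is illustrative only, not the paper's algorithm: the names ucb_configurator and rollout are hypothetical, and rollout stands for an assumed callback that deploys a configuration, lets the agent interact with the resulting environment, and returns the agent's realized return, normalized to [0, 1].

    import math
    import random

    def ucb_configurator(configurations, rollout, horizon, c=2.0):
        """Naive UCB baseline: treat each configuration as an arm.

        At each round the configurator deploys one configuration,
        observes the agent's realized return in [0, 1], and updates
        its empirical estimates.
        """
        n = len(configurations)
        counts = [0] * n      # times each configuration was deployed
        means = [0.0] * n     # empirical mean return per configuration

        for t in range(1, horizon + 1):
            if t <= n:
                i = t - 1     # deploy each configuration once to initialize
            else:
                # optimistic index: empirical mean plus exploration bonus
                i = max(range(n),
                        key=lambda j: means[j] + math.sqrt(c * math.log(t) / counts[j]))
            reward = rollout(configurations[i])   # agent interacts under config i
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]
        return max(range(n), key=lambda j: means[j])

    # Toy usage with hypothetical Bernoulli returns per configuration.
    if __name__ == "__main__":
        rates = {"cfg_a": 0.4, "cfg_b": 0.7, "cfg_c": 0.55}
        best = ucb_configurator(list(rates),
                                lambda cfg: float(random.random() < rates[cfg]),
                                horizon=5000)
        print("estimated best configuration:", best)

A standard UCB scheme of this form incurs regret that keeps growing with the horizon; the paper's contribution is to exploit the structure of the non-cooperative setting and the agent's feedback so that the configurator's regret stays bounded.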

[Link] [Poster] [BibTeX]

@incollection{ramponi2021learning,
    author = "Ramponi, Giorgia and Metelli, Alberto Maria and Concetti, Alessandro and Restelli, Marcello",
    title = "Learning in Non-Cooperative Configurable Markov Decision Processes",
    booktitle = "Advances in Neural Information Processing Systems 34 (NeurIPS)",
    year = "2021",
    pages = "22808--22821",
    url = "https://proceedings.neurips.cc/paper/2021/hash/c0f52c6624ae1359e105c8a5d8cd956a-Abstract.html"
}