Efficient Inverse Reinforcement Learning of Transferable Rewards

Giorgia Ramponi*, Alberto Maria Metelli*, and Marcello Restelli

ICML-21 Workshop on Reinforcement Learning Theory, 2021.

Abstract
The reward function is widely accepted as a succinct, robust, and transferable representation of a task. Typical approaches, which form the basis of Inverse Reinforcement Learning (IRL), leverage expert demonstrations to recover a reward function. In this paper, we study the theoretical properties of the class of reward functions that are compatible with the expert's behavior. We analyze how limited knowledge of the expert's policy and of the environment affects the reward reconstruction phase. Then, we examine how this error propagates to the learned policy's performance when the reward function is transferred to a different environment. We employ these findings to devise a provably efficient active sampling approach, aware of the need to transfer the reward function, that can be paired with a wide variety of IRL algorithms.
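
As background for the compatibility class mentioned in the abstract, here is a minimal sketch of the classic condition under which a reward is consistent with an expert policy, following Ng & Russell (2000); the set notation R_{pi^E} is illustrative shorthand and is not taken from the paper itself:

    % Feasible reward set (classic IRL optimality condition, Ng & Russell, 2000):
    % a reward r is compatible with the expert policy \pi^E iff \pi^E is optimal
    % under r, i.e., no action achieves a higher Q-value than the expert's action.
    r \in \mathcal{R}_{\pi^E}
      \iff
      Q_r^{\pi^E}\big(s, \pi^E(s)\big) \ge Q_r^{\pi^E}(s, a)
      \quad \forall\, s \in \mathcal{S},\ a \in \mathcal{A}.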


@inproceedings{ramponi2021efficientW,
    author = "Ramponi, Giorgia and Metelli, Alberto Maria and Restelli, Marcello",
    title = "Efficient Inverse Reinforcement Learning of Transferable Rewards",
    booktitle = "ICML-21 Workshop on Reinforcement Learning Theory",
    year = "2021",
    url = "https://lyang36.github.io/icml2021_rltheory/camera_ready/22.pdf"
}