Compatible Reward Inverse Reinforcement Learning

Alberto Maria Metelli, Matteo Pirotta, and Marcello Restelli

Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA. Acceptance rate: 678/3240 (20.9%)

Abstract
Inverse Reinforcement Learning (IRL) is an effective approach to recover a reward function that explains an expert's behavior from a set of observed demonstrations. This paper presents a novel model-free IRL approach that, unlike most existing IRL algorithms, does not require specifying a function space in which to search for the expert's reward function. Leveraging the fact that the policy gradient must be zero for any optimal policy, the algorithm generates a set of basis functions that span the subspace of reward functions making the policy gradient vanish. Within this subspace, using a second-order criterion, we search for the reward function that most penalizes deviations from the expert's policy. After introducing our approach for finite domains, we extend it to continuous ones. The proposed approach is empirically compared to other IRL methods in the (finite) Taxi domain and in the (continuous) Linear Quadratic Gaussian (LQG) and Car on the Hill environments.
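
As a rough sketch of the first step the abstract describes: once the per-feature policy gradients are estimated and stacked into a Jacobian, the reward functions that make the expert's policy gradient vanish are exactly the null space of that matrix. The snippet below illustrates this with NumPy; the Jacobian here is a random stand-in, and all names and shapes are hypothetical rather than the authors' reference implementation.

    import numpy as np

    # Hypothetical sketch: G[i, j] is the estimated derivative of the
    # expected return under the j-th reward basis feature with respect to
    # the i-th policy parameter. Any weight vector w with G @ w = 0 gives
    # a reward for which the expert's policy gradient vanishes.
    rng = np.random.default_rng(0)
    n_params, n_features = 5, 8                   # illustrative sizes
    G = rng.normal(size=(n_params, n_features))   # stand-in for the estimate

    # Null-space basis via SVD: the right-singular vectors whose singular
    # values are numerically zero span {w : G w = 0}.
    _, s, Vt = np.linalg.svd(G)
    tol = max(G.shape) * np.finfo(float).eps * s[0]
    rank = int((s > tol).sum())
    null_basis = Vt[rank:].T                      # one column per basis reward

    assert np.allclose(G @ null_basis, 0.0, atol=1e-8)
    print(null_basis.shape)                       # (n_features, n_features - rank)

The paper's second-order criterion would then select, within the span of these columns, the reward that most penalizes deviations from the expert's policy; that selection step is not shown here.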

[Paper] [Poster] [Code] [BibTeX]

@inproceedings{metelli2017compatible,
    author = "Metelli, Alberto Maria and Pirotta, Matteo and Restelli, Marcello",
    editor = "Guyon, Isabelle and von Luxburg, Ulrike and Bengio, Samy and Wallach, Hanna M. and Fergus, Rob and Vishwanathan, S. V. N. and Garnett, Roman",
    title = "Compatible Reward Inverse Reinforcement Learning",
    booktitle = "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, {USA}",
    pages = "2047--2056",
    year = "2017",
    url = "http://papers.nips.cc/paper/6800-compatible-reward-inverse-reinforcement-learning"
}