Handling Non-Stationary Experts in Inverse Reinforcement Learning: A Water System Control Case Study

Amarildo Likmeta, Alberto Maria Metelli, Giorgia Ramponi, Andrea Tirinzoni, Matteo Giuliani, and Marcello Restelli

Challenges of Real-World RL Workshop @ NeurIPS 2020, 2020.

Abstract
One of the challenges of applying Reinforcement Learning (RL) in real-world scenarios is the absence of a formalized reward signal, especially in the presence of multiple, possibly conflicting, objectives. However, observational data for many real systems are now available, providing demonstrations from experts (e.g., human operators) that can be used in Inverse Reinforcement Learning (IRL) to formalize the observed task in an RL fashion. In this paper, we address the problem of inferring the preferences underlying the historical operation of Lake Como. In this case study, no interaction with the environment is allowed, and only a fixed dataset of demonstrations is available. Moreover, the expert is non-stationary, since its intentions change over the decades as it is exposed to changing external forces. For this reason, we propose an extension of the batch model-free algorithm Σ-GIRL to the non-stationary case. For the Lake Como scenario, we provide a formalization, experiments, and a discussion to interpret the obtained results.

[Link] [Talk] [BibTeX]

@inproceedings{likmeta2020handling,
    author = "Likmeta, Amarildo and Metelli, Alberto Maria and Ramponi, Giorgia and Tirinzoni, Andrea and Giuliani, Matteo and Restelli, Marcello",
    title = "Handling Non-Stationary Experts in Inverse Reinforcement Learning: A Water System Control Case Study",
    booktitle = "Challenges of Real-World RL Workshop @ NeurIPS 2020",
    year = "2020",
    url = "https://drive.google.com/file/d/1v3CiRlWtOVJZry15DQdxzeh98UoNAWbA/view"
}