Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization

Pierre Liotet, Francesco Vidaich, Alberto Maria Metelli, and Marcello Restelli

The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), 2022.

Acceptance rate: 1349/9020 (15.0%)
CORE 2021: A*   GGS 2021: A++

Abstract
Learning in a lifelong setting, where the dynamics continually evolve, is a hard challenge for current reinforcement learning algorithms, yet it would be a much-needed capability for practical applications. In this paper, we propose an approach that learns a hyper-policy: a function of time that outputs the parameters of the policy to be queried at that time. This hyper-policy is trained to maximize the estimated future performance, efficiently reusing past data by means of importance sampling, at the cost of introducing a controlled bias. We combine the future performance estimate with the past performance to mitigate catastrophic forgetting. To avoid overfitting to the collected data, we derive a differentiable variance bound that we embed as a penalization term. Finally, we empirically validate our approach, in comparison with state-of-the-art algorithms, on realistic environments, including water resource management and trading.

[Link] [BibTeX]

@inproceedings{liotet2022lifelong,
    author = "Liotet, Pierre and Vidaich, Francesco and Metelli, Alberto Maria and Restelli, Marcello",
    title = "Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization",
    booktitle = "The Thirty-Sixth {AAAI} Conference on Artificial Intelligence ({AAAI})",
    publisher = "{AAAI} Press",
    year = "2022",
    url = "https://doi.org/10.1609/aaai.v36i7.20717",
    pages = "7525--7533"
}
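
As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below trains a Gaussian hyper-policy whose mean is a linear function of time on a toy non-stationary problem. Past samples are reused through a self-normalized importance-sampling estimate of the next step's performance, and an effective-sample-size penalty stands in for the paper's differentiable variance bound. The toy task, the penalty form, and all names and hyperparameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.5      # fixed standard deviation of the hyper-policy (assumption)
LAMBDA = 0.1     # strength of the variance-style penalty (assumption)
LR = 0.02        # gradient-ascent step size
WINDOW = 50      # number of past samples reused at each update


def hp_mean(rho, t):
    """Hyper-policy mean for the policy parameter: linear in time."""
    return rho[0] + rho[1] * t


def hp_log_pdf(rho, theta, t):
    """Log-density of the Gaussian hyper-policy nu_rho(theta | t)."""
    z = (theta - hp_mean(rho, t)) / SIGMA
    return -0.5 * z**2 - np.log(SIGMA * np.sqrt(2.0 * np.pi))


def toy_return(theta, t):
    """Non-stationary toy task: the optimal theta drifts by 0.05 per step."""
    return -(theta - 0.05 * t) ** 2 + 0.1 * rng.normal()


def surrogate(rho, batch, t_next):
    """Weighted-IS estimate of performance at t_next, minus an ESS penalty."""
    ws = np.array([np.exp(hp_log_pdf(rho, th, t_next) - lp) for _, th, _, lp in batch])
    rs = np.array([r for _, _, r, _ in batch])
    ws_n = ws / (ws.sum() + 1e-12)              # self-normalized importance weights
    j_hat = np.sum(ws_n * rs)                   # estimated future performance
    ess = 1.0 / (np.sum(ws_n**2) + 1e-12)       # effective sample size
    return j_hat - LAMBDA / np.sqrt(ess)        # penalize low ESS (high variance)


def num_grad(f, rho, eps=1e-4):
    """Central finite-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(rho)
    for i in range(rho.size):
        e = np.zeros_like(rho)
        e[i] = eps
        g[i] = (f(rho + e) - f(rho - e)) / (2.0 * eps)
    return g


rho = np.zeros(2)    # hyper-policy parameters (intercept, slope)
data = []            # tuples (t, theta, return, behavioural log-density)

for t in range(300):
    theta = hp_mean(rho, t) + SIGMA * rng.normal()     # query the policy parameter for time t
    data.append((t, theta, toy_return(theta, t), hp_log_pdf(rho, theta, t)))

    batch = data[-WINDOW:]                             # reuse recent past data
    rho = rho + LR * num_grad(lambda p: surrogate(p, batch, t + 1), rho)

print("learned drift per step:", rho[1], "| true drift of the optimum: 0.05")

The sketch optimizes the penalized surrogate with finite differences only to stay dependency-free; the paper's estimator is based on multiple importance sampling over all past hyper-policies, and its penalty is a derived variance bound rather than the ad-hoc effective-sample-size term used here.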