Subgaussian Importance Sampling for Off-Policy Evaluation and Learning

Alberto Maria Metelli, Alessio Russo, and Marcello Restelli

ICML-21 Workshop on Reinforcement Learning Theory, 2021.

Abstract
Importance Sampling (IS) is a widely used building block for a large variety of off-policy estimation and learning algorithms. However, empirical and theoretical studies have progressively shown that vanilla IS leads to poor estimates whenever the behavioral and target policies are too dissimilar. In this paper, we analyze the theoretical properties of the IS estimator by deriving a probabilistic deviation lower bound that formalizes the intuition behind its undesired behavior. Then, we propose a class of IS transformations, based on the notion of power mean, that can, under certain circumstances, achieve a subgaussian concentration rate. Unlike existing methods, such as weight truncation, our estimator preserves differentiability in the target distribution.
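To make the contrast concrete, below is a minimal NumPy sketch comparing vanilla IS, hard weight truncation, and a smooth power-mean-style weight transformation. The harmonic-style shrinkage `w / ((1 - lam) + lam * w)`, the choice of `lam`, and the Gaussian toy example are illustrative assumptions, not the paper's exact estimator or tuning; they are meant only to show why a smooth transformation keeps the estimator differentiable where a hard clip does not.

```python
import numpy as np

def vanilla_is(values, weights):
    # Vanilla IS: unbiased, but heavy-tailed when the behavioral
    # and target policies are too dissimilar.
    return np.mean(weights * values)

def truncated_is(values, weights, threshold):
    # Weight truncation: controls the tail, but the hard clip is
    # not differentiable in the target distribution.
    return np.mean(np.minimum(weights, threshold) * values)

def power_mean_is(values, weights, lam):
    # Smooth shrinkage of the weights toward 1 via the harmonic-style
    # form w / ((1 - lam) + lam * w) -- one instance of a power-mean
    # transformation (an assumption here, not necessarily the paper's
    # exact form). It remains differentiable in the weights, hence in
    # the target policy's parameters.
    shrunk = weights / ((1.0 - lam) + lam * weights)
    return np.mean(shrunk * values)

# Toy usage: behavioral N(0,1), target N(1,1), estimating E_target[x^2].
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)      # samples from the behavioral policy
w = np.exp(x - 0.5)                      # density ratio N(1,1)/N(0,1)
f = x ** 2
print(vanilla_is(f, w), truncated_is(f, w, 5.0), power_mean_is(f, w, 0.1))
```

As `lam` goes to 0 the transformed weights recover vanilla IS, while larger values trade bias for lighter tails; because the shrinkage is smooth, the estimator can still be used inside gradient-based off-policy learning.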

[Link] [BibTeX]

 @article{metelli2021subgaussianW,
    author = "Metelli, Alberto Maria and Russo, Alessio and Restelli, Marcello",
    title = "Subgaussian Importance Sampling for Off-Policy Evaluation and Learning",
    journal = "ICML-21 Workshop on Reinforcement Learning Theory",
    year = "2021",
    url = "https://lyang36.github.io/icml2021_rltheory/camera_ready/7.pdf"
}