DISCRETE UNCERTAINTY QUANTIFICATION FOR OFFLINE REINFORCEMENT LEARNING

authors

  • Perez, Jose Luis
  • Corrochano, Javier
  • Garcia Polo, Francisco Javier
  • Majadas, Ruben
  • Ibáñez Llano, Cristina
  • Perez, Sergio
  • Fernandez Rebollo, Fernando

publication date

  • October 2023

issue

  • 4

volume

  • 13

International Standard Serial Number (ISSN)

  • 2083-2567

Electronic International Standard Serial Number (EISSN)

  • 2449-6499

abstract

  • In many Reinforcement Learning (RL) tasks, the classical online interaction of the learning agent with the environment is impractical because such interaction is either expensive or dangerous. In these cases, previously gathered data can be used instead, giving rise to what is typically called Offline RL. However, this type of learning faces a large number of challenges, mostly derived from the fact that the exploration/exploitation trade-off is overshadowed. In addition, the historical data is usually biased by the way it was obtained, typically by a sub-optimal controller, producing a distributional shift between the historical data and the data required to learn the optimal policy. In this paper, we present a novel approach to deal with the uncertainty arising from the absence or sparse presence of some state-action pairs in the learning data. Our approach is based on shaping the reward perceived from the environment to ensure the task is solved. We present the approach and show that combining it with classic online RL methods makes them perform as well as state-of-the-art Offline RL algorithms such as CQL and BCQ. Finally, we show that using our method on top of established offline learning algorithms can improve them.
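  The abstract does not spell out the shaping rule, but a minimal sketch of the general idea it describes — penalizing rewards for state-action pairs that are absent or rare in the offline dataset — could look like the following. The function name, penalty value, and count threshold are illustrative assumptions, not the paper's actual method:

  ```python
  from collections import Counter

  def shaped_reward(reward, state, action, counts, penalty=1.0, threshold=1):
      """Penalize transitions whose (state, action) pair is absent
      or rarely present in the offline dataset (hypothetical rule)."""
      if counts[(state, action)] < threshold:
          return reward - penalty
      return reward

  # Visitation counts built from an offline dataset of (state, action, reward) tuples
  dataset = [(0, 1, 1.0), (0, 1, 0.5), (1, 0, 0.0)]
  counts = Counter((s, a) for s, a, _ in dataset)

  r_seen = shaped_reward(1.0, 0, 1, counts)    # pair present in the data: unchanged
  r_unseen = shaped_reward(1.0, 2, 2, counts)  # pair never observed: penalized
  ```

  A discrete count-based signal of this kind is one simple way to quantify uncertainty over tabular state-action pairs; the shaped reward can then be fed to either an online RL method or an existing offline algorithm, as the abstract suggests.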

keywords

  • machine learning; off-line reinforcement learning; uncertainty quantification