A taxonomy for similarity metrics between Markov decision processes

publication date

  • November 2022

start page

  • 4217

end page

  • 4247

issue

  • 11

volume

  • 111

International Standard Serial Number (ISSN)

  • 0885-6125

Electronic International Standard Serial Number (EISSN)

  • 1573-0565

abstract

  • Although the notion of task similarity is potentially interesting in a wide range of areas, such as curriculum learning or automated planning, it has mostly been tied to transfer learning. Transfer is based on the idea of reusing the knowledge acquired while learning a set of source tasks in a new learning process on a target task, assuming that the source and target tasks are close enough. In recent years, transfer learning has succeeded in making reinforcement learning (RL) algorithms more efficient (e.g., by reducing the number of samples needed to achieve (near-)optimal performance). Transfer in RL rests on the core concept of similarity: whenever the tasks are similar, the transferred knowledge can be reused to solve the target task and significantly improve learning performance. The selection of good metrics to measure these similarities is therefore a critical aspect when building transfer RL algorithms, especially when knowledge is transferred from simulation to the real world. The literature offers many metrics for measuring the similarity between MDPs and, hence, many definitions of similarity (or of its complement, distance). In this paper, we propose a categorization of these metrics and analyze the definitions of similarity proposed so far in light of that categorization. We also follow this taxonomy to survey the existing literature and to suggest future directions for the construction of new metrics.
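
    To make the idea of an MDP similarity metric concrete, the sketch below is a minimal, illustrative example only; it is not the taxonomy or any specific metric from the paper. It assumes two MDPs that share the same state and action spaces and compares them by their worst-case transition and reward discrepancies; the function name naive_mdp_distance and the mixing weight alpha are hypothetical choices introduced for illustration.

      import numpy as np

      def naive_mdp_distance(P1, R1, P2, R2, alpha=0.5):
          # P1, P2: transition tensors of shape (n_states, n_actions, n_states)
          # R1, R2: reward matrices of shape (n_states, n_actions)
          # alpha: hypothetical weight trading off dynamics vs. reward discrepancy
          # Largest total-variation distance between matching transition distributions
          tv = 0.5 * np.abs(P1 - P2).sum(axis=-1).max()
          # Largest absolute reward gap over state-action pairs
          dr = np.abs(R1 - R2).max()
          return alpha * tv + (1.0 - alpha) * dr

      # Example: two 2-state, 2-action MDPs that differ slightly in dynamics and rewards
      P1 = np.array([[[0.9, 0.1], [0.2, 0.8]],
                     [[0.5, 0.5], [0.7, 0.3]]])
      P2 = np.array([[[0.8, 0.2], [0.2, 0.8]],
                     [[0.5, 0.5], [0.6, 0.4]]])
      R1 = np.array([[1.0, 0.0], [0.0, 1.0]])
      R2 = np.array([[1.0, 0.0], [0.1, 1.0]])
      print(naive_mdp_distance(P1, R1, P2, R2))  # small value for near-identical MDPs

    A distance of zero here means the two models agree exactly; larger values indicate that transferred knowledge is less likely to carry over directly. Metrics surveyed in the paper refine this intuition in various ways (e.g., by accounting for state abstractions or behavioral equivalence rather than raw model differences).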

subjects

  • Computer Science
  • Industrial Engineering
  • Robotics and Industrial Informatics

keywords

  • Markov decision processes; similarity metrics; transfer learning