Markovian Restless Bandits and Index Policies: A Review

publication date

  • April 2023

issue

  • 7

volume

  • 11

International Standard Serial Number (ISSN)

  • 2227-7390

abstract

  • The restless multi-armed bandit problem is a paradigmatic modeling framework for optimal dynamic priority allocation in stochastic models across wide-ranging applications; it has been widely investigated and applied since its inception in a seminal paper by Whittle in the late 1980s. The problem has generated a vast and fast-growing literature, from which a significant sample is thematically organized and reviewed in this paper. While the main focus is on priority-index policies, due to their intuitive appeal, tractability, asymptotic optimality properties, and often strong empirical performance, other lines of work are also reviewed. Theoretical and algorithmic developments are discussed, along with diverse applications. The main goals are to highlight the remarkable breadth of work carried out on the topic and to stimulate further research in the field.
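
  The priority-index policies that the review centers on admit a compact description: each arm's state is mapped to a scalar index, and at each decision epoch the arms with the highest current indices are activated while all arms, active or passive, continue to evolve (the "restless" feature). The Python sketch below illustrates only that selection rule on randomly generated arms; the index table, transition kernels, and all parameters are hypothetical stand-ins, not quantities from the paper, where indices such as Whittle's are derived from each arm's model rather than drawn at random.

  ```python
  # Minimal illustrative sketch of a priority-index policy for a restless
  # multi-armed bandit. All names, parameters, and index values here are
  # hypothetical; they are not taken from the reviewed paper.
  import numpy as np

  rng = np.random.default_rng(0)
  N_ARMS, N_STATES, N_ACTIVE, HORIZON = 5, 3, 2, 100

  # Hypothetical per-state index table (e.g., a precomputed Whittle-style
  # index). In practice these would be computed from each arm's model.
  index = rng.uniform(0.0, 1.0, size=(N_ARMS, N_STATES))

  # Hypothetical transition kernels: P[arm, action, state] is a
  # distribution over next states (action 0 = passive, 1 = active).
  P = rng.dirichlet(np.ones(N_STATES), size=(N_ARMS, 2, N_STATES))

  # Hypothetical reward earned by an active arm as a function of its state.
  reward = np.linspace(0.0, 1.0, N_STATES)

  state = rng.integers(N_STATES, size=N_ARMS)
  total = 0.0
  for t in range(HORIZON):
      # Index rule: activate the N_ACTIVE arms whose current states
      # carry the highest indices.
      current = index[np.arange(N_ARMS), state]
      active = np.argsort(current)[-N_ACTIVE:]
      action = np.zeros(N_ARMS, dtype=int)
      action[active] = 1
      total += reward[state[active]].sum()
      # Every arm transitions under its chosen action; passive arms
      # also change state, which is what makes the bandit "restless".
      for a in range(N_ARMS):
          state[a] = rng.choice(N_STATES, p=P[a, action[a], state[a]])

  print(f"average reward per period: {total / HORIZON:.3f}")
  ```

  The appeal noted in the abstract is visible even in this toy: the policy decomposes the coupled N-arm problem into per-arm index computations plus a simple top-m selection at each epoch, which is what makes index policies tractable at scale.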

subjects

  • Statistics

keywords

  • bandit problems; dynamic and stochastic resource allocation; index policies; Markov decision processes; online learning; regret analysis; restless bandits