V2N Service Scaling with Deep Reinforcement Learning

authors

  • Shih-Huan Hsu, Cyril
  • Martin Perez, Jorge
  • Papagianni, Chrysa
  • Grosso, Paola

publication date

  • January 2023

start page

  • 1

end page

  • 5

abstract

  • The fifth generation (5G) of wireless networks is set out to meet the stringent requirements of vehicular use cases. Edge computing resources can aid in this direction by moving processing closer to end-users, reducing latency. However, given the stochastic nature of traffic loads and the availability of physical resources, appropriate auto-scaling mechanisms need to be employed to support cost-efficient and performant services. To this end, we employ Deep Reinforcement Learning (DRL) for vertical scaling in Edge computing to support vehicular-to-network communications. We address the problem using Deep Deterministic Policy Gradient (DDPG). As DDPG is a model-free off-policy algorithm for learning continuous actions, we introduce a discretization approach to support discrete scaling actions. Thus, we address scalability problems inherent to high-dimensional discrete action spaces. Employing a real-world vehicular trace data set, we show that DDPG outperforms existing solutions, reducing (at minimum) the average number of active CPUs by 23% while increasing the long-term reward by 24%.
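
  • The paper's exact discretization scheme is not reproduced here; a minimal sketch of the general idea the abstract describes, assuming the DDPG actor emits a tanh-bounded continuous action in [-1, 1] that is then binned onto an integer CPU grid, might look like the following (MIN_CPUS, MAX_CPUS, and discretize_action are hypothetical names, not taken from the paper):

    import numpy as np

    # Hypothetical bounds on the number of CPUs a V2N service instance may use.
    MIN_CPUS, MAX_CPUS = 1, 16

    def discretize_action(raw_action: float) -> int:
        """Map a continuous DDPG actor output in [-1, 1] to a discrete CPU count.

        The actor produces a continuous action; rescaling and rounding it onto
        the integer CPU grid yields a discrete vertical-scaling decision while
        keeping the learned action space one-dimensional and continuous.
        """
        # Rescale [-1, 1] -> [MIN_CPUS, MAX_CPUS], then round and clip.
        scaled = MIN_CPUS + (raw_action + 1.0) / 2.0 * (MAX_CPUS - MIN_CPUS)
        return int(np.clip(np.rint(scaled), MIN_CPUS, MAX_CPUS))

    # Example: an actor output of 0.3 maps to a concrete allocation of 11 CPUs.
    print(discretize_action(0.3))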

subjects

  • Telecommunications

keywords

  • V2N; scaling; DRL; DDPG; A2C