Electronic International Standard Serial Number (EISSN)
1873-6769
Abstract
Deep Reinforcement Learning (Deep RL) systems have become a prominent topic in Machine Learning for their effectiveness on many complex tasks, but their application in safety-critical domains (e.g., robot control or autonomous driving) remains dangerous without mechanisms to detect and prevent risky situations. In Deep RL, such risk mostly takes the form of adversarial attacks, which introduce small perturbations into sensor inputs with the aim of changing the network's decisions and thereby causing catastrophic situations. In light of these dangers, a promising line of research is to equip Deep RL algorithms with suitable defenses, especially when they are deployed in real environments. This paper suggests that this line of research could be greatly improved by concepts from the existing research field of Safe Reinforcement Learning, a family of RL algorithms designed to provide defenses against many forms of risk. However, the connections between Safe RL and the design of defenses against adversarial attacks in Deep RL remain largely unexplored. This paper explores precisely some of these connections. In particular, it proposes to reuse concepts from existing Safe RL algorithms to create a novel and effective instance-based defense for the deployment stage of Deep RL policies. The proposed algorithm uses a risk function based on how far a state lies from the state space known to the agent, which allows it to identify and prevent adversarial situations. The effectiveness of the proposed defense is evaluated on four Atari games.
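The abstract does not specify the exact form of the risk function, but a minimal sketch of such an instance-based detector might look like the following. The class name, the k-nearest-neighbour Euclidean distance, and the fixed threshold are illustrative assumptions, not the paper's actual method: the idea is simply to keep a memory of states visited during training and flag any deployment-time state whose distance to that memory is unusually large.

```python
import numpy as np

class InstanceBasedRiskDetector:
    """Hypothetical sketch: flags states that lie far from the states
    visited during training (distance metric and threshold are assumptions)."""

    def __init__(self, threshold: float, k: int = 5):
        self.threshold = threshold      # distance above which a state counts as risky
        self.k = k                      # nearest neighbours averaged into the risk score
        self.known_states = []          # instance memory of states seen during training

    def add_known_state(self, state) -> None:
        self.known_states.append(np.asarray(state, dtype=float).ravel())

    def risk(self, state) -> float:
        # Risk score: mean Euclidean distance to the k nearest stored states.
        s = np.asarray(state, dtype=float).ravel()
        dists = np.linalg.norm(np.stack(self.known_states) - s, axis=1)
        return float(np.sort(dists)[: self.k].mean())

    def is_adversarial(self, state) -> bool:
        return self.risk(state) > self.threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    detector = InstanceBasedRiskDetector(threshold=2.0)
    for _ in range(500):                    # states gathered while training the policy
        detector.add_known_state(rng.normal(size=8))
    in_distribution = rng.normal(size=8)    # resembles the known state space
    perturbed = rng.normal(size=8) + 10.0   # far from anything seen before
    print(detector.is_adversarial(in_distribution))  # typically False
    print(detector.is_adversarial(perturbed))        # True: trigger a safe fallback
```

When the detector fires during deployment, the agent would switch from the learned policy to some conservative fallback behavior; how that fallback is chosen is a design decision the abstract leaves to the body of the paper.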