Graph-powered interpretable machine learning models for abnormality detection in ego-things network

publication date

  • March 2022

start page

  • 1

end page

  • 26

issue

  • 6 (article no. 2260)

volume

  • 22

International Standard Serial Number (ISSN)

  • 1424-3210

Electronic International Standard Serial Number (EISSN)

  • 1424-8220

abstract

  • In recent years, it has become essential to ensure that the outcomes of signal processing methods based on machine learning (ML) data-driven models provide interpretable predictions. The interpretability of an ML model can be defined as the capability to understand the reasons that contributed to generating a given outcome in a complex autonomous or semi-autonomous system. The need for interpretability is often related to the evaluation of performance in complex systems and to the acceptance of agent automation processes where critical, high-risk decisions have to be taken. This paper concentrates on one of the core functionalities of such systems, i.e., abnormality detection, and on choosing a model representation modality based on a data-driven ML technique such that the outcomes become interpretable. Interpretability in this work is achieved through graph matching of semantic-level vocabularies generated from the data and their relationships. The proposed approach assumes that the data-driven models to be chosen should support emergent self-awareness (SA) of the agents at multiple abstraction levels. The capability of incrementally updating learned representation models based on the progressive experiences of the agent is shown to be strictly related to interpretability. As a case study, abnormality detection is analyzed as a primary feature of the collective awareness (CA) of a network of vehicles performing cooperative behaviors. Each vehicle is considered an example of an Internet of Things (IoT) node, thereby providing results that can be generalized to an IoT framework where agents have different sensors, actuators, and tasks to be accomplished. The capability of a model to allow evaluation of abnormalities at different levels of abstraction in the learned models is addressed as a key aspect of interpretability.
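  • Note: the full formulation of the method is in the paper itself. As a rough, hedged illustration of the graph-matching idea mentioned in the abstract (not the authors' actual algorithm), the sketch below builds toy semantic-level "vocabulary" graphs from discrete state sequences and scores how far a newly observed graph departs from the learned one; the function names, the Jaccard-style similarity, and the threshold are all illustrative assumptions.

    # Illustrative sketch only: toy graph matching over semantic-level
    # "vocabulary" graphs (nodes = discrete semantic states, edges = observed
    # transitions). Names, scoring, and threshold are assumptions, not the
    # paper's method.

    def build_graph(state_sequence):
        """Build a vocabulary graph as a set of nodes and transition edges."""
        nodes = set(state_sequence)
        edges = set(zip(state_sequence, state_sequence[1:]))
        return nodes, edges

    def abnormality_score(learned, observed):
        """Score the mismatch between learned and observed graphs
        as 1 minus the average Jaccard similarity of nodes and edges."""
        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a | b) else 1.0
        node_sim = jaccard(learned[0], observed[0])
        edge_sim = jaccard(learned[1], observed[1])
        return 1.0 - 0.5 * (node_sim + edge_sim)

    if __name__ == "__main__":
        # Learned "normal" behavior of an ego-vehicle (hypothetical states).
        normal = build_graph(["cruise", "turn", "cruise", "brake", "cruise"])
        # New observations containing an unseen semantic state ("swerve").
        observed = build_graph(["cruise", "swerve", "brake", "cruise"])
        score = abnormality_score(normal, observed)
        print(f"abnormality score: {score:.2f}")  # higher = more abnormal
        if score > 0.3:  # arbitrary illustrative threshold
            print("abnormality detected: observed graph deviates from learned vocabulary")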

subjects

  • Computer Science
  • Industrial Engineering
  • Mechanical Engineering
  • Robotics and Industrial Informatics
  • Telecommunications

keywords

  • self-awareness; collective awareness; interpretability; Markov jump particle filter; dynamic Bayesian network; abnormality detection