Opening the Black Box: A systematic review on explainable artificial intelligence in remote sensing

authors

  • HÖHL, ADRIAN
  • OBADIC, IVICA
  • FERNANDEZ TORRES, MIGUEL ANGEL
  • NAJJAR, HIBA
  • BORGES OLIVEIRA, DARIO AUGUSTO
  • AKATA, ZEYNEP
  • DENGEL, ANDREAS
  • ZHU, XIAO XIANG

publication date

  • December 2024

start page

  • 261

end page

  • 304

issue

  • 4

volume

  • 12

International Standard Serial Number (ISSN)

  • 2473-2397

Electronic International Standard Serial Number (EISSN)

  • 2168-6831

abstract

  • In recent years, black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing. Despite the potential benefits of uncovering the inner workings of these models with explainable AI, a comprehensive overview summarizing the explainable AI methods used and their objectives, findings, and challenges in remote sensing applications is still missing. In this paper, we address this gap by performing a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches and emerging directions that tackle specific remote sensing challenges. We also reveal the common patterns of explanation interpretation, discuss the extracted scientific insights, and reflect on the approaches used for the evaluation of explainable AI methods. As such, our review provides a complete summary of the state of the art of explainable AI in remote sensing. Further, we give a detailed outlook on the challenges and promising research directions, representing a basis for novel methodological development and a useful starting point for new researchers in the field.

subjects

  • Computer Science
  • Robotics and Industrial Informatics
  • Telecommunications

keywords

  • remote sensing; artificial intelligence; databases; data models; taxonomy; stakeholders; artificial neural networks; uncertainty; training; systematics; closed box; knowledge discovery; knowledge management