Explainability can foster trust in artificial intelligence in geoscience

authors

  • DRAMSCH, JESPER SÖREN
  • KUGLITSCH, MONIQUE M.
  • FERNANDEZ TORRES, MIGUEL ANGEL
  • TORETI, ANDREA
  • ALBAYRAK, RUSTEM ARIF
  • NAVA, LORENZO
  • GHAFFARIAN, SAMAN
  • CHENG, XIMENG
  • MA, JACKIE
  • SAMEK, WOJCIECH
  • VENGUSWAMY, RUDY
  • KOUL, ANIRUDH
  • MUTHUREGUNATHAN, RAGHAVAN
  • ESSENFELDER, ARTHUR HRAST

publication date

  • February 2025

start page

  • 112

end page

  • 114

volume

  • 18

International Standard Serial Number (ISSN)

  • 1752-0894

Electronic International Standard Serial Number (EISSN)

  • 1752-0908

abstract

  • Uptake of explainable artificial intelligence (XAI) methods in geoscience is currently limited. We argue that such methods, which reveal the decision processes of AI models, can foster trust in their results and facilitate the broader adoption of AI.

subjects

  • Computer Science

keywords

  • natural hazards; scientific community; technology.