Safety of Human-Artificial Intelligence Systems: Applying Safety Science to Analyze Loopholes in Interactions between Human Organizations, Artificial Intelligence, and Individual People

publication date

  • May 2024

issue

  • 2

volume

  • 11

Electronic International Standard Serial Number (EISSN)

  • 2227-9709

abstract

  • Loopholes involve misalignments between rules about what should be done and what is actually done in practice. The focus of this paper is loopholes in interactions between human organizations' implementations of task-specific artificial intelligence and individual people. The importance of identifying and addressing loopholes is recognized both in safety science and in applications of AI. First, sources of loopholes in interactions between human organizations and individual people are examined. Then, it is explained how deploying task-specific AI applications can create new sources of loopholes. Next, an analytical framework that is well-established in safety science is applied to the analysis of loopholes in interactions between human organizations, artificial intelligence, and individual people. The example used in the analysis is human-artificial intelligence systems in gig economy delivery driving work.
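
    As a rough illustration of the Swiss cheese model named in the keywords, the idea is that each defensive layer (organizational rules, the AI system, the individual) has weaknesses ("holes"), and a hazard leads to harm only when holes align across every layer. A minimal sketch, with hypothetical layer contents not taken from the paper:

    ```python
    # Toy sketch of the Swiss cheese model: each defensive layer has
    # "holes" (weaknesses). A hazard propagates to an accident only
    # when the same hole appears in every layer along its trajectory.

    def aligned_holes(layers: list[set[str]]) -> set[str]:
        """Return hazard trajectories whose hole appears in every layer."""
        if not layers:
            return set()
        aligned = set(layers[0])
        for layer in layers[1:]:
            aligned &= layer
        return aligned

    # Hypothetical layers for gig economy delivery driving work:
    layers = [
        {"fatigue", "route_pressure"},   # organizational policy layer
        {"fatigue", "app_misreading"},   # task-specific AI dispatch layer
        {"fatigue", "distraction"},      # individual driver layer
    ]
    print(aligned_holes(layers))  # {'fatigue'}: this hole aligns across all layers
    ```

    In this toy version, a non-empty result means no layer blocks the hazard, which is the model's picture of an accident trajectory.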

subjects

  • Robotics and Industrial Informatics

keywords

  • algebraic machine learning; driving; gig economy; human–artificial intelligence (HAI) systems; loopholes; narrow AI; quality management systems; Swiss cheese model; theory of active and latent failures