Human fall detection using pose estimation: From traditional machine learning to vision transformers

authors

  • Raza, Ali
  • Yousaf, Muhammad Haroon
  • Ahmad, Waqar
  • Velastin Carroza, Sergio Alejandro
  • Viriri, Serestina

publication date

  • March 2025

start page

  • 1

end page

  • 18

volume

  • 143

International Standard Serial Number (ISSN)

  • 0952-1976

Electronic International Standard Serial Number (EISSN)

  • 1873-6769

abstract

  • Human activity recognition for healthcare has drawn global attention in recent years. Advances in the field have produced approaches capable of detecting diverse movements such as walking, running, jumping, and falling. Fall detection is particularly important because falls can be fatal, especially for older adults. Sensors are widely used to perceive environmental changes and can be integrated into wearable devices such as phones, necklaces, or wristbands; however, these devices may be uncomfortable or unsuitable for continuous use. Video imagery, in principle, surpasses wearable sensors for fall detection. The proposed method identifies falls from video frames, reducing the need for environmental sensors. We present an empirical analysis of vision-based human fall detection that employs multiple human pose estimation techniques, including a transformer-based one. These techniques yield foundational features used to train diverse networks, ranging from machine learning classifiers to vision transformers. Our methodology achieves state-of-the-art results on the UR-Fall, UP-Fall, and Le2i fall detection datasets.
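The pipeline the abstract describes, pose keypoints distilled into features that feed a classifier, can be sketched as follows. This is a minimal illustration on synthetic skeletons, not the paper's method: the feature definitions, the COCO-style joint indexing, and the choice of a random forest are all assumptions made here for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def pose_features(keypoints):
    """keypoints: (17, 2) array of (x, y) joints, COCO-style ordering assumed
    (indices 5-6 shoulders, 11-12 hips). Returns two simple geometric
    features commonly used as fall cues."""
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    w, h = xs.max() - xs.min(), ys.max() - ys.min()
    aspect = w / (h + 1e-6)                     # wide, flat poses suggest a fall
    torso = ys[11:13].mean() - ys[5:7].mean()   # hip-to-shoulder vertical extent
    return np.array([aspect, torso])

def synth_pose(fallen):
    """Toy skeleton generator: upright poses are spread vertically,
    fallen poses horizontally, plus small jitter."""
    base = rng.normal(size=(17, 2)) * 0.05
    axis = 0 if fallen else 1                   # x-axis for fallen, y for upright
    base[:, axis] += np.linspace(0.0, 2.0, 17)
    return base

# Build a balanced synthetic dataset: 200 upright (label 0), 200 fallen (label 1).
labels = np.array([0] * 200 + [1] * 200)
X = np.array([pose_features(synth_pose(f)) for f in labels])

Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
accuracy = clf.score(Xte, yte)
```

On this deliberately separable toy data the classifier scores near-perfectly; the point is only the shape of the pipeline (pose estimate, geometric features, classifier), which in the paper is instantiated with real pose estimators and datasets.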

subjects

  • Education
  • Mathematics
  • Medicine
  • Robotics and Industrial Informatics
  • Statistics

keywords

  • fall detection; human pose estimation; machine learning; deep learning