Learning to Recognize 3D Human Action from A New Skeleton-based Representation Using Deep Convolutional Neural Networks

authors

  • Pham, Huy-Hieu
  • Khoudour, Louahdi
  • Crouzil, Alain
  • Zegers, Pablo
  • Velastin Carroza, Sergio Alejandro

publication date

  • November 2018

start page

  • 319

end page

  • 328

issue

  • 3

volume

  • 13

International Standard Serial Number (ISSN)

  • 1751-9632

Electronic International Standard Serial Number (EISSN)

  • 1751-9640

abstract

  • Recognising human actions in untrimmed videos is an important and challenging task. An effective three-dimensional (3D) motion representation and a powerful learning model are two key factors influencing recognition performance. In this study, the authors introduce a new skeleton-based representation for 3D action recognition in videos. The key idea of the proposed representation is to transform the 3D joint coordinates of the human body carried in skeleton sequences into RGB images via a colour encoding process. By normalising the 3D joint coordinates and dividing each skeleton frame into five parts, where the joints are concatenated according to the order of their physical connections, the colour-coded representation is able to capture the spatio-temporal evolution of complex 3D motions, independently of the length of each sequence. They then design and train different deep convolutional neural networks based on the residual network architecture on the obtained image-based representations to learn 3D motion features and classify them into action classes. The proposed method is evaluated on two widely used action recognition benchmarks: MSR Action3D and NTU-RGB+D, a very large-scale dataset for 3D human action recognition. The experimental results demonstrate that the proposed method outperforms previous state-of-the-art approaches while requiring less computation for training and prediction.
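
  The encoding step described in the abstract can be sketched roughly as follows. This is a minimal illustration of the general idea only, not the authors' exact pipeline: the per-axis min-max normalisation, the mapping of (x, y, z) to (R, G, B), and the frame/joint image layout are assumptions for this sketch, and the five-part joint reordering and the network training are omitted.

  ```python
  import numpy as np

  def skeleton_to_rgb_image(sequence):
      """Encode a skeleton sequence as an RGB image (illustrative sketch).

      sequence: array of shape (T, J, 3) -- T frames, J joints, each joint
      an (x, y, z) coordinate.

      Each coordinate axis is min-max normalised over the whole sequence to
      [0, 255], so (x, y, z) maps to (R, G, B); rows index joints and
      columns index frames, giving one image per sequence regardless of T.
      """
      seq = np.asarray(sequence, dtype=np.float64)
      lo = seq.min(axis=(0, 1), keepdims=True)  # per-axis minimum
      hi = seq.max(axis=(0, 1), keepdims=True)  # per-axis maximum
      span = np.where(hi - lo == 0, 1.0, hi - lo)  # avoid division by zero
      norm = (seq - lo) / span
      img = np.round(255 * norm).astype(np.uint8)
      # Transpose so rows = joints, columns = frames, channels = RGB.
      return img.transpose(1, 0, 2)

  # Example: a 40-frame sequence with 20 joints of random coordinates.
  rng = np.random.default_rng(0)
  image = skeleton_to_rgb_image(rng.normal(size=(40, 20, 3)))
  print(image.shape)  # (20, 40, 3)
  ```

  The resulting fixed-layout RGB image can then be resized and fed to a standard convolutional network, which is what makes an image-based representation of variable-length skeleton sequences convenient.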

subjects

  • Computer Science