Video augmentation technique for human action recognition using genetic algorithm

authors

  • Nida, Nudrat
  • Yousaf, Muhammad Haroon
  • Irtaza, Aun
  • VELASTIN CARROZA, SERGIO ALEJANDRO

publication date

  • April 2022

start page

  • 327

end page

  • 338

issue

  • 2

volume

  • 44

International Standard Serial Number (ISSN)

  • 1225-6463

Electronic International Standard Serial Number (EISSN)

  • 2233-7326

abstract

  • Classification models for human action recognition require robust features and large training sets for good generalization. Data augmentation is commonly employed to compensate for imbalanced training sets and achieve higher accuracy. However, samples generated by conventional augmentation merely reflect existing samples within the training set; their feature representations are less diverse and therefore contribute to less precise classification. This paper presents new data augmentation and action representation approaches to grow training sets. The proposed approach is based on two fundamental concepts: virtual video generation for augmentation and representation of action videos through robust features. Virtual videos are generated from the motion history templates of action videos and convolved through a convolutional neural network to produce deep features. Furthermore, guided by the objective function of a genetic algorithm, the spatiotemporal features of different samples are combined to generate representations of the virtual videos, which are then classified through an extreme learning machine classifier on the MuHAVi-Uncut, iXMAS, and IAVID-1 datasets.
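    The genetic-algorithm step of the abstract — combining the deep features of two existing samples into the representation of a virtual sample — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the binary crossover mask, the bit-flip mutation, and the fitness criterion (proximity of the combined vector to an assumed class centroid) are all simplifying assumptions, since the paper's actual objective function and GA operators are not given in this record.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def crossover(f1, f2, mask):
        # Binary-mask crossover: take each feature from f1 where the
        # mask is True, otherwise from f2.
        return np.where(mask, f1, f2)

    def generate_virtual_feature(f1, f2, centroid,
                                 pop_size=20, generations=30, mutation_rate=0.05):
        """Evolve a crossover mask so that the combined deep-feature
        vector lies close to a class centroid (hypothetical fitness;
        the paper's objective function may differ)."""
        d = f1.shape[0]
        # Initial population: random binary masks over the feature dimensions.
        population = rng.random((pop_size, d)) < 0.5
        for _ in range(generations):
            children = np.array([crossover(f1, f2, m) for m in population])
            # Fitness: negative distance to the centroid (closer = fitter).
            fitness = -np.linalg.norm(children - centroid, axis=1)
            order = np.argsort(fitness)[::-1]
            elite = population[order[: pop_size // 2]]
            # Refill the population: keep elites, mutate copies by bit-flips.
            offspring = elite.copy()
            offspring ^= rng.random(offspring.shape) < mutation_rate
            population = np.concatenate([elite, offspring])
        children = np.array([crossover(f1, f2, m) for m in population])
        best_mask = population[np.argmax(-np.linalg.norm(children - centroid, axis=1))]
        return crossover(f1, f2, best_mask)
    ```

    In use, `f1` and `f2` would be CNN deep features of two real action videos of the same class, and the returned vector would serve as the feature representation of a virtual video fed to the extreme learning machine classifier.
    
    
    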

subjects

  • Computer Science
  • Electronics
  • Robotics and Industrial Informatics

keywords

  • computer vision; evolutionary deep features augmentation; genetic algorithm; human action recognition; video augmentation