Human Activity Recognition Based on Single Sensor Square HV Acceleration Images and Convolutional Neural Networks

publication date

  • February 2019

start page

  • 1487

end page

  • 1498

issue

  • 4

volume

  • 19

International Standard Serial Number (ISSN)

  • 1530-437X

Electronic International Standard Serial Number (EISSN)

  • 1558-1748

abstract

  • Human Activity Recognition (HAR) provides the context for many user-centered personal recommender systems in areas such as healthcare, sports, lifelong learning, or home automation. Based on different types of sensors (camera-based, environmental, or wearable and mobile), user-related data provide the basis for extracting movement-related features from which the activity the user is performing can be assessed. Among the different types of sensors, wearable sensors provide a convenient, non-intrusive, always-available alternative that has gained special attention for HAR, and they will be a relevant part of the Internet of Things. This paper presents a novel mechanism to detect which particular activity a user is performing based on the data from a single tri-axial accelerometer. A Convolutional Neural Network is used to automatically extract the most relevant features characterizing acceleration patterns with inter-activity discrimination capacity. The user-anchored coordinate system in which the accelerometer generates its data is transformed into a georeferenced coordinate system in order to estimate the horizontal and vertical acceleration components. A sliding window with 50% overlap is used to extract 5 seconds of acceleration data, from which a square horizontal-vertical acceleration image is computed. Both monochrome and colored images are generated, depending on whether the time evolution of the acceleration series is encoded in the image or not. Results for both p-fold cross-validation and leave-one-out approaches are presented using a public dataset; in the p-fold cross-validation case, they outperform the results obtained by the authors of the dataset by around 8%.
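The windowing and image-building steps described in the abstract might be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the sampling rate (`fs=30` Hz), the image size (32×32), the acceleration range, and the choice of a 2D histogram as the "square HV image" encoding are all assumptions made here for the sketch.

```python
import numpy as np

def sliding_windows(signal, fs=30, win_sec=5, overlap=0.5):
    """Yield fixed-length segments of `signal` with 50% overlap.

    `fs` (sampling rate) and the 5-second window follow the abstract;
    the concrete rate is an assumption, not taken from the paper.
    """
    win = int(fs * win_sec)          # samples per window
    step = int(win * (1 - overlap))  # 50% overlap -> half-window step
    for start in range(0, len(signal) - win + 1, step):
        yield signal[start:start + win]

def hv_image(h, v, size=32, acc_range=20.0):
    """Build a square image from horizontal/vertical acceleration components.

    One plausible reading of a 'square HV acceleration image': a 2D
    histogram of (h, v) pairs over the window, normalized to [0, 1].
    """
    img, _, _ = np.histogram2d(
        h, v, bins=size,
        range=[[-acc_range, acc_range], [-acc_range, acc_range]],
    )
    return img / img.max() if img.max() > 0 else img

# Example: 10 seconds of synthetic tri-axial data at 30 Hz -> 3 windows.
rng = np.random.default_rng(0)
accel = rng.normal(0.0, 5.0, size=(300, 3))
windows = list(sliding_windows(accel))
# Pretend column 0 is horizontal and column 2 is vertical after the
# georeferenced transform (which is not reproduced in this sketch).
image = hv_image(windows[0][:, 0], windows[0][:, 2])
```

Each resulting `size`×`size` array could then be fed to a CNN as a monochrome image; the colored variant in the paper additionally encodes the time evolution of the series, which this sketch omits.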

keywords

  • human activity recognition in an internet of things; convolutional neural networks; acceleration images; accelerometer sensors