An approach to forecasting and filtering noise in dynamic systems using LSTM architectures

publication date

  • August 2022

start page

  • 637

end page

  • 648


volume

  • 500

International Standard Serial Number (ISSN)

  • 0925-2312

Electronic International Standard Serial Number (EISSN)

  • 1872-8286


abstract

  • State-space models are limited by the difficulty of modeling certain systems, by filter convergence time, and by their inability to capture long-term dependencies. Agile, alternative methodologies that can model complex problems while still solving the classic challenges of estimation and filtering, such as estimating the position of a moving object from noisy measurements with an unknown motion model, are therefore of high interest. In this work, we address position estimation for 1-D dynamic systems from a deep learning perspective, using Long Short-Term Memory (LSTM) architectures, designed to capture long-term temporal dependencies, in combination with other recurrent networks. We implement a deep neural architecture inspired by encoder-decoder language systems, identify its limits, and arrive at a solution capable of making high-accuracy predictions with models learned from training data of a moving object. A panel data model is used for training and validation. In the experiments, we apply sliding overlapping time windows, recursively and with standardization, to avoid network saturation on series with an increasing trend. The results are compared with the optimal values from the Kalman filter, obtaining comparable results in terms of error. These results show that the proposed system has great potential for target tracking.
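As a minimal sketch of the windowing scheme described in the abstract, the fragment below builds sliding overlapping windows, standardizes each window so the recurrent network is not driven into saturation by an increasing trend, and forecasts recursively, de-standardizing each prediction before appending it to the history. The function names and the `linear_step` predictor are illustrative assumptions; in the paper's setting, the predictor would be the trained encoder-decoder LSTM.

```python
import numpy as np

def sliding_windows(series, width, stride=1):
    """Overlapping windows over a 1-D series (one row per window)."""
    starts = range(0, len(series) - width + 1, stride)
    return np.stack([series[s:s + width] for s in starts])

def standardize(window):
    """Per-window standardization: keeps inputs in a bounded range so the
    network does not saturate on series with an increasing trend."""
    mu, sigma = window.mean(), window.std()
    sigma = sigma if sigma > 0 else 1.0
    return (window - mu) / sigma, mu, sigma

def recursive_forecast(series, width, steps, predict_std):
    """Recursive one-step forecasting over standardized sliding windows.
    `predict_std` stands in for the trained model: it maps a standardized
    window to the next value in standardized coordinates."""
    history = list(series)
    for _ in range(steps):
        window = np.asarray(history[-width:], dtype=float)
        z, mu, sigma = standardize(window)
        z_next = predict_std(z)
        history.append(z_next * sigma + mu)  # undo the standardization
    return np.asarray(history[len(series):])

# Toy stand-in predictor (NOT the paper's LSTM): linear extrapolation
# of the last two standardized values.
def linear_step(z):
    return 2 * z[-1] - z[-2]
```

On a noiseless linear ramp this toy predictor continues the trend exactly; swapping in a learned model changes only `predict_std`, while the window extraction and de-standardization steps stay the same.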


subjects

  • Computer Science
  • Robotics and Industrial Informatics


keywords

  • attention; deep learning; encoder-decoder; filtering; forecasting; lstm