Detecting deception from gaze and speech using a multimodal attention LSTM-based framework
Overview
published in
- Applied Sciences (Basel)
publication date
- July 2021
start page
- 6393
issue
- 14
volume
- 11
International Standard Serial Number (ISSN)
- 2076-3417
abstract
- The automatic detection of deceptive behaviors has recently attracted the attention of the research community due to the variety of areas where it can play a crucial role, such as security or criminology. This work focuses on the development of an automatic deception detection system based on gaze and speech features. The first contribution of our research on this topic is the use of attention Long Short-Term Memory (LSTM) networks for single-modal systems with frame-level features as input. As a second contribution, we propose a multimodal system that combines the gaze and speech modalities within the LSTM architecture using two different combination strategies: Late Fusion and Attention-Pooling Fusion. The proposed models are evaluated on the Bag-of-Lies dataset, a multimodal database recorded under real conditions. On the one hand, results show that attentional LSTM networks are able to adequately model the gaze and speech feature sequences, outperforming a reference Support Vector Machine (SVM)-based system with compact features. On the other hand, both combination strategies produce better results than the single-modal systems and the multimodal reference system, suggesting that the gaze and speech modalities carry complementary information for the task of deception detection that can be effectively exploited by using LSTMs.
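The abstract mentions attention pooling over frame-level feature sequences and fusion of the pooled modality embeddings. As a rough illustration only (the paper's actual LSTM layers, feature extractors, and learned parameters are not given here, so the arrays and the scoring vector below are placeholders), an attention-pooling step followed by a concatenation-style fusion could be sketched as:

```python
import numpy as np

def attention_pool(h, w):
    """Attention pooling: score each frame with vector w, softmax the
    scores over time, and return the weighted sum of frames.
    h: (T, D) sequence of frame-level hidden states; w: (D,) scoring vector."""
    scores = h @ w                           # (T,) one score per frame
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ h                       # (D,) pooled embedding

# Hypothetical frame-level sequences for one sample (stand-ins for the
# LSTM outputs of the gaze and speech branches; shapes are illustrative).
rng = np.random.default_rng(0)
gaze_h = rng.normal(size=(50, 8))    # 50 gaze frames, 8-dim states
speech_h = rng.normal(size=(80, 8))  # 80 speech frames, 8-dim states

w = rng.normal(size=8)               # stand-in for a learned attention vector
gaze_emb = attention_pool(gaze_h, w)
speech_emb = attention_pool(speech_h, w)

# Attention-Pooling Fusion (sketch): combine the pooled per-modality
# embeddings into one vector for a shared classifier head. Late Fusion
# would instead combine per-modality decision scores.
fused = np.concatenate([gaze_emb, speech_emb])  # (16,) fused embedding
```

This is a minimal NumPy sketch of the pooling mechanism, not the trained architecture; in the actual system the hidden states come from attention LSTM networks and all weights are learned end to end.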
Classification
subjects
- Telecommunications
keywords
- deception detection; multimodal; gaze; speech; lstm; attention; fusion