An auditory saliency pooling-based LSTM model for speech intelligibility classification

publication date

  • September 2021

start page

  • 1728

issue

  • 9

volume

  • 13

International Standard Serial Number (ISSN)

  • 2073-8994

abstract

  • Speech intelligibility is a crucial element in oral communication that can be influenced by multiple factors, such as noise, channel characteristics, or speech disorders. In this paper, we address the task of speech intelligibility classification (SIC) in the last of these circumstances. Taking our previous work, a SIC system based on an attentional long short-term memory (LSTM) network, as a starting point, we deal with the problem of inadequate learning of the attention weights due to training data scarcity. To overcome this issue, the main contribution of this paper is a novel type of weighted pooling (WP) mechanism, called saliency pooling, in which the WP weights are not automatically learned during the training process of the network but are instead obtained from an external source of information: Kalinli's auditory saliency model. In this way, we intend to take advantage of the apparent symmetry between the human auditory attention mechanism and the attentional models integrated into deep learning networks. The developed systems are assessed on the UA-Speech dataset, which comprises speech uttered by subjects with several levels of dysarthria. Results show that all the systems with saliency pooling significantly outperform a reference support vector machine (SVM)-based system as well as LSTM-based systems with mean pooling and attention pooling, suggesting that Kalinli's saliency can be successfully incorporated into the LSTM architecture as an external cue for estimating the speech intelligibility level.
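The saliency pooling mechanism summarized above amounts to a weighted average of per-frame LSTM outputs, where the weights come from an external saliency model rather than from a learned attention layer. The snippet below is a minimal, hypothetical sketch in plain Python (not the authors' implementation); the per-frame saliency scores are assumed to be precomputed externally, e.g. by Kalinli's auditory saliency model.

```python
from typing import List


def saliency_pooling(hidden_states: List[List[float]],
                     saliency: List[float]) -> List[float]:
    """Pool a sequence of per-frame hidden-state vectors into one
    utterance-level vector, weighting each frame by an externally
    supplied saliency score (normalized to sum to 1) instead of
    weights learned by an attention mechanism."""
    total = sum(saliency)
    weights = [s / total for s in saliency]  # normalize saliencies
    dim = len(hidden_states[0])
    pooled = [0.0] * dim
    for w, frame in zip(weights, hidden_states):
        for i in range(dim):
            pooled[i] += w * frame[i]  # weighted sum over frames
    return pooled


# Toy example: 3 frames with 2-dimensional hidden states.
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
s = [1.0, 1.0, 2.0]  # assumed precomputed frame saliencies
print(saliency_pooling(h, s))  # -> [0.75, 0.75]
```

With learned attention pooling, the weights would instead be produced by a trainable scoring layer over the hidden states; here that layer is replaced by the external saliency cue, which is the idea the paper evaluates.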

keywords

  • speech intelligibility; LSTM; weighted pooling; attention; saliency; auditory saliency model