A hybrid neural network with multistage feature fusion for detecting heart failure and murmurs from time-frequency representations of phonocardiograms
Heart failure produces abnormal heart sounds and murmurs due to weakened cardiac function and turbulent blood flow. This study presents a hybrid neural network model with interior multistage feature fusion to detect heart pathologies from time-frequency representations of phonocardiogram (PCG) recordings. The model combines convolutional neural networks with long short-term memory layers in a cascaded architecture that efficiently captures spectro-temporal dependencies at multiple network stages, and a fusion mechanism aggregates the internal features of these stages to enhance pattern modeling. We investigated various time-frequency representations of PCG signals as inputs for model training and evaluation. These representations were derived using multiresolution analysis (MRA) via the short-time Fourier transform or the continuous wavelet transform. Additionally, we examined representations obtained through adaptive multiresolution analysis (AMRA) by employing the Hilbert-Huang transform based on empirical mode decomposition, variational mode decomposition, or the empirical wavelet transform. Classification performance was evaluated on two separate datasets. The fusion strategy improved accuracy, and MRA outperformed AMRA for heart murmur detection, reaching a classification accuracy of 90.20%. AMRA, in turn, proved more adaptable, achieving 99.30% accuracy in distinguishing five heart valvular conditions.
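To make the MRA feature-extraction step concrete, the following is a minimal sketch of deriving an STFT-based time-frequency representation from a PCG-like signal, of the kind that could feed the convolutional front end of such a model. The sampling rate, window length, overlap, and the synthetic signal itself are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np
from scipy.signal import stft

fs = 4000  # assumed PCG sampling rate in Hz (hypothetical; real datasets vary)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic stand-in for a 2-second PCG recording: a low-frequency
# heart-sound-band tone plus Gaussian noise (for illustration only).
rng = np.random.default_rng(0)
pcg = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=t.size)

# Short-time Fourier transform: the magnitude array is a 2-D
# time-frequency "image" suitable as CNN input.
freqs, frames, Z = stft(pcg, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.abs(Z)

print(spectrogram.shape)  # (frequency bins, time frames)
```

A continuous-wavelet-transform representation could be produced analogously (e.g. with PyWavelets), trading the STFT's fixed resolution for the multiresolution behavior the abstract refers to.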