MFCC-based Recurrent Neural Network for automatic clinical depression recognition and assessment from speech

Title: MFCC-based Recurrent Neural Network for automatic clinical depression recognition and assessment from speech
Type of Publication: Journal Article
Year of Publication: 2022
Authors: Rejaibi E, Komaty A, Meriaudeau F, Agrebi S, Othmani A
Journal: Biomedical Signal Processing and Control
Volume: 71
Pagination: 103107
Date Published: January
Type of Article: Article
ISSN: 1746-8094
Keywords: Biomedical signal processing, Clinical depression diagnosis, Deep learning, HCI-based healthcare, Speech depression recognition
Abstract

Clinical depression, or Major Depressive Disorder (MDD), is a common and serious medical illness. In this paper, a deep Recurrent Neural Network-based framework is presented to detect depression and to predict its severity level from speech. Low-level and high-level audio features are extracted from audio recordings to predict the 24 scores of the Patient Health Questionnaire and the binary class of the depression diagnosis. To overcome the small size of Speech Depression Recognition (SDR) datasets, expanded training labels and transferred features are considered. The proposed approach outperforms state-of-the-art approaches on the DAIC-WOZ database with an overall accuracy of 76.27% and a root mean square error of 0.4 in assessing depression, while a root mean square error of 0.168 is achieved in predicting the depression severity levels. The proposed framework has several advantages (speed, non-invasiveness, and non-intrusiveness) that make it convenient for real-time applications. The performance of the proposed approach is evaluated in multi-modal and multi-feature experiments. MFCC-based high-level features hold relevant information related to depression. Yet, adding visual action units and other acoustic features further boosts the classification results by 20% and 10%, reaching accuracies of 95.6% and 86%, respectively. Use of the visual-facial modality needs to be studied carefully, as it raises patient privacy concerns, while adding more acoustic features increases the computation time.

DOI: 10.1016/j.bspc.2021.103107
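The abstract's pipeline starts from MFCCs, the low-level spectral features commonly extracted from short overlapping speech frames before being fed to a recurrent model. Below is a minimal NumPy-only sketch of a textbook MFCC computation (pre-emphasis, framing, windowing, power spectrum, mel filterbank, log, DCT-II). All parameter values here (16 kHz sample rate, 512-sample frames, 26 mel bands, 13 coefficients) are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Textbook MFCC sketch; parameters are illustrative, not the paper's."""
    # Pre-emphasis to boost high frequencies
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Split into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(emph) - n_fft) // hop
    frames = np.stack([emph[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Per-frame power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank between 0 and sr/2
    hz2mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel2hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz2mel(0), hz2mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT-II to decorrelate -> MFCCs
    log_mel = np.log(power @ fbank.T + 1e-10)
    k, n = np.arange(n_mfcc), np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(k, (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T  # shape: (n_frames, n_mfcc)
```

The resulting (frames x coefficients) matrix is the kind of frame-level sequence a recurrent network can consume directly; production systems would typically use a tuned library implementation instead.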