Fusion of transformed shallow features for facial expression recognition

Title: Fusion of transformed shallow features for facial expression recognition
Type of Publication: Journal Article
Year of Publication: 2019
Authors: Bougourzi F, Mokrani K, Ruichek Y, Dornaika F, Ouafi A, Taleb-Ahmed A
Journal: IET Image Processing
Volume: 13
Pagination: 1479-1489
Date Published: JUL 18
Type of Article: Article
ISSN: 1751-9659
Keywords: animation, automatic facial expression recognition systems, basic expressions, binarised statistical image features, cognitive activity, cross-validation schemes, emotion recognition, face recognition, feature extraction, fusion method, histogram of oriented gradients, human affective state, human-computer/robot interaction, image representation, intention, local phase quantisation, medical applications, personality, principal component analysis, raw feature concatenation, state-of-the-art methods, static images, statistical analysis, transformed shallow features, video gaming
Abstract

Facial expression conveys important signs about the human affective state, cognitive activity, intention and personality. Automatic facial expression recognition systems have been attracting growing interest year after year owing to their wide range of applications in fields such as human-computer/robot interaction, medical applications, animation and video gaming. In this study, the authors propose to combine the features of different descriptors (histogram of oriented gradients, local phase quantisation and binarised statistical image features) after applying principal component analysis to each of them, in order to recognise the six basic expressions and the neutral face from static images. The proposed fusion method has been tested on four popular databases (JAFFE, MMI, CASIA and CK+) using two cross-validation schemes: subject-independent and leave-one-subject-out. The obtained results show that the method outperforms both raw feature concatenation and state-of-the-art methods.

DOI: 10.1049/iet-ipr.2018.6235
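
The abstract describes the fusion strategy only at a high level: reduce each descriptor's feature block with its own PCA, then concatenate the projections instead of the raw vectors. The following is a minimal Python sketch of that structure using scikit-learn. The random stand-in feature matrices, the 64-component PCA, the 70/30 split and the linear SVM are illustrative assumptions, not the paper's reported configuration, and the paper's subject-independent and leave-one-subject-out protocols are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in descriptor matrices, one row per face image. In practice these
# would be HOG, LPQ and BSIF features extracted from aligned face crops.
n = 210
blocks = {
    "hog":  rng.standard_normal((n, 1764)),
    "lpq":  rng.standard_normal((n, 256)),
    "bsif": rng.standard_normal((n, 256)),
}
# Six basic expressions plus neutral, balanced for the illustration.
y = rng.permutation(np.repeat(np.arange(7), 30))

idx_tr, idx_te = train_test_split(
    np.arange(n), test_size=0.3, stratify=y, random_state=0
)

# Fit one PCA per descriptor on the training split, then concatenate the
# projections ("transformed shallow features") rather than the raw vectors.
fused_tr, fused_te = [], []
for X in blocks.values():
    pca = PCA(n_components=64).fit(X[idx_tr])
    fused_tr.append(pca.transform(X[idx_tr]))
    fused_te.append(pca.transform(X[idx_te]))
X_tr, X_te = np.hstack(fused_tr), np.hstack(fused_te)

clf = SVC(kernel="linear").fit(X_tr, y[idx_tr])
print("accuracy:", accuracy_score(y[idx_te], clf.predict(X_te)))
```

On random features the printed accuracy is near chance; the point of the sketch is the per-block PCA-then-concatenate structure, as opposed to concatenating the raw descriptor vectors first, which is the baseline the paper compares against.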