Improving the Accuracy of Automatic Facial Expression Recognition in Speaking Subjects with Deep Learning

Abstract
When automatic facial expression recognition is applied to video sequences of speaking subjects, recognition accuracy has been noted to be lower than with video sequences of still subjects. This phenomenon, known as the speaking effect, arises during spontaneous conversations: alongside the affective expressions, the speech articulation process influences facial configurations. In this work we ask whether, beyond facial features, cues related to the articulation process can increase emotion recognition accuracy when provided as additional input to a deep neural network model. We develop two neural networks that classify facial expressions of speaking subjects from the RAVDESS dataset: a spatio-temporal CNN and a GRU-based RNN. They are first trained on facial features only, and then on both facial features and articulation-related cues extracted from a model trained for lip reading, while also varying the number of consecutive frames provided as input. We show that adding articulation-related features increases classification accuracy by up to 12%, with the gain growing as more consecutive frames are provided to the model.
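The abstract describes feeding per-frame facial features, concatenated with articulation cues from a pretrained lip-reading model, into a GRU-based RNN over a variable number of consecutive frames. The paper's exact architecture and feature dimensions are not given here, so the following is only a minimal sketch under assumed dimensions (136 facial-landmark values, a 256-dimensional articulation embedding, 8 RAVDESS emotion classes), using PyTorch:

```python
import torch
import torch.nn as nn

class EmotionGRU(nn.Module):
    """GRU classifier over per-frame feature sequences.

    Each frame is represented by a facial-feature vector concatenated
    with articulation-related cues (e.g. an embedding from a pretrained
    lip-reading model). All dimensions are illustrative assumptions,
    not the paper's actual configuration.
    """

    def __init__(self, face_dim=136, artic_dim=256, hidden=128, n_classes=8):
        super().__init__()
        self.gru = nn.GRU(face_dim + artic_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, face_feats, artic_feats):
        # face_feats: (batch, frames, face_dim)
        # artic_feats: (batch, frames, artic_dim)
        x = torch.cat([face_feats, artic_feats], dim=-1)
        _, h = self.gru(x)          # h: (num_layers, batch, hidden)
        return self.head(h[-1])     # logits: (batch, n_classes)

# Varying the number of consecutive frames only changes the sequence
# length; the same model handles 8-, 16-, or 32-frame clips.
model = EmotionGRU()
face = torch.randn(4, 16, 136)    # 4 clips, 16 consecutive frames
artic = torch.randn(4, 16, 256)
logits = model(face, artic)
print(logits.shape)               # (4, 8): one score per emotion class
```

Training on facial features only, as in the paper's baseline condition, would correspond to dropping the concatenation and passing `face_feats` alone to the GRU.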
Reference Key bursic2020appliedimproving
Authors Sathya Bursic, Giuseppe Boccignone, Alfio Ferrara, Alessandro D'Amelio, Raffaella Lanzarotti
Journal Applied Sciences
Year 2020
DOI 10.3390/app10114002
