Please use this identifier to cite or link to this item: https://idr.l4.nitk.ac.in/jspui/handle/123456789/12754
Title: Recognition of emotions from video using acoustic and facial features
Authors: Rao, K.S.
Koolagudi, S.G.
Issue Date: 2015
Citation: Signal, Image and Video Processing, 2015, Vol. 9, No. 5, pp. 1029-1045
Abstract: In this paper, acoustic and facial features extracted from video are explored for recognizing emotions. The temporal variation of the gray values of pixels within the eye and mouth regions is used as a feature to capture emotion-specific knowledge from facial expressions. Acoustic features representing spectral and prosodic information are explored for recognizing emotions from the speech signal. Autoassociative neural network models are used to capture the emotion-specific information from the acoustic and facial features. The basic objective of this work is to examine the capability of the proposed acoustic and facial features in capturing emotion-specific information. Further, the correlations among the feature sets are analyzed by combining their evidence at different levels. The emotion recognition systems developed using acoustic and facial features achieve recognition performance of 85.71 % and 88.14 %, respectively. Combining the evidence of the models developed using acoustic and facial features improves the recognition performance to 93.62 %. The performance of the emotion recognition systems developed using neural network models is compared with that of hidden Markov models, Gaussian mixture models, and support vector machine models. The proposed features and models are evaluated on a real-life emotional database, the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database, recently collected at the University of Southern California. © 2013, Springer-Verlag London.
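
Note: The abstract describes combining the evidence of the acoustic and facial autoassociative neural network (AANN) models to improve recognition. The sketch below is a minimal illustration of weighted score-level fusion of two such modalities, assuming each per-emotion model yields a reconstruction error that is mapped to a normalised confidence; the emotion labels, error values, score mapping, and fusion weight are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Illustrative emotion classes (assumed, not the paper's exact label set).
    EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

    def scores_from_errors(errors):
        """Map per-emotion reconstruction errors e_k to confidences
        c_k = exp(-e_k), normalised to sum to one over the classes."""
        conf = np.exp(-np.asarray(errors, dtype=float))
        return conf / conf.sum()

    def fuse(acoustic_scores, facial_scores, w_acoustic=0.4):
        """Weighted score-level fusion of the two modalities
        (the weight 0.4 is an assumed value for illustration)."""
        acoustic_scores = np.asarray(acoustic_scores, dtype=float)
        facial_scores = np.asarray(facial_scores, dtype=float)
        return w_acoustic * acoustic_scores + (1.0 - w_acoustic) * facial_scores

    if __name__ == "__main__":
        # Hypothetical reconstruction errors from the per-emotion models.
        acoustic_errors = [0.9, 0.3, 1.2, 1.0]   # speech-based models
        facial_errors = [1.1, 0.4, 0.8, 1.3]     # eye/mouth-region models

        fused = fuse(scores_from_errors(acoustic_errors),
                     scores_from_errors(facial_errors))
        print("Recognised emotion:", EMOTIONS[int(np.argmax(fused))])

In this sketch, lower reconstruction error for a given emotion model means that model fits the test sample better, so the fused class with the highest combined confidence is taken as the recognised emotion.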
URI: http://idr.nitk.ac.in/jspui/handle/123456789/12754
Appears in Collections: 1. Journal Articles

Files in This Item:
There are no files associated with this item.
