Abstract

Emotion analysis and recognition have become an active research topic in the computer vision community. In this paper, we first present the emoFBVP database of multimodal (face, body gesture, voice and physiological signals) recordings of actors enacting various expressions of emotions. The database consists of audio and video sequences of actors displaying three different intensities of expressions of 23 different emotions, along with facial feature tracking, skeletal tracking and the corresponding physiological data. Next, we describe four deep belief network (DBN) models and show that these models generate robust multimodal features for emotion classification in an unsupervised manner. Our experimental results show that the DBN models outperform state-of-the-art methods for emotion recognition. Finally, we propose convolutional deep belief network (CDBN) models that learn salient multimodal features of expressions of emotions. Our CDBN models achieve better recognition accuracies than state-of-the-art methods when recognizing low-intensity or subtle expressions of emotions.
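To illustrate the general idea of unsupervised multimodal feature learning described above (not the authors' actual models), the following is a minimal sketch of a DBN-style pipeline: per-modality features are concatenated, two stacked restricted Boltzmann machine layers are trained layer-wise without labels, and a simple classifier is fit on top for emotion classification. All feature dimensions, data, and hyperparameters below are hypothetical placeholders.

```python
# Minimal sketch (not the paper's implementation): stacked-RBM "DBN" features
# over concatenated multimodal inputs, followed by a linear classifier.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)

# Hypothetical per-sample features from each modality (face landmarks,
# skeletal joints, audio descriptors, physiological statistics).
n_samples = 200
face, body, voice, physio = (rng.rand(n_samples, d) for d in (68, 60, 40, 12))
X = np.hstack([face, body, voice, physio])
y = rng.randint(0, 23, size=n_samples)   # 23 emotion classes, as in emoFBVP

# Two RBM layers learn features in an unsupervised manner; logistic
# regression is then trained on the learned representation.
dbn = Pipeline([
    ("scale", MinMaxScaler()),   # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print("training accuracy:", dbn.score(X, y))
```

In practice, the learned RBM representation would be evaluated with held-out data and proper cross-validation; this sketch only shows how unsupervised feature layers and a supervised classifier compose in one pipeline.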

Original language: English (US)
Title of host publication: 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781509006410
DOIs
State: Published - May 23 2016
Event: IEEE Winter Conference on Applications of Computer Vision, WACV 2016 - Lake Placid, United States
Duration: Mar 7 2016 - Mar 10 2016

Publication series

Name: 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016

Other

Other: IEEE Winter Conference on Applications of Computer Vision, WACV 2016
Country/Territory: United States
City: Lake Placid
Period: 3/7/16 - 3/10/16

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Vision and Pattern Recognition
