Abstract

In this paper, we investigate the effect of transferring emotion-rich features between source and target networks on classification accuracy and training time in a multimodal setting for vision-based emotion recognition. First, we propose emosource, a 6-layer Deep Belief Network (DBN) trained on popular emotion corpora for emotion classification. Second, we propose two 6-layer DBNs, emotarget and emotargetft, and study the transfer of emotion features between source and target networks in a layer-by-layer fashion. To the best of our knowledge, this is the first research effort to study the transfer of emotion features layer-by-layer in a multimodal setting.
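The layer-by-layer transfer the abstract describes can be sketched as copying trained weight matrices from the lower layers of a source network into a freshly initialized target network. This is a minimal illustration only: the layer widths, the transfer depth `k`, and the six-class output are assumptions for the sketch, not the paper's actual configuration, and the subsequent fine-tuning step (the emotargetft variant) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dbn(layer_sizes):
    """Represent a DBN abstractly as a list of weight matrices
    between adjacent layers (biases omitted for brevity)."""
    return [rng.standard_normal((m, n)) * 0.01
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

# Hypothetical layer widths: the paper specifies 6 layers, not their sizes.
sizes = [1024, 512, 256, 128, 64, 6]

emosource = make_dbn(sizes)   # source network, pre-trained on emotion corpora
emotarget = make_dbn(sizes)   # target network, randomly initialized

def transfer_layers(source, target, k):
    """Copy the first k layers' weights from source into a copy of target.
    Layers above k keep the target's own initialization and would be
    trained (or fine-tuned) on the target task."""
    new_weights = [w.copy() for w in target]
    for i in range(k):
        new_weights[i] = source[i].copy()
    return new_weights

# Transfer the lowest 3 of 5 weight matrices; the rest stay target-initialized.
transferred = transfer_layers(emosource, emotarget, k=3)
```

Varying `k` from 0 to the full depth is what makes the study "layer-by-layer": each setting measures how much of the source's emotion representation helps the target's accuracy and training time.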

Original language: English (US)
Title of host publication: Conference Record of the 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
Publisher: IEEE Computer Society
Pages: 449-453
Number of pages: 5
ISBN (Electronic): 9781538639542
DOI: https://doi.org/10.1109/ACSSC.2016.7869079
State: Published - Mar 1 2017
Event: 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016 - Pacific Grove, United States
Duration: Nov 6 2016 - Nov 9 2016

Other

Other: 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
Country: United States
City: Pacific Grove
Period: 11/6/16 - 11/9/16

ASJC Scopus subject areas

  • Signal Processing
  • Computer Networks and Communications


Cite this

Ranganathan, H., Chakraborty, S., & Panchanathan, S. (2017). Transfer of multimodal emotion features in deep belief networks. In Conference Record of the 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016 (pp. 449-453). [7869079] IEEE Computer Society. https://doi.org/10.1109/ACSSC.2016.7869079