Talker-identification training using simulations of binaurally combined electric and acoustic hearing: Generalization to speech and emotion recognition

Vidya Krull, Xin Luo, Karen Iler Kirk

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

Understanding speech in background noise, identifying talkers, and recognizing vocal emotion are challenging for cochlear implant (CI) users because of the poor spectral resolution and limited pitch cues provided by the CI. Recent studies have shown that bimodal CI users, that is, CI users who wear a hearing aid (HA) in the non-implanted ear, benefit in speech understanding both in quiet and in noise. This study compared the efficacy of talker-identification training in two groups of young normal-hearing adults listening to acoustic simulations of either unilateral CI or bimodal (CI-HA) hearing. Training improved talker identification in both groups, with better overall performance for simulated bimodal hearing. Generalization of learning to sentence and emotion recognition also was assessed in both subject groups. Sentence recognition in quiet and in noise improved for both groups, regardless of whether the talkers had been heard during training. Generalization to improved emotion recognition for two unfamiliar talkers also was observed in both groups, with the simulated bimodal-hearing group showing better overall emotion-recognition performance. Improvements in sentence recognition were retained one month after training in both groups. These results have potential implications for the aural rehabilitation of conventional and bimodal CI users.

Original language: English (US)
Pages (from-to): 3069-3078
Number of pages: 10
Journal: Journal of the Acoustical Society of America
Volume: 131
Issue number: 4
DOIs
State: Published - Apr 2012
Externally published: Yes

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Acoustics and Ultrasonics
