Information from the voice fundamental frequency (F0) region accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation

Ting Zhang, Michael Dorman, Anthony J. Spahr

Research output: Contribution to journal › Article

106 Citations (Scopus)

Abstract

Objectives: The aim of this study was to determine the minimum amount of low-frequency acoustic information that is required to achieve speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear. Design: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass-filtered at 125, 250, 500, or 750 Hz, or unfiltered (wideband). Results: Adding low-frequency acoustic information to electrically stimulated information led to a significant improvement in word recognition in quiet and sentence recognition in noise. Improvement was observed in the electric and acoustic stimulation condition even when the acoustic information was limited to the 125-Hz-low-passed signal. Further improvement for the sentences in noise was observed when the acoustic signal was increased to wideband. Conclusions: Information from the voice fundamental frequency (F0) region accounts for the majority of the speech perception benefit when acoustic stimulation is added to electric stimulation. We propose that, in quiet, low-frequency acoustic information leads to an improved representation of voicing, which in turn leads to a reduction in word candidates in the lexicon. In noise, the robust representation of voicing allows access to low-frequency acoustic landmarks that mark syllable structure and word boundaries. These landmarks can bootstrap word and sentence recognition.

Original language: English (US)
Pages (from-to): 63-69
Number of pages: 7
Journal: Ear and Hearing
Volume: 31
Issue number: 1
DOI: 10.1097/AUD.0b013e3181b7190c
State: Published - Feb 2010

ASJC Scopus subject areas

  • Otorhinolaryngology
  • Speech and Hearing

Cite this

Information from the voice fundamental frequency (F0) region accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation. / Zhang, Ting; Dorman, Michael; Spahr, Anthony J.

In: Ear and Hearing, Vol. 31, No. 1, 02.2010, p. 63-69.

@article{aea073142282495f9d9ee6d13d580466,
title = "Information from the voice fundamental frequency (F0) region accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation",
abstract = "Objectives: The aim of this study was to determine the minimum amount of low-frequency acoustic information that is required to achieve speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear. Design: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass-filtered at 125, 250, 500, or 750 Hz, or unfiltered (wideband). Results: Adding low-frequency acoustic information to electrically stimulated information led to a significant improvement in word recognition in quiet and sentence recognition in noise. Improvement was observed in the electric and acoustic stimulation condition even when the acoustic information was limited to the 125-Hz-low-passed signal. Further improvement for the sentences in noise was observed when the acoustic signal was increased to wideband. Conclusions: Information from the voice fundamental frequency (F0) region accounts for the majority of the speech perception benefit when acoustic stimulation is added to electric stimulation. We propose that, in quiet, low-frequency acoustic information leads to an improved representation of voicing, which in turn leads to a reduction in word candidates in the lexicon. In noise, the robust representation of voicing allows access to low-frequency acoustic landmarks that mark syllable structure and word boundaries. These landmarks can bootstrap word and sentence recognition.",
author = "Zhang, Ting and Dorman, Michael and Spahr, {Anthony J.}",
year = "2010",
month = feb,
doi = "10.1097/AUD.0b013e3181b7190c",
language = "English (US)",
volume = "31",
pages = "63--69",
journal = "Ear and Hearing",
issn = "0196-0202",
publisher = "Lippincott Williams {\&} Wilkins",
number = "1",

}

TY - JOUR

T1 - Information from the voice fundamental frequency (F0) region accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation

AU - Zhang, Ting

AU - Dorman, Michael

AU - Spahr, Anthony J.

PY - 2010/2

Y1 - 2010/2

N2 - Objectives: The aim of this study was to determine the minimum amount of low-frequency acoustic information that is required to achieve speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear. Design: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass-filtered at 125, 250, 500, or 750 Hz, or unfiltered (wideband). Results: Adding low-frequency acoustic information to electrically stimulated information led to a significant improvement in word recognition in quiet and sentence recognition in noise. Improvement was observed in the electric and acoustic stimulation condition even when the acoustic information was limited to the 125-Hz-low-passed signal. Further improvement for the sentences in noise was observed when the acoustic signal was increased to wideband. Conclusions: Information from the voice fundamental frequency (F0) region accounts for the majority of the speech perception benefit when acoustic stimulation is added to electric stimulation. We propose that, in quiet, low-frequency acoustic information leads to an improved representation of voicing, which in turn leads to a reduction in word candidates in the lexicon. In noise, the robust representation of voicing allows access to low-frequency acoustic landmarks that mark syllable structure and word boundaries. These landmarks can bootstrap word and sentence recognition.

AB - Objectives: The aim of this study was to determine the minimum amount of low-frequency acoustic information that is required to achieve speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear. Design: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass-filtered at 125, 250, 500, or 750 Hz, or unfiltered (wideband). Results: Adding low-frequency acoustic information to electrically stimulated information led to a significant improvement in word recognition in quiet and sentence recognition in noise. Improvement was observed in the electric and acoustic stimulation condition even when the acoustic information was limited to the 125-Hz-low-passed signal. Further improvement for the sentences in noise was observed when the acoustic signal was increased to wideband. Conclusions: Information from the voice fundamental frequency (F0) region accounts for the majority of the speech perception benefit when acoustic stimulation is added to electric stimulation. We propose that, in quiet, low-frequency acoustic information leads to an improved representation of voicing, which in turn leads to a reduction in word candidates in the lexicon. In noise, the robust representation of voicing allows access to low-frequency acoustic landmarks that mark syllable structure and word boundaries. These landmarks can bootstrap word and sentence recognition.

UR - http://www.scopus.com/inward/record.url?scp=75149185505&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=75149185505&partnerID=8YFLogxK

U2 - 10.1097/AUD.0b013e3181b7190c

DO - 10.1097/AUD.0b013e3181b7190c

M3 - Article

C2 - 20050394

AN - SCOPUS:75149185505

VL - 31

SP - 63

EP - 69

JO - Ear and Hearing

JF - Ear and Hearing

SN - 0196-0202

IS - 1

ER -