Effects of minimum stimulation settings for the Med El Tempo+ speech processor on speech understanding

Anthony J. Spahr, Michael Dorman

Research output: Contribution to journal › Article › peer-review


Abstract

Objective: The aim of this study was to assess the effects of variations in the settings for minimum stimulation levels on speech understanding for adult cochlear implant recipients using the Med El Tempo+ speech processor.

Design: Fifteen patients served as listeners. The test material included sentences presented at a conversational level in noise (74 dB SPL at a +10 dB signal-to-noise ratio), sentences presented at a soft level in a quiet background (54 dB SPL), consonants in a "vCv" environment (74 dB SPL re: vowel peaks), and synthetic vowels in a "bVt" environment (54 dB SPL re: vowel peaks). The patients' speech processors were programmed with minimum stimulation levels set to behavioral threshold, set to 10% of most comfortable loudness (MCL), and set to 0 μA.

Results: The level of speech understanding achieved in the behavioral threshold condition was not significantly different from that achieved in either the 10% of MCL or 0 μA conditions for any test material. Only 2 of the 15 patients demonstrated performance differences of greater than 10 percentage points between the 0 μA condition and the behavioral threshold condition on more than a single test.

Conclusions: Our results demonstrate that there are no grievous consequences, in terms of speech understanding, for setting minimum stimulation levels below behavioral thresholds. The time savings from setting thresholds to 10% of MCL or 0 μA may be especially useful during the initial device fitting.

Original language: English (US)
Pages (from-to): 2S-6S
Journal: Ear and Hearing
Volume: 26
Issue number: 4 Suppl.
DOIs
State: Published - Aug 2005

ASJC Scopus subject areas

  • Otorhinolaryngology
  • Speech and Hearing
