Fundamentals of directional hearing

William A. Yost, Raymond H. Dye

Research output: Contribution to journal › Review article

18 Scopus citations

Abstract

We use the cues of interaural time and level differences to locate sounds in azimuth; the head-related transfer function (HRTF) aids vertical localization; and loudness and reflections provide additional cues for judging depth. When a listener's HRTFs for each point in space are used to filter a signal before it is presented over headphones, listeners often perceive the sound source much as they would if it were presented in the real world; otherwise, headphone-delivered stimuli are lateralized inside the head. Our ability to use interaural level differences is probably best for high-frequency sounds, whereas sounds with low-frequency energy, or high-frequency sounds with slow temporal modulations, may be localized on the basis of interaural time differences. Coincidence-detection networks are one way the auditory system may process interaural time and level differences as cues for localization. Spatially separating concurrent sound sources aids us in perceptually segregating them, although the advantage of spatial separation for sound source segregation may be small in many situations.
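The coincidence-detection idea mentioned in the abstract is often modeled computationally as a cross-correlation over a bank of internal delays: the delay whose "coincidence count" peaks indicates the interaural time difference. The following is a minimal illustrative sketch of that idea, not the authors' model; the sample rate, signal, and delay values are arbitrary assumptions.

```python
import numpy as np

# Toy coincidence-detection (cross-correlation) ITD estimator.
# Assumptions: broadband noise input, a pure interaural delay, no level cues.
fs = 44_100                        # sample rate in Hz (assumed)
rng = np.random.default_rng(0)
left = rng.standard_normal(4096)   # broadband noise at the left ear
true_delay = 20                    # samples (~0.45 ms interaural time difference)
right = np.roll(left, true_delay)  # right ear receives a delayed copy

# Each candidate internal delay is one "coincidence detector" tap:
# the tap aligned with the true interaural delay fires most strongly.
max_lag = 50
lags = np.arange(-max_lag, max_lag + 1)
corr = np.array([np.dot(left, np.roll(right, -lag)) for lag in lags])

estimated_delay = lags[np.argmax(corr)]
print(estimated_delay)  # → 20, the imposed interaural delay
```

The peak of the correlation function recovers the imposed delay exactly here because the signals are identical up to a circular shift; real binaural signals add noise, level differences, and head filtering on top of this.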

Original language: English (US)
Pages (from-to): 321-344
Number of pages: 24
Journal: Seminars in Hearing
Volume: 18
Issue number: 4
DOIs
State: Published - Jan 1 1997
Externally published: Yes

ASJC Scopus subject areas

  • Speech and Hearing
