We use the cues of interaural time and level differences to localize sounds in azimuth; the head-related transfer function (HRTF) aids vertical localization; and loudness and reflections provide additional cues for judging depth. When a listener's HRTFs for each point in space are used to filter a signal before it is presented over headphones, listeners often perceive the sound source much as they would if it were presented in the real world; otherwise, headphone-delivered stimuli are lateralized inside the head. Our ability to use interaural level differences is probably best for high-frequency sounds, whereas sounds with low-frequency energy, or with high-frequency energy but slow temporal modulations, may be localized on the basis of interaural time differences. Coincidence-detection networks may be one way the auditory system processes interaural time and level differences as cues for localization. Spatially separating concurrent sound sources aids their perceptual segregation, although the advantage of spatial separation for sound source segregation may be small in many situations.
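As an illustrative sketch only (not a model proposed in this work), the coincidence-detection idea for interaural time differences can be approximated computationally as a cross-correlation of the two ear signals, where the internal delay yielding the best match corresponds to the interaural time difference. The function name and conventions below are assumptions for illustration:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (in seconds) by
    cross-correlating the left- and right-ear signals, a rough
    Jeffress-style coincidence-detection analogy.

    A positive return value means the left-ear signal lags the
    right (i.e., the source is toward the right).
    """
    n = len(left)
    # Full cross-correlation over all relative delays; lag 0
    # corresponds to output index n - 1.
    xcorr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(xcorr)) - (n - 1)
    return lag / fs
```

For example, delaying a broadband noise at one ear by a few samples and feeding both signals to this function recovers that delay, mimicking the coincidence network's best-matching internal delay.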