
Sound localization



Sound localization is a listener's ability to identify the location or origin of a detected sound or the methods in acoustical engineering to simulate the placement of an auditory cue in a virtual 3D space (see binaural recording).

There are two general classes of cues for sound localization: binaural cues and monaural cues.


Binaural cues

Binaural localization relies on comparing the auditory input from two separate detectors; accordingly, most auditory systems feature two ears, one on each side of the head. The primary biological binaural cue is the split-second delay between the arrival of sound from a single source at the near ear and its arrival at the far ear, technically termed the "interaural time difference" (ITD); in humans the maximum ITD is about 0.63 ms. A second binaural cue, less significant in ground-dwelling animals, is the reduction in loudness at the far ear. This frequency-dependent cue is called the "interaural level difference" (ILD), and also the "interaural intensity difference" (IID) or "interaural amplitude difference" (IAD). Since the eardrum responds to sound pressure, these level differences are differences in sound pressure level.
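The geometry behind the ITD can be sketched with Woodworth's spherical-head approximation, in which the path to the far ear wraps around the head. The function name, head radius, and speed of sound below are illustrative values, not taken from the text:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate the interaural time difference (in seconds) for a
    spherical head (Woodworth's model): the far-ear path is longer by
    r*(theta + sin(theta)), where theta is the azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# The ITD grows from zero (source straight ahead) to its maximum at 90 degrees:
print(round(woodworth_itd(0) * 1000, 3))   # 0.0 ms
print(round(woodworth_itd(90) * 1000, 3))  # 0.656 ms, close to the ~0.63 ms cited above
```

With a typical human head radius of about 8.75 cm, the model lands near the maximum ITD quoted in the text.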

Note that these cues aid only in localizing the sound source's azimuth (the angle between the source and the sagittal plane), not its elevation (the angle between the source and the horizontal plane through both ears), unless the two detectors are positioned at different heights in addition to being separated in the horizontal plane. In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement. This explains the innate behavior of cocking the head to one side when trying to localize a sound precisely. Instantaneous localization in more than two dimensions from time-difference or amplitude-difference cues alone requires more than two detectors. However, in many animals the attenuation a sound receives in travelling from the source to the eardrum varies in a complex way: the frequency-dependent attenuation changes with both azimuth and elevation. These variations can be summarised in the head-related transfer function, or HRTF. As a result, when a sound is wideband (that is, has its energy spread over the audible spectrum), an animal can estimate both azimuth and elevation simultaneously without tilting its head. Additional information can still be gained by moving the head, which changes the HRTF of each ear in a way known (implicitly!) by the animal.

In vertebrates, interaural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress[1], this calculation relies on delay lines: neurons in the superior olive that receive innervation from each ear via axons of differing lengths. Because some cells are more directly connected to one ear than to the other, each is specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. However, because Jeffress' theory cannot account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot be entirely correct, as pointed out by Gaskell[2].
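The equivalence to cross-correlation can be illustrated with a toy estimator that searches for the lag maximizing the correlation between the two ear signals. This is a brute-force sketch; the function name and the sample click signals are illustrative:

```python
def estimate_itd(left, right, fs):
    """Estimate the ITD (in seconds) as the lag that maximizes the
    cross-correlation between the left- and right-ear signals: the
    mathematical analogue of the Jeffress delay-line scheme."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n // 2), n // 2 + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / fs

# A click arriving at the left ear 3 samples before the right ear:
fs = 44100
left = [0.0] * 20; left[5] = 1.0
right = [0.0] * 20; right[8] = 1.0
print(estimate_itd(left, right, fs) * 1e6)  # about 68 microseconds (3 samples at 44.1 kHz)
```

Real auditory systems do this with narrowband, phase-locked spike trains rather than raw waveforms, but the underlying computation is the same lag search.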

The tiny parasitic fly Ormia ochracea has become a model organism in sound localization experiments because of its unique ear. The animal is too small for the time difference of sound arriving at the two ears to be calculated in the usual way, yet it can determine the direction of sound sources with exquisite precision. The tympanic membranes of opposite ears are directly connected mechanically, allowing resolution of nanosecond time differences[3] [4] and requiring a new neural coding strategy.[5] Ho[6] showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities when only small arrival time and intensity differences were available to the animal’s head. Efforts to build directional microphones based on the coupled-eardrum structure are underway.

Monaural (filtering) cues

Monaural localization depends mostly on the filtering effects of external structures. In advanced auditory systems, these external filters include the head, shoulders, torso, and outer ear or "pinna", and can be summarized in the head-related transfer function. Sounds are frequency-filtered in a way that depends on the angle from which they strike the various external filters. The most significant filtering cue for biological sound localization is the pinna notch, a notch-filtering effect resulting from destructive interference of waves reflected from the outer ear; the frequency that is selectively notched out depends on the angle from which the sound strikes the outer ear. Instantaneous localization of sound source elevation in advanced systems depends primarily on the pinna notch and other head-related filtering. These monaural effects also provide azimuth information, but it is inferior to that gained from binaural cues.
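The destructive-interference mechanism behind the pinna notch can be sketched under a simplifying assumption: a single reflection off the pinna travels a fixed extra distance compared with the direct sound, and cancellation occurs whenever that extra path equals an odd number of half-wavelengths. The function name and the 2 cm path difference are illustrative:

```python
def pinna_notch_freqs(extra_path_m, speed_of_sound=343.0, fmax=20000.0):
    """Frequencies cancelled by destructive interference when a reflection
    travels extra_path_m farther than the direct sound. Cancellation occurs
    at f = (2k + 1) * c / (2 * extra_path), i.e. odd half-wavelength delays."""
    fundamental = speed_of_sound / (2.0 * extra_path_m)
    freqs = []
    k = 0
    while (2 * k + 1) * fundamental <= fmax:
        freqs.append((2 * k + 1) * fundamental)
        k += 1
    return freqs

# A reflected path 2 cm longer than the direct one puts the first audible
# notch near 8.6 kHz; a different angle of incidence changes the extra path
# and therefore shifts the notch, which is what makes it a direction cue.
print(pinna_notch_freqs(0.02))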

In order to enhance filtering information, many animals have large, specially shaped outer ears. Many also have the ability to turn the outer ear at will, which allows for better sound localization and also better sound detection. Bats and barn owls are paragons of monaural localization in the animal kingdom, and have thus become model organisms.

Processing of head-related transfer functions for biological sound localization occurs in the auditory cortex.

Distance cues

Neither interaural time differences nor monaural filtering information provides good distance localization. Distance can theoretically be approximated through interaural amplitude differences or by comparing the relative head-related filtering in each ear: a combination of binaural and filtering information. The most direct cue to distance is sound amplitude, which decays with increasing distance. This is not a reliable cue on its own, however, because in general the listener does not know how strong the sound source is. In the case of familiar sounds, such as speech, there is implicit knowledge of how strong the source should be, which enables a rough distance judgment.
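The amplitude-decay cue follows the inverse-distance law for sound pressure from a point source in a free field: each doubling of distance costs about 6 dB. A minimal sketch (the function name is illustrative):

```python
import math

def level_drop_db(d1, d2):
    """Drop in sound pressure level (dB) of a point source in free field
    when moving from distance d1 to d2: 20 * log10(d2 / d1)."""
    return 20.0 * math.log10(d2 / d1)

# Doubling the distance costs about 6 dB - but without knowing the source's
# absolute level, the listener cannot turn this into an absolute distance.
print(round(level_drop_db(1.0, 2.0), 2))  # 6.02 dB
```

This is why the cue only becomes useful for familiar sounds, where the expected source level is implicitly known.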

In general, humans are best at judging sound source azimuth, then elevation, and worst at judging distance. Source distance is qualitatively obvious to a human observer when a sound is extremely close (the mosquito-in-the-ear effect), or when the sound is echoed by large structures in the environment (such as walls and ceilings). Such echoes provide reasonable cues to the distance of a sound source, in particular because the strength of the echoes does not depend on the distance of the source, while the strength of the sound arriving directly from the source becomes weaker with distance. The ratio of direct-to-echo strength therefore alters the quality of the sound in a way to which humans are sensitive, making consistent, although not very accurate, distance judgments possible. This method generally fails outdoors, due to a lack of echoes, although some outdoor environments, such as mountains, also generate strong, discrete echoes. Outdoors, distance evaluation is instead largely based on the received timbre of the sound: high-frequency components are attenuated more strongly by the air, so distant sounds appear duller than normal (lacking in treble).

Bi-coordinate sound localization in owls

Most owls are nocturnal or crepuscular birds of prey. Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne[7] have shown that owls are sensitive to the sounds made by their prey, not to its heat or smell. In fact, sound cues are both necessary and sufficient for owls to localize mice from a distant perch. For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.

ITD and IID

Owls living above ground must be able to determine the necessary angle of descent, i.e. the elevation, in addition to the azimuth (the horizontal angle to the sound). This bi-coordinate sound localization is accomplished through two binaural cues: the interaural time difference (ITD) and the interaural intensity difference (IID), also known as the interaural level difference (ILD). This ability of owls is unusual: in mammals such as humans, which effectively live in a two-dimensional world, ITD and IID are redundant cues for azimuth.

ITD occurs whenever the distance from the source of sound to the two ears is different, resulting in differences in the arrival times of the sound at the two ears. When the sound source is directly in front of the owl, there is no ITD, i.e. the ITD is zero. In sound localization, ITDs are used as cues for location in the azimuth. ITD changes systematically with azimuth. Sounds to the right arrive first at the right ear; sounds to the left arrive first at the left ear.

In mammals, there is an intensity difference in sounds at the two ears caused by the sound-shadowing effect of the head. But in many species of owls, level differences arise primarily for sounds that are shifted above or below the elevation of the horizontal plane. This is because of the asymmetric placement of the ear openings in the owl's head, such that sounds from above the owl reach the left ear first and sounds from below reach the right ear. IID is a measure of the difference in the intensity of the sound as it reaches each ear. In many owls, IIDs for high-frequency sounds (above 4 or 5 kHz) are the principal cues for locating sound elevation.

Parallel processing pathways in the brain

The axons of the auditory nerve originate from the hair cells of the cochlea in the inner ear. Different sound frequencies are encoded by different fibers, arranged along the length of the auditory nerve, but codes for the timing and intensity of the sound are not segregated between fibers. Instead, the ITD is encoded by phase locking, i.e. firing at or near a particular phase angle of the sinusoidal stimulus sound wave, and the IID is encoded by spike rate; both parameters are carried by each fiber of the auditory nerve[8].

The fibers of the auditory nerve innervate both cochlear nuclei in the brainstem, the cochlear nucleus magnocellularis and the cochlear nucleus angularis (see figure). The neurons of the nucleus magnocellularis phase-lock, but are fairly insensitive to variations in sound intensity, while the neurons of the nucleus angularis phase-lock poorly, if at all, but are sensitive to variations in sound intensity. These two nuclei are the starting points of two separate but parallel pathways to the inferior colliculus: the pathway from nucleus magnocellularis processes ITDs, and the pathway from nucleus angularis processes IID.


In the time pathway, the nucleus laminaris is the first site of binaural convergence. It is here that the ITD is detected and encoded using neuronal delay lines and coincidence detection, as in the Jeffress model: when phase-locked impulses from the left and right ears coincide at a laminaris neuron, the cell fires most strongly. The nucleus laminaris thus acts as a delay-line coincidence detector, converting distance traveled into time delay and generating a map of interaural time difference. Neurons of the nucleus laminaris project to the core of the central nucleus of the inferior colliculus and to the anterior lateral lemniscal nucleus.
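The delay-line coincidence scheme can be sketched as a toy model: an array of detectors, each delaying one ear's spike train by a different amount, with the detector whose internal delay cancels the external ITD registering the most coincidences. The function name and spike times are illustrative, and real laminaris neurons operate on continuous phase-locked activity rather than exact integer matches:

```python
def jeffress_map(left_spikes, right_spikes, max_delay):
    """Toy nucleus-laminaris model: try every internal delay in
    [-max_delay, +max_delay] time steps, count coincidences between the
    delayed left spike train and the right spike train, and return the
    delay of the best-responding detector, which encodes the ITD."""
    responses = []
    for delay in range(-max_delay, max_delay + 1):
        coincidences = sum(1 for t in left_spikes if (t + delay) in right_spikes)
        responses.append((delay, coincidences))
    return max(responses, key=lambda r: r[1])[0]

# Sound reaches the left ear 2 steps early: left spikes at t, right at t + 2.
left = {3, 10, 17}
right = {5, 12, 19}
print(jeffress_map(left, right, max_delay=4))  # 2
```

The winning detector's position in the array is the "place" in Jeffress' place theory: the map of ITD is read out anatomically, not from firing rates alone.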

In the intensity pathway, the posterior lateral lemniscal nucleus is the site of binaural convergence and the place where IID is processed. In each brain hemisphere, stimulation of the contralateral ear excites, and stimulation of the ipsilateral ear inhibits, the neurons of this nucleus. The degree of excitation and inhibition depends on sound intensity, and the difference between the strength of the inhibitory input and that of the excitatory input determines the rate at which these neurons fire. Their response is thus a function of the difference in sound intensity between the two ears.

At the lateral shell of the central nucleus of the inferior colliculus, the time and intensity pathways converge. The lateral shell projects to the external nucleus, where each space-specific neuron responds to acoustic stimuli only if the sound originates from a restricted area in space, i.e. the receptive field of that neuron. These neurons respond exclusively to binaural signals containing the same ITD and IID that would be created by a sound source located in the neuron's receptive field. Thus, their receptive fields arise from the neurons' tuning to a particular combination of ITD and IID, each within a narrow range. These space-specific neurons can thus form a map of auditory space in which the positions of receptive fields in space are isomorphically projected onto the anatomical sites of the neurons[9].
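The joint ITD-and-IID tuning can be sketched as a toy space-specific neuron that responds only when both cues fall inside its narrow preferred ranges. The function name, preferred values, and tolerances are all illustrative, not measured quantities:

```python
def space_specific_response(itd_us, iid_db, preferred_itd_us, preferred_iid_db,
                            itd_tol_us=10.0, iid_tol_db=1.5):
    """Toy space-specific neuron: fires only when the stimulus ITD (azimuth
    cue) AND IID (elevation cue) both lie inside its narrow preferred
    ranges, so each neuron marks one patch of auditory space."""
    return (abs(itd_us - preferred_itd_us) <= itd_tol_us and
            abs(iid_db - preferred_iid_db) <= iid_tol_db)

# A neuron tuned to ITD = 40 microseconds and IID = 3 dB:
print(space_specific_response(38.0, 2.5, 40.0, 3.0))  # True: inside the field
print(space_specific_response(38.0, 8.0, 40.0, 3.0))  # False: wrong elevation
```

Because the conjunction requires both cues at once, a population of such neurons with different preferred pairs tiles azimuth and elevation jointly, which is what makes the anatomical map of auditory space possible.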

Significance of asymmetrical ears for localization of elevation

The ears of many species of owls, including the barn owl (Tyto alba), are asymmetrical. For example, in barn owls, the placement of the two ear flaps (opercula) lying directly in front of the openings to the ear canals differs between the ears: the center of the left ear flap is slightly above a horizontal line passing through the eyes and directed downward, while the center of the right ear flap is slightly below the line and directed upward. In two other species of owls with asymmetrical ears, the saw-whet owl and the long-eared owl, the asymmetry is achieved by very different means: in saw-whets, the skull itself is asymmetrical, while in the long-eared owl, skin structures lying near the ear form asymmetrical entrances to the ear canals, completed by a horizontal membrane. Ear asymmetry thus seems to have evolved on at least three separate occasions among owls. Because owls depend on their sense of hearing for hunting, this convergent evolution suggests that asymmetry is important for sound localization in the owl.

Ear asymmetry causes sound originating from below eye level to sound louder in the left ear, while sound originating from above eye level sounds louder in the right ear. Asymmetrical ear placement also causes the IID for high frequencies (between 4 kHz and 8 kHz) to vary systematically with elevation, converting IID into a map of elevation. It is therefore essential for an owl to be able to hear high frequencies. Many birds have the neurophysiological machinery to process both ITD and IID but, because they have small heads and relatively low-frequency sensitivity, they use both parameters only for localization in the azimuth. Through evolution, the ability to hear frequencies higher than 3 kHz, the highest frequency of owl flight noise, enabled owls to exploit the elevational IIDs produced by small ear asymmetries that arose by chance, and began the evolution of more elaborate forms of ear asymmetry[10].

Another demonstration of the importance of ear asymmetry in owls is that, in experiments, owls with symmetrical ears, such as the screech owl (Otus asio) and the great horned owl (Bubo virginianus), could not be trained to locate prey in total darkness, whereas owls with asymmetrical ears could be trained[11].

References

  1. ^ Jeffress, L.A. (1948). A place theory of sound localization. Journal of Comparative and Physiological Psychology 41, 35-39.
  2. ^ Gaskell, H. (1983). The precedence effect. Hearing Research 11, 277-303.
  3. ^ Miles, R.N., Robert, D. and Hoy, R.R. (1995). Mechanically coupled ears for directional hearing in the parasitoid fly Ormia ochracea. Journal of the Acoustical Society of America 98(6), 3059-3070. PMID 8550933 doi:10.1121/1.413830
  4. ^ Robert, D., Miles, R.N. and Hoy, R.R. (1996). Directional hearing by mechanical coupling in the parasitoid fly Ormia ochracea. Journal of Comparative Physiology A 179(1), 29-44. PMID 8965258 doi:10.1007/BF00193432
  5. ^ Mason, A.C., Oshinsky, M.L. and Hoy, R.R. (2001). Hyperacute directional hearing in a microscale auditory system. Nature 410(6829), 686-690. PMID 11287954 doi:10.1038/35070564
  6. ^ Ho, C.C. and Narins, P.M. (2006). Directionality of the pressure-difference receiver ears in the northern leopard frog, Rana pipiens pipiens. Journal of Comparative Physiology A 192(4), 417-429.
  7. ^ Payne, R.S. (1962). How the barn owl locates prey by hearing. The Living Bird, First Annual of the Cornell Laboratory of Ornithology, 151-159.
  8. ^ Zupanc, G.K.H. (2004). Behavioral Neurobiology: An Integrative Approach. Oxford University Press, New York, 142-149.
  9. ^ Knudsen, E.I. and Konishi, M. (1978). A neural map of auditory space in the owl. Science 200, 795-797.
  10. ^ Konishi, M. and Volman, S.F. (1994). Adaptations for bi-coordinate sound localization in owls. Neural Basis of Behavioral Adaptations, 1-9.
  11. ^ Payne, R.S. (1971). Acoustic location of prey by barn owls (Tyto alba). Journal of Experimental Biology 54, 535-573.

See also

  • Echolocation
  • Coincidence Detection in Neurobiology
 
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Sound_localization". A list of authors is available in Wikipedia.