
Hearing and neurons: do ears have a sampling period?


From what I have read, outer hair cells in the human ear amplify incoming signals and inner hair cells "pick up" the signals and generate action potentials. However, neurons have refractory periods during which they cannot fire again. Does this mean that the human ear has a "sampling period" within which it cannot "pick up" sounds?


Inner hair cells (IHCs) do not fire action potentials themselves. It's the auditory nerve fibers that synapse with the IHCs that generate action potentials. The firing rate of the auditory nerve can be as high as a few hundred Hz, with a refractory period as short as 1 ms or so (depending on the animal).

However, it is important to note that the signal is not sampled at this rate. As you may know, the auditory signal is first transformed to the "frequency" domain through the physical structure of the cochlea. Each inner hair cell therefore roughly encodes only the relative strength of a single frequency band. There are presumably inner hair cells in the human cochlea that correspond to a band centered around 18 kHz, for example, but neither the neurotransmitter release of the corresponding IHC nor the auditory nerve can follow the waveform cycle by cycle at 18 kHz. Nevertheless, the amplitude modulation of that high-frequency band is what gets transmitted.
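To make that last point concrete, here is a minimal sketch (Python with NumPy/SciPy; the 18 kHz carrier, 100 Hz modulation, and filter settings are illustrative assumptions, not taken from any specific model) of how half-wave rectification followed by low-pass filtering, a standard first approximation of inner-hair-cell transduction, recovers the amplitude modulation of a carrier far above any neural firing rate.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 100_000                      # simulation sampling rate (Hz), not a claim about the ear
t = np.arange(0, 0.05, 1 / fs)

# An 18 kHz carrier whose amplitude is modulated at 100 Hz.
carrier = np.sin(2 * np.pi * 18_000 * t)
envelope = 0.5 * (1 + np.sin(2 * np.pi * 100 * t))
signal = envelope * carrier

# Crude IHC-like front end: half-wave rectify, then low-pass at ~1 kHz.
rectified = np.maximum(signal, 0.0)
b, a = butter(2, 1_000 / (fs / 2), btype="low")
receptor_potential = lfilter(b, a, rectified)

# The 18 kHz carrier is filtered out, but the 100 Hz modulation survives,
# so a fibre firing at only a few hundred Hz can still convey it.
```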

Also, thinking of neural firing as a "sampling period" is not always a good analogy. There are debates about this, but it could be that the precise timing of action potentials carries a large amount of information about the stimulus (perhaps less so in the early auditory system).

If you want to see some computational modeling work on the inner ear, IHCs, and the auditory nerve, I recommend the Meddis IHC model:

  • C. J. Sumner, E. A. Lopez-Poveda, L. P. O'Mard, and R. Meddis, "A revised model of the inner-hair cell and auditory-nerve complex," J. Acoust. Soc. Am. 111(5), 2002.

Simple treatment may minimize hearing loss triggered by loud noises

It's well known that exposure to extremely loud noises -- whether it's an explosion, a firecracker or even a concert -- can lead to permanent hearing loss. But knowing how to treat noise-induced hearing loss, which affects about 15 percent of Americans, has largely remained a mystery. That may eventually change, thanks to new research from the Keck School of Medicine of USC, which sheds light on how noise-induced hearing loss happens and shows how a simple injection of a salt- or sugar-based solution into the middle ear may preserve hearing. The results of the study were published today in PNAS.

Deafening sound

To develop a treatment for noise-induced hearing loss, the researchers first had to understand its mechanisms. They built a tool using novel miniature optics to image inside the cochlea, the hearing portion of the inner ear, and exposed mice to a loud noise similar to that of a roadside bomb.

They discovered that two things happen after exposure to a loud noise: sensory hair cells, which are the cells that detect sound and convert it to neural signals, die, and the inner ear fills with excess fluid, leading to the death of neurons.

"That buildup of fluid pressure in the inner ear is something you might notice if you go to a loud concert," says the study's corresponding author John Oghalai, MD, chair and professor of the USC Tina and Rick Caruso Department of Otolaryngology -- Head and Neck Surgery and holder of the Leon J. Tiber and David S. Alpert Chair in Medicine. "When you leave the concert, your ears might feel full and you might have ringing in your ears. We were able to see that this buildup of fluid correlates with neuron loss."

Both neurons and sensory hair cells play critical roles in hearing.

"The death of sensory hair cells leads to hearing loss. But even if some sensory hair cells remain and still work, if they're not connected to a neuron, then the brain won't hear the sound," Oghalai says.

The researchers found that sensory hair cell death occurred immediately after exposure to loud noise and was irreversible. Neuron damage, however, had a delayed onset, opening a window of opportunity for treatment.

A simple solution

The buildup of fluid in the inner ear occurred over a period of a few hours after loud noise exposure and contained high concentrations of potassium. To reverse the effects of the potassium and reduce the fluid buildup, salt- and sugar-based solutions were injected into the middle ear, just through the eardrum, three hours after noise exposure. The researchers found that treatment with these solutions prevented 45-64 percent of neuron loss, suggesting that the treatment may offer a way to preserve hearing function.

The treatment could have several potential applications, Oghalai explains.

"I can envision soldiers carrying a small bottle of this solution with them and using it to prevent hearing damage after exposure to blast pressure from a roadside bomb," he says. "It might also have potential as a treatment for other diseases of the inner ear that are associated with fluid buildup, such as Meniere's disease."

Oghalai and his team plan to conduct further research on the exact sequence of steps between fluid buildup in the inner ear and neuron death, followed by clinical trials of their potential treatment for noise-induced hearing loss.


Development of Auditory and Vestibular Systems

Development of Auditory and Vestibular Systems, fourth edition, presents a global and synthetic view of the main aspects of the development of the stato-acoustic system. Unique to this volume is the joint discussion of two sensory systems that, although close at the embryological stage, diverge during development and later reveal conspicuous functional differences at the adult stage. This work covers development from the auditory receptors up to the central auditory system, drawing on several animal models, including humans. Coverage of the vestibular system, spanning amphibians to the effects of altered gravity on development in different species, offers examples of the diversity and complexity of life at all levels, from genes through anatomical form and function to, ultimately, behavior.

The new edition of Development of Auditory and Vestibular Systems will continue to be an indispensable resource for beginning scientists in this area and experienced researchers alike.



Can Hearing Be Restored by Making the Brain More Childlike?

You can't teach an old dog new tricks -- or can you? Textbooks tell us that early infancy offers a narrow window of opportunity during which sensory experience shapes the way neuronal circuits wire up to process sound and other inputs. A lack of proper stimulation during this "critical period" has a permanent and detrimental effect on brain development.

But new research shows the auditory system in the adult mouse brain can be induced to revert to an immature state similar to that in early infancy, improving the animals' ability to learn new sounds. The findings, published Thursday in Science, suggest potential new ways of restoring brain function in human patients with neurological diseases -- and of improving adults' ability to learn languages and musical instruments.

In mice, a critical period occurs during which neurons in a portion of the brain's wrinkled outer surface, the cortex, are highly sensitized to processing sound. This state of plasticity allows them to strengthen certain connections within brain circuits, fine-tuning their auditory responses and enhancing their ability to discriminate between different tones. In humans, a comparable critical period may mark the beginning of language acquisition. But this heightened plasticity declines rapidly and continues to decline throughout life, making it increasingly difficult to learn.

In 2011 Jay Blundon, a developmental neurobiologist at Saint Jude Children's Research Hospital, and his colleagues reported that the critical periods for circuits connecting the auditory cortex and the thalamus occur at about the same time. (The thalamus relays information from the sense organs to the appropriate cortical area). These developmental windows seem to be controlled by a molecule called adenosine, levels of which rise after the critical period closes. This inhibits communication between cells in the two regions.

In the latest study the researchers wanted to determine if halting adenosine signaling would reinstate plasticity in the auditory cortex. In one set of experiments they used microelectrodes to measure how neurons in the auditory cortex of healthy adult mice responded to pure tones. Responses were compared with those from animals genetically engineered to lack the cell surface receptor to which adenosine binds. The analysis revealed cells in the auditory cortex of mice lacking the adenosine receptor responded to a larger range of frequencies than those of the wild-type mice.

To investigate further, Blundon and his colleagues inhibited adenosine signaling in several other ways. They engineered their own strain of mice, whose adenosine receptors could be deleted from thalamus cells when the mice reached maturity. The thalamus sends fibers to the area of the cortex where sounds are processed. This genetic engineering expanded the range of sound frequencies to which cells in the auditory cortex were responsive, improving the mice's perception of the sounds and improving their ability to discriminate between similar tones. Blocking adenosine signaling with a drug had the same effect on healthy mice. "This [signaling] mechanism, if blocked, is sufficient to extend critical plasticity to late adulthood," says developmental neurobiologist Stanislav Zakharenko, senior author of the study, adding that the findings could help to make language learning in adults more efficient. "Learning a language is very easy for two- to three-year-olds, but language learning courses for adults aren't very effective, even though adults are still capable of learning other skills effectively," he says. "But if I take a language course while inhibiting adenosine production or signaling in the thalamus, I would acquire the information quicker, retain it for longer and maybe lose my accent."

The researchers also believe their findings could offer a new treatment for conditions such as stroke as well as tinnitus, or ringing in the ears. They think a drug that blocks adenosine signaling could circumvent stroke damage to the auditory cortex by restoring plasticity and by "retraining" healthy surrounding areas to respond to sounds normally processed by the damaged areas.

Not everyone is convinced by the results, however. "The range of techniques deployed in this work is very impressive," says neuroscientist Jennifer Linden of the Ear Institute at University College London, who was not part of the study. "[But] I am skeptical about the conclusion that disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception." She says interfering with adenosine signaling could alter the excitability of neurons in the thalamus. That would make it unclear whether the observed changes in plasticity and hearing arise specifically from inhibited adenosine signaling, rather than because of altered activity of another internal cell pathway in the thalamus.

Investigating this question is important because altering the excitability of thalamus neurons might impair auditory perception, Linden adds. "Decades of research have linked increased excitability in auditory brain structures with phantom sound perception in tinnitus," she says. "If disrupting adenosine signaling produces improvements in cortical plasticity but also hearing problems such as tinnitus, then it would not be an effective means of improving auditory perception in humans."

But Blundon and his colleagues stand by their assertions. Increased adenosine, he says, reduces the release of the signaling molecule glutamate in the thalamus, which in turn diminishes the activity of neurons in the auditory cortex. "That decrease in adenosine signaling is sufficient to restore cortical plasticity in adults, while activation of adenosine receptors in juveniles is sufficient to block it," Blundon explains. "We do not claim that other processes that could conceivably alter thalamic excitability have no role in the presence or absence of adult auditory cortex plasticity or changes in auditory perception -- though we know of no such processes that have been described, and we have ruled some out in our previous publications."


Discussion

Compared with hereditary deafness caused by genetic mutations, acquired hearing loss caused by noise, ototoxic drugs, infection, and aging is more common in the clinic. Previous studies demonstrated that aminoglycosides promote the formation of reactive oxygen species (ROS) that in turn induce apoptotic-like cell death via inhibition of the biosynthesis of mitochondrial proteins [5, 6, 16]. Numerous antioxidants and ROS scavengers have been used in clinical trials to attenuate ototoxicity [52] due to the key role of ROS in aminoglycoside-induced ototoxicity, and interfering with cell death signaling pathways promotes acute hair cell survival and attenuates drug-induced hearing loss following chronic aminoglycoside dosing [53]. However, the therapeutic effect of these strategies was not completely satisfactory. Omi/HtrA2 has been proposed to enhance caspase activation via multiple pathways other than cytosolic translocation [54], and Blink et al. [55] reported that Omi/HtrA2 can induce apoptosis without mitochondrial release. Therefore, we hypothesized that targeting the Htra2 gene might protect hair cells from the ototoxicity caused by aminoglycosides.

There have been some successful studies showing hearing improvement through the traditional strategy of gene overexpression in mouse models with gene defects of Ush1c, Otof, Tmc1, Vglut3, and Kcnq1 [56,57,58,59,60,61,62]. Transfer of the therapeutic wild-type genes offers a potential treatment strategy for recessive genetic diseases, but this approach may require repeated administration to obtain a lifetime therapeutic effect, which increases the risk of inducing side effects, including clinically relevant immunogenicity [63]. The CRISPR/Cas9 technology can specifically edit target genes and offers promising alternatives for diseases such as Duchenne muscular dystrophy [64], metabolopathies [65], and deafness [33], providing a potential one-time treatment strategy for genetic diseases. Theoretically, after editing the pathogenic genes, the prevention or treatment of disease can be achieved permanently compared to the strategies of overexpression or RNA interference. Using the CRISPR/Cas9 technology, Bence et al. [34] selectively and efficiently disrupted the mutant Tmc1 allele in Beethoven mice, and this prevented deafness in Beethoven mice up to 1 year post-injection.

In the current study, we used two CRISPR/Cas9 systems, SpCas9 and SaCas9, to knock out the Htra2 gene, and both systems achieved some degree of hearing protection. It is exceedingly difficult to package SpCas9 into a single AAV2/Anc80L65 vector due to its limited packaging capacity, but we successfully constructed an Anc80L65–SpCas9 system using a split-SpCas9 scheme for gene therapy of hearing loss. However, the delivery of the SpCas9 system using three AAV vectors might have affected the transduction efficiency of the whole therapeutic system in hair cells, which may be one of the factors leading to its lower editing efficiency compared with the SaCas9 system. Moreover, the different promoters and viral doses used for the SpCas9 and SaCas9 systems might be other key confounders leading to the different therapeutic effects of the two systems.

The CRISPR/Cas9 technology was used to knock out the Htra2 gene in the inner ear of mice both in vitro and in vivo, and we observed a protective effect of the CRISPR/Cas9–Htra2 system on cochlear hair cells against neomycin-induced ototoxicity. For the SaCas9 system, the injected ears of neomycin-treated mice showed a 10 to 30 dB improvement in ABR thresholds at 8 kHz compared with the non-injected ears. Aminoglycosides are one of the most common causes of acquired SNHL in the clinic and usually lead to profound deafness. A comparable improvement in ABR thresholds would help patients with extensive aminoglycoside-induced hearing loss detect high-decibel sounds in the environment and avoid cochlear implantation surgery; such patients could achieve a better auditory experience with hearing aids.

Although the auditory function was improved significantly, the ABR thresholds did not recover to normal, especially at the high frequencies. We conjecture that the protection of hair cells throughout the whole cochlea with the AAV–CRISPR/SpCas9 system in vitro was due to the high transduction efficiency of Anc80L65 in IHCs and OHCs. A satisfactory viral dose can be achieved in the in vitro culture system, whereas in vivo the viral dose decreases sharply on account of the limited volume of virus that can be injected into the inner ear. A previous study demonstrated that the volume of the endolymphatic space of adult mice is approximately 0.78 μl [66]. Our data demonstrate that Anc80L65-EGFP transduced 100% of the IHCs and about 90% of the OHCs with a viral dose of 5 × 10^9 VG after in vivo injection. However, for the therapeutic system, the actual viral dose per cochlea for Anc80L65–SpCas9, Anc80L65–Htra2 gRNA, and Anc80L65–SaCas9–Htra2 gRNA was 1.8 × 10^9 VG, 0.9 × 10^9 VG, and 2.7 × 10^9 VG, respectively. The lower viral dose will result in lower transduction efficiency in OHCs [67]. In the in vivo experiment, no obvious protective effect was observed in OHCs of the basal turn of the cochlea, which is consistent with the lower transduction efficiency of Anc80L65 in OHCs [68, 69] and the susceptibility of basal-turn OHCs to aminoglycoside-induced ototoxicity. These data suggest that there is an urgent need for improved transduction efficiency of viral vectors in cochlear OHCs and for advanced production techniques aimed at yielding high-titer virus suitable for clinical use. In addition, improving editing efficiency, which could generate a better protective effect, is another critical issue to be addressed. With the development of genome-editing strategies, it is hoped that enhanced editing efficiency will be achieved in mammalian non-proliferating cells in the future.

In this study, no mice developed any behavioral signs of vestibular damage after neomycin exposure in vivo. We assume that the neomycin damage pattern used in our study was insufficient to cause vestibular dysfunction in mice. At P11, mice received a daily subcutaneous injection of neomycin (200 mg/kg) for 7 consecutive days. Within the observation period, neomycin-exposed mice showed no obvious symptoms of postural asymmetries, including head deviation, trunk curvature, forelimb extension, compulsory circling movements, and head nystagmus. Moreover, Tsuji et al. [70] performed a quantitative assessment of vestibular hair cells and Scarpa's ganglion cells in temporal bones from patients who had aminoglycoside ototoxicity (streptomycin, kanamycin, and neomycin). For neomycin, the data in their study suggested that there was little hair cell ototoxic effect in the vestibular sense organs. However, the severity of vestibular damage in guinea pigs has been reported to follow the order streptomycin, gentamicin, amikacin, and netilmicin [71]. Therefore, it will be worthwhile to investigate in future work whether the AAV–CRISPR/Cas9 strategy protects the vestibular system from aminoglycoside-induced ototoxicity.

The protective effect of the AAV–CRISPR/Cas9 system in vivo was sustained up to 8 weeks after neomycin exposure, and the system was shown to be safe for the auditory function of wild-type mice within the observation period. However, to realize clinical translation, there is a need to evaluate the longer-term safety of this AAV–CRISPR/Cas9 strategy and to identify the therapeutic time window in adult animals.

Omi/HtrA2 is a proapoptotic mitochondrial serine protease that is released into the cytoplasm following apoptotic insult [72]. Wang et al. observed that increased expression of Omi/HtrA2 in aging rats augmented myocardial ischemia/reperfusion injury by stimulating myocardial apoptosis [73], which suggests that strategies to inhibit Omi/HtrA2 may protect against heart injury. However, a neurodegenerative phenotype with parkinsonian features has been described in Omi/HtrA2 knockout mice [74], and Strauss et al. performed a mutation screening of the Omi/HtrA2 gene in German Parkinson's disease (PD) patients and identified a heterozygous G399S mutation in four (4/518) patients [75], which indicates that loss of function of Omi/HtrA2 in the central nervous system (CNS) may be linked to PD. The pathological hallmark of PD is depigmentation of the substantia nigra and locus coeruleus with neuronal loss in the pars compacta of the substantia nigra [76]. Landegger et al. demonstrated that Anc80L65-eGFP injected via the round window membrane transduced Purkinje neurons in the cerebellum [38], which is not a brain area affected in PD. In our work, the AAV–CRISPR/Cas9 system was injected into the scala media of the cochlea. As safety assessment is a particularly important aspect of gene therapy, it will be necessary in future studies to examine whether the therapeutic system enters the CNS and, if so, which encephalic regions and types of neurons are transduced with this injection route. Taken together, a further step towards clinical translation of this CRISPR/Cas9-based strategy will require additional investigation of the potential influence of the treatment on the CNS.


Reception of Sound

In mammals, sound waves are collected by the external, cartilaginous part of the ear called the pinna, then travel through the auditory canal and cause vibration of the thin diaphragm called the tympanum or ear drum, the innermost part of the outer ear (illustrated in Figure 17.13). Interior to the tympanum is the middle ear. The middle ear holds three small bones called the ossicles, which transfer energy from the moving tympanum to the inner ear. The three ossicles are the malleus (also known as the hammer), the incus (the anvil), and the stapes (the stirrup). The aptly named stapes looks very much like a stirrup. The three ossicles are unique to mammals, and each plays a role in hearing. The malleus attaches at three points to the interior surface of the tympanic membrane. The incus attaches the malleus to the stapes. In humans, the stapes is not long enough to reach the tympanum. If we did not have the malleus and the incus, then the vibrations of the tympanum would never reach the inner ear. These bones also function to collect force and amplify sounds. The ear ossicles are homologous to bones in a fish mouth: the bones that support gills in fish are thought to have been adapted for use in the vertebrate ear over evolutionary time. Many animals (frogs, reptiles, and birds, for example) use the stapes of the middle ear to transmit vibrations to the inner ear.

Figure 17.13. Sound travels through the outer ear to the middle ear, which is bounded on its exterior by the tympanic membrane. The middle ear contains three bones called ossicles that transfer the sound wave to the oval window, the exterior boundary of the inner ear. The organ of Corti, which is the organ of sound transduction, lies inside the cochlea. (credit: modification of work by Lars Chittka, Axel Brockmann)


INTRODUCTION

Crocodilians have sensitive tympanic ears (Higgs et al., 2002; Wever, 1971) and behavioral observations support their ability to localize sound (Garrick and Lang, 1977). They are the most vocal of the non-avian reptiles and have a sophisticated repertoire of auditory signals (Burghardt, 1977). Juveniles of several species [including Alligator mississippiensis (Daudin 1802)] perform both low frequency grunts and higher frequency distress calls (Burghardt, 1977), which reliably attract adults (reviewed by Garrick and Lang, 1977). Vocal communication is thought to be important for maternal care and promoting group cohesiveness among the young (Pooley, 1969; Pooley, 1977; Campbell, 1973; Garrick and Lang, 1977; Hunt and Watanabe, 1982; Passek and Gillingham, 1999) and part of courtship in adults (Garrick and Lang, 1977).

Like their sister group, birds, crocodilians exhibit a brainstem circuit based on delay lines and coincidence detection for sound localization in the nucleus laminaris (Carr et al., 2009). Further, electrophysiological recordings from the brainstem nucleus laminaris (Carr et al., 2009) reveal a greater range of sensitivity to the interaural time difference (ITD) cue to sound source location than would be expected based on head size, suggesting that these animals may have a unique adaptation for spatial hearing. The experiments in this study examine whether the physiologically recorded range of ITD sensitivity could be the result of the interaural connections of the crocodilian ear.

Internal structures, specifically the acoustic coupling of eardrums (i.e. the transmission of sound from one eardrum to another through internal pathways), can enhance directionality in a frequency-dependent manner (for review, see Christensen-Dalsgaard, 2005). In crocodilians, a pathway connecting the middle ear cavities has been discussed (Colbert, 1946; Owen, 1850; Wever and Vernon, 1957). Results from CT imaging (Witmer and Ridgely, 2008; Witmer et al., 2008) suggested a direct pathway, but the imaging was not carried out at sufficiently high resolution to exclude the presence of membranous barriers. We present anatomical data to confirm the presence of pathways connecting the middle ear cavities, further supporting the hypothesis that acoustic coupling of eardrums may be important for crocodilian sound localization.

Acoustic coupling can (depending on pathway properties and frequency) allow the ear to act as a pressure difference receiver (PDR). First described in insects (Autrum, 1940), PDRs, or internally coupled ears, have been observed in lizards (e.g. Christensen-Dalsgaard and Manley, 2005), frogs (e.g. Feng, 1980; Feng and Shofner, 1981; Jørgensen et al., 1991; Pinder and Palmer, 1983) and birds (Calford and Piddington, 1988; Hill et al., 1980; Hyson et al., 1994; Larsen et al., 2006; Pettigrew and Larsen, 1990; Rosowski and Saunders, 1980) (for reviews, see Christensen-Dalsgaard, 2005; Christensen-Dalsgaard, 2011; Grothe et al., 2010; Klump, 2000). In most cases, acoustic coupling is achieved through either an interaural canal or large permanently open Eustachian tubes. Acoustical coupling produces directional responses at the tympanum as sound reaches both the external side of the tympanic membrane and, once filtered by the head and internal structures, the internal side of the tympanic membrane. Eardrum motion is driven by the instantaneous difference in pressure between the sound component on the external and internal side of the membrane (Feng and Christensen-Dalsgaard, 2008) (for models, see Fletcher and Thwaites, 1979; Pinder and Palmer, 1983; Vossen et al., 2010), and the greatest directional effect is thus produced at frequencies where the amplitudes of the internal and external sound components are equal, so their direction-dependent phase changes can produce large differences in eardrum motion.
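As a rough numerical illustration of that pressure-difference drive, the sketch below (Python; the ear spacing, internal-pathway gain, and internal delay are invented illustrative values, not measurements from any animal) models each eardrum's drive as the external sound minus an attenuated, delayed copy of the sound reaching the opposite eardrum through an internal pathway.

```python
import numpy as np

c = 343.0            # speed of sound in air (m/s)
half_spacing = 0.02  # assumed half-distance between the eardrums (m), illustrative
f = 1_000.0          # test frequency (Hz)
omega = 2 * np.pi * f

def eardrum_drive(azimuth_deg, internal_gain=0.8, internal_delay=120e-6):
    """Toy pressure-difference receiver: drive = external sound minus an
    attenuated, delayed copy of the contralateral sound (all values illustrative)."""
    theta = np.radians(azimuth_deg)
    dt = (2 * half_spacing / c) * np.sin(theta)          # external arrival-time difference
    p_right = np.exp(1j * omega * (+dt / 2))
    p_left = np.exp(1j * omega * (-dt / 2))
    internal = internal_gain * np.exp(-1j * omega * internal_delay)
    drive_right = p_right - internal * p_left
    drive_left = p_left - internal * p_right
    return abs(drive_right), abs(drive_left)

for az in (0, 30, 60, 90):
    r, l = eardrum_drive(az)
    print(f"{az:3d} deg: right {r:.2f}, left {l:.2f}, difference {20*np.log10(r/l):+.1f} dB")
```

With the internal gain near 1, the internal and external components are close in amplitude, and their direction-dependent phase differences then produce the largest left-right asymmetry in eardrum drive, which is the condition described above.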

Even with strong coupling of the eardrums, this condition is only met in a certain frequency range, depending on the acoustics of the ear. At frequencies below this range the phase differences between internal and external components cancel eardrum motion (Pinder and Palmer, 1983), while at high frequencies acoustical coupling is reduced. Thus, when frequency increases, the ear becomes more of a pressure receiver, affected only by the pressure wave hitting the external tympanic surface (Moiseff and Konishi, 1981; Pinder and Palmer, 1983). More precisely, the motion of the tympanic membrane is due to a combination of the mechanical resonator properties of the eardrum and the acoustic resonator properties of the internal skull pathways (Pinder and Palmer, 1983).

List of abbreviations

ABR: auditory brainstem response
CAP: compound action potential
DTF: directional transfer function
HRTF: head-related transfer function
ILD: interaural level difference
ITD: interaural timing difference
PDR: pressure difference receiver

In pressure receiver ears, such as the typical mammalian ear, the three primary cues for sound localization are ITDs [the difference in the timing of sound between the ears], interaural level differences [ILDs; the difference in the sound pressure level (SPL) between the ears] and monaural spectral shape cues [frequency-specific changes in SPL gain generated by differential refraction and reflection patterns off the head and pinnae]. ILDs are most relevant for higher frequency sound and ITDs for lower frequency sound (Hafter, 1984; Macpherson and Middlebrooks, 2002). Larger head size or lower frequencies (up to a limit) increase the available range of ITDs (Kuhn, 1977; Tollin and Koka, 2009b). The acoustic shadow of the head, as well as external structures, such as pinnae (Carlile and King, 1994; Carlile and Pettigrew, 1987; Guppy and Coles, 1988; Koka et al., 2011; Koka et al., 2008; Tollin and Koka, 2009a) or a facial ruff (von Campenhausen and Wagner, 2006; Hausmann et al., 2009), can generate ILD cues, and may also affect ITD and monaural spectral shape cues (Koka et al., 2011). However, acoustically coupled ears as described above can lead to ILDs and ITDs that are larger than expected from head size (Christensen-Dalsgaard and Manley, 2008; Christensen-Dalsgaard et al., 2011b). Cochlear microphonic recordings from a number of birds reveal interaural delays at high frequencies close to those expected from the path length around the head, while delays measured at low frequencies can approach more than three times this expectation (Calford and Piddington, 1988; Hyson et al., 1994; Köppl, 2009; Rosowski and Saunders, 1980; Wagner et al., 2009).

As part of this study, we measured the transformation of sound waves around the head of juvenile alligators to determine what acoustic cues are available within the animal's hearing range. As the alligators lack a specialized external structure for generating cues, the results should be similar to spherical head model predictions (Duda and Martens, 1998). Because of its ecological relevance, measurements were also made at the water surface to determine whether localization cues were enhanced or degraded.
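For reference, a hedged sketch of the spherical-head (Woodworth-style) ITD approximation is below; the head radius is an assumed, roughly juvenile-alligator-scale value, and the formula is the standard rigid-sphere approximation rather than anything fitted to the present data.

```python
import numpy as np

def spherical_head_itd(azimuth_deg, head_radius_m=0.02, c=343.0):
    """Woodworth-style rigid-sphere approximation: ITD ~ (a / c) * (theta + sin(theta))."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

# With an assumed ~2 cm effective radius, the maximum ITD is on the order of 150 microseconds.
for az in (15, 45, 90):
    print(f"{az:3d} deg -> {spherical_head_itd(az) * 1e6:6.0f} us")
```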

In addition to the passive acoustical and anatomical experiments, two additional sets of experiments, auditory brainstem response (ABR) recording and laser vibrometry, were carried out to test for directionality in the periphery. Peripheral encoding of directionality, expected from a PDR mechanism, was examined at the level of both eardrum movement and the auditory nerve. Laser vibrometry was used to directly measure the directionality encoded in eardrum movement and the transmission and phase gain afforded by interaural coupling, while ABR measurements were used to examine directional sensitivity in auditory nerve activity. As peak 1 of the ABR is the far-field representation of the negatively oriented peak of the compound action potential (CAP N1) of the auditory nerve (Jewett et al., 1970; Köppl and Gleich, 2007), threshold changes with speaker position test the directional sensitivity of auditory nerve activity.

Together, these experiments provide data to support the role of a PDR mechanism in alligator sound localization. They further allow comparisons with data from birds and fossil dinosaurs and suggest the PDR mechanism is an archosaur synapomorphy.




Aging changes in the senses

As you age, the way your senses (hearing, vision, taste, smell, touch) give you information about the world changes. Your senses become less sharp, and this can make it harder for you to notice details.

Sensory changes can affect your lifestyle. You may have problems communicating, enjoying activities, and staying involved with people. Sensory changes can lead to isolation.

Your senses receive information from your environment. This information can be in the form of sound, light, smells, tastes, and touch. Sensory information is converted into nerve signals that are carried to the brain. There, the signals are turned into meaningful sensations.

A certain amount of stimulation is required before you become aware of a sensation. This minimum level of sensation is called the threshold. Aging raises this threshold. You need more stimulation to be aware of the sensation.

Aging can affect all of the senses, but usually hearing and vision are most affected. Devices such as glasses and hearing aids, or lifestyle changes can improve your ability to hear and see.

Your ears have two jobs. One is hearing and the other is maintaining balance. Hearing occurs after sound vibrations cross the eardrum to the inner ear. The vibrations are changed into nerve signals in the inner ear and are carried to the brain by the auditory nerve.

Balance (equilibrium) is controlled in the inner ear. Fluid and small hairs in the inner ear stimulate the vestibular nerve. This helps the brain maintain balance.

As you age, structures inside the ear start to change and their functions decline. Your ability to pick up sounds decreases. You may also have problems maintaining your balance as you sit, stand, and walk.

Age-related hearing loss is called presbycusis. It affects both ears. Hearing, usually the ability to hear high-frequency sounds, may decline. You may also have trouble telling the difference between certain sounds. Or, you may have problems hearing a conversation when there is background noise. If you are having trouble hearing, discuss your symptoms with your health care provider. One way to manage hearing loss is by getting fitted with hearing aids.

Persistent, abnormal ear noise (tinnitus) is another common problem in older adults. Causes of tinnitus may include wax buildup, medicines that damage structures inside the ear, or mild hearing loss. If you have tinnitus, ask your provider how to manage the condition.

Impacted ear wax can also cause trouble hearing and is common with age. Your provider can remove impacted ear wax.

Vision occurs when light is processed by your eye and interpreted by your brain. Light passes through the transparent eye surface (cornea). It continues through the pupil, the opening to the inside of the eye. The pupil becomes larger or smaller to control the amount of light that enters the eye. The colored part of the eye is called the iris. It is a muscle that controls pupil size. After light passes through your pupil, it reaches the lens. The lens focuses light on your retina (the back of the eye). The retina converts light energy into a nerve signal that the optic nerve carries to the brain, where it is interpreted.

All of the eye structures change with aging. The cornea becomes less sensitive, so you might not notice eye injuries. By the time you turn 60, your pupils may decrease to about one third of the size they were when you were 20. The pupils may react more slowly in response to darkness or bright light. The lens becomes yellowed, less flexible, and slightly cloudy. The fat pads supporting the eyes decrease and the eyes sink into their sockets. The eye muscles become less able to fully rotate the eye.

As you age, the sharpness of your vision (visual acuity) gradually declines. The most common problem is difficulty focusing the eyes on close-up objects. This condition is called presbyopia. Reading glasses, bifocal glasses, or contact lenses can help correct presbyopia.

You may be less able to tolerate glare. For example, glare from a shiny floor in a sunlit room can make it difficult to get around indoors. You may have trouble adapting to darkness or bright light. Problems with glare, brightness, and darkness may make you give up driving at night.

As you age, it becomes harder to tell blues from greens than it is to tell reds from yellows. Using warm contrasting colors (yellow, orange, and red) in your home can improve your ability to see. Keeping a red light on in darkened rooms, such as the hallway or bathroom, makes it easier to see than using a regular night light.

With aging, the gel-like substance (vitreous) inside your eye starts to shrink. This can create small particles called floaters in your field of vision. In most cases, floaters do not reduce your vision. But if you develop floaters suddenly or have a rapid increase in the number of floaters, you should have your eyes checked by a professional.

Reduced peripheral vision (side vision) is common in older people. This can limit your activity and ability to interact with others. It may be hard to communicate with people sitting next to you because you cannot see them well. Driving can become dangerous.

Weakened eye muscles may prevent you from moving your eyes in all directions. It may be hard to look upward. The area in which objects can be seen (visual field) gets smaller.

Aging eyes also may not produce enough tears. This leads to dry eyes which may be uncomfortable. When dry eyes are not treated, infection, inflammation, and scarring of the cornea can occur. You can relieve dry eyes by using eye drops or artificial tears.

Common eye disorders that cause vision changes that are NOT normal include:

  • Clouding of the lens of the eye
  • Rise in fluid pressure in the eye
  • Disease in the macula (responsible for central vision) that causes vision loss
  • Disease in the retina, often caused by diabetes or high blood pressure

If you are having vision problems, discuss your symptoms with your provider.

The senses of taste and smell work together. Most tastes are linked with odors. The sense of smell begins at the nerve endings high in the lining of the nose.

You have about 10,000 taste buds. Your taste buds sense sweet, salty, sour, bitter, and umami flavors. Umami is a taste linked with foods that contain glutamate, such as the seasoning monosodium glutamate (MSG).

Smell and taste play a role in food enjoyment and safety. A delicious meal or pleasant aroma can improve social interaction and enjoyment of life. Smell and taste also allow you to detect danger, such as spoiled food, gases, and smoke.

The number of taste buds decreases as you age. Each remaining taste bud also begins to shrink. Sensitivity to the five tastes often declines after age 60. In addition, your mouth produces less saliva as you age. This can cause dry mouth, which can affect your sense of taste.

Your sense of smell can also diminish, especially after age 70. This may be related to a loss of nerve endings and less mucus production in the nose. Mucus helps odors stay in the nose long enough to be detected by the nerve endings. It also helps clear odors from the nerve endings.

Certain things can speed up the loss of taste and smell. These include diseases, smoking, and exposure to harmful particles in the air.

Decreased taste and smell can lessen your interest and enjoyment in eating. You may not be able to sense certain dangers if you cannot smell odors such as natural gas or smoke from a fire.

If your senses of taste and smell have diminished, talk to your provider. The following may help:

  • Switch to a different medicine, if the medicine you take is affecting your ability to smell and taste.
  • Use different spices or change the way you prepare food.
  • Buy safety products, such as a gas detector that sounds an alarm you can hear.

The sense of touch makes you aware of pain, temperature, pressure, vibration, and body position. Skin, muscles, tendons, joints, and internal organs have nerve endings (receptors) that detect these sensations. Some receptors give the brain information about the position and condition of internal organs. Though you may not be aware of this information, it helps to identify changes (for example, the pain of appendicitis).

Your brain interprets the type and amount of touch sensation. It also interprets the sensation as pleasant (such as being comfortably warm), unpleasant (such as being very hot), or neutral (such as being aware that you are touching something).

With aging, sensations may be reduced or changed. These changes can occur because of decreased blood flow to the nerve endings or to the spinal cord or brain. The spinal cord transmits nerve signals and the brain interprets these signals.

Health problems, such as a lack of certain nutrients, can also cause sensation changes. Brain surgery, problems in the brain, confusion, and nerve damage from injury or long-term (chronic) diseases such as diabetes can also result in sensation changes.

Symptoms of changed sensation vary based on the cause. With decreased temperature sensitivity, it can be hard to tell the difference between cool and cold, or between warm and hot. This can increase the risk of injury from frostbite, hypothermia (dangerously low body temperature), and burns.

Reduced ability to detect vibration, touch, and pressure increases the risk of injuries, including pressure ulcers (skin sores that develop when pressure cuts off blood supply to the area). After age 50, many people have reduced sensitivity to pain. Or you may feel and recognize pain, but it does not bother you. For example, when you are injured, you may not know how severe the injury is because the pain does not trouble you.

You may develop problems walking because of reduced ability to perceive where your body is in relation to the floor. This increases your risk of falling, a common problem for older people.

Older people can become more sensitive to light touches because their skin is thinner.

If you have noticed changes in touch, pain, or problems standing or walking, talk with your provider. There may be ways to manage the symptoms.


Our perception of sound depends on the biological equipment we are born with—our ears and our brains. How does the ear decode the acoustic information that we receive, and what can we learn about music from an understanding of how the ear and the brain respond to sound?

Sound is composed of pressure fluctuations in a medium (for example, the air). The pressure fluctuations enter the ear through the ear canal that ends with the eardrum (see Figure 1). Vibrations at the eardrum are carried to the cochlea by three tiny bones—the malleus, incus, and stapes (collectively called the “ossicles”). The cochlea is a narrow fluid-filled tube curled up into a spiral. Running the length of the tube is a thin sheet of tissue called the “basilar membrane.” Vibrations of the ossicles produce sound waves in the cochlear fluid, which cause the basilar membrane to vibrate. These vibrations are converted into electrical impulses in the auditory nerve, which carries information about the sound to the brain.

The ear is exquisitely sensitive to sound. We can hear vibrations of the eardrum of less than a tenth the width of a hydrogen atom! The ear is also very good at separating out the different frequency components of a sound (e.g., the different harmonics that make up a complex tone). Each place on the basilar membrane is tuned to a different frequency (Figure 2), so that low-frequency sounds cause the membrane to vibrate near the top (apex) of the spiral, and high-frequency sounds cause the membrane to vibrate near the bottom (base) of the spiral. Each nerve cell or neuron in the auditory nerve is connected to a single place on the basilar membrane, so that information about different frequencies travels to the brain along different neurons.
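A common quantitative summary of this place-frequency map is Greenwood's function; the short sketch below uses the frequently quoted human constants (A = 165.4, a = 2.1, k = 0.88), which are assumptions added here for illustration rather than values given in the text.

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Greenwood place-frequency map for the human cochlea.
    x is the fractional distance along the basilar membrane from apex (0.0) to base (1.0);
    the constants are the commonly quoted human values, used here for illustration."""
    return A * (10 ** (a * x) - k)

# Sweeping from apex to base spans roughly 20 Hz to 20 kHz, matching the tonotopy described above.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> about {greenwood_frequency(x):7.0f} Hz")
```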

The ear acts a bit like a prism for sound. A prism separates out the different frequencies of light (red, yellow, green, blue etc.) to produce a spectrum (also seen in a rainbow, of course). Similarly, the ear separates out the different frequencies of sound to produce an acoustic spectrum. Actually, the human eye can distinguish just three basic colors: The vivid sensation of color we experience is made up of combinations of these three sensations. The ear, on the other hand, can separate up to a hundred different sound frequencies, corresponding to the number of frequencies that can be separated by the basilar membrane. We get a much more detailed experience of the “color” of sounds (timbre) than we do of the color of light. This is how we can tell the difference between two different instruments playing the same note, for example, a French horn and a cello both playing C3. Although the pitch of the two instruments is the same, the timbre—which is determined by the relative levels of the harmonics—is different (Figure 3). By separating out the different harmonics on the basilar membrane, the ear can distinguish between the two sounds.

The sensation of dissonance is determined in part by the response of the basilar membrane. When two notes are played together, dissonance is related to the production of “beats,” which are heard as a regular flutter. We hear beats when two harmonics are too close together in frequency to be separated by the basilar membrane. For simple frequency ratios, many of the harmonics of the two tones coincide (e.g., the third harmonic of a 440–Hz fundamental has the same frequency—1320 Hz—as the second harmonic of a 660-Hz fundamental). These simple ratios are heard as consonant. For complex ratios, many of the harmonics from the two tones do not coincide exactly, and those harmonics that are close together in frequency interact on the basilar membrane to produce beating sensations that lead to a sensation of dissonance.
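To illustrate the beating mechanism numerically, the sketch below adds a harmonic of a 440 Hz tone to a near-coincident harmonic of a slightly mistuned second tone; the 665 Hz mistuning is an invented example, chosen only to give an audible 10 Hz beat.

```python
import numpy as np
from scipy.signal import hilbert

fs = 44_100
t = np.arange(0, 1.0, 1 / fs)

h1 = np.sin(2 * np.pi * 3 * 440 * t)   # third harmonic of 440 Hz -> 1320 Hz
h2 = np.sin(2 * np.pi * 2 * 665 * t)   # second harmonic of a mistuned 665 Hz -> 1330 Hz
mixture = h1 + h2

# The envelope of the sum fluctuates at the 10 Hz difference frequency,
# which is heard as beating, one ingredient of perceived roughness and dissonance.
beat_envelope = np.abs(hilbert(mixture))
print("difference frequency:", 2 * 665 - 3 * 440, "Hz")
```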

The brain processes the electrical signals from the cochlea using vastly complicated networks of specialized neurons in the brain. The way the sound is analyzed depends on our own personal experience to a certain extent. The strengths of the connections between neurons change as we experience sounds, particularly during early infancy when the brain is growing rapidly.

Tonal musical instruments vibrate to produce regular, repetitive patterns of pressure fluctuations (Figure 4) (as opposed to some percussion instruments, such as a cymbal, that produce irregular or impulsive sound waveforms). The frequency at which the instrument vibrates determines the frequency of the pressure fluctuations in the air, which in turn determines the pitch that we hear.

Pitch is the sensation corresponding to the repetition rate of a sound wave. Pitch is represented in the brain in terms of the pattern of neural impulses (Figure 5). When a tone is played to the ear, neurons will tend to produce electrical impulses synchronized to the frequency of the tone, or to the frequencies of the lower harmonics. An individual neuron may not fire on every cycle, but across an array of neurons the periodicity of the waveform is well represented. Indeed, if you record the electrical activity of the auditory nerve when a melody is played, you can hear the melody in the electrical impulses!

The highest frequency that can be represented in this way is about 5000 Hz. Above this frequency, neurons cannot synchronize their impulses to the peaks in the sound waveform. This limit is reflected in the frequency range of musical instruments: The highest note on an orchestral instrument (the piccolo) is about 4500 Hz. Melodies played using frequencies above 5000 Hz sound rather peculiar. You can tell that something is changing but it doesn't sound “melodic” in any way.

Pitch may be decoded by specialized neurons in the brain that are sensitive to different rates of neural impulses. It seems that the information from the first eight harmonics is the most important in determining pitch. The basilar membrane can separate out these first few harmonics, and a trained listener can “hear out” each harmonic in turn, by carefully attending to the individual harmonic frequencies. The frequencies of the low harmonics are coded individually by regular patterns of activity in separate neurons, and the brain combines the information to derive pitch. For example, if harmonics of 880 Hz, 1320 Hz, and 1760 Hz are identified, then the brain can work out that the fundamental frequency of the waveform is 440 Hz (the highest common factor of these three).
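The "highest common factor" arithmetic in that example is easy to spell out; the toy function below assumes the harmonics have been identified as exact integer frequencies in Hz, which real pitch mechanisms do not require (they tolerate mistuning), so this is only an illustration of the principle.

```python
from functools import reduce
from math import gcd

def fundamental_from_harmonics(harmonics_hz):
    """Toy estimate of the (possibly missing) fundamental:
    the highest common factor of exact integer harmonic frequencies."""
    return reduce(gcd, harmonics_hz)

print(fundamental_from_harmonics([880, 1320, 1760]))  # -> 440
```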

A melody is composed of a sequence of tones with different frequencies. A melody is characterized by the intervals between the individual frequencies (i.e., the frequency ratios between the notes), rather than by the absolute frequencies of the notes. I can play “Twinkle, Twinkle Little Star” in any key I like, and the melody will still be instantly recognizable. We can easily form memories for a sequence of musical intervals, but most of us do not have an internal reference or memory for absolute frequency. There are individuals (perhaps 0.1 percent of the population) who can instantly identify the note being played, in the absence of any external cues. These individuals are said to have perfect or absolute pitch, and may have acquired their skill by exposure to standard frequencies during a critical learning period in childhood.

Individuals with absolute pitch have the ability to form a stable representation of pitch in their memories, which they can use as a standard reference to compare with the pitch of any sound in the environment. These individuals also have a way of labeling the pitch they experience in terms of the language of music. This latter ability is sometimes ignored. It has been argued that there are many people with a stable memory for pitch who cannot provide a musical label in the way associated with absolute pitch, but can, for example, hum or sing a tune from a recording they know with a good frequency match to the original.

Following Musical Sequences

In many situations, we experience a number of different sounds at the same time. This is particularly true if we are listening to an ensemble of musicians, when we may be receiving several different melodies at once. All the sound waves from the different instruments add together in the air, so that our ears receive a sum of the sound waves. It is like trying to work out what swimming strokes several different swimmers on a lake are using, just by looking at the complex patterns of ripples that arrive at the shore. How do our ears make sense of all this?

One of the ways the ear can separate out sounds that occur close together is in terms of their pitches. If the notes from two different sequences cover the same range of frequencies, the melodies are not heard separately, and a combined tune is heard. If the frequency ranges are separated (for example, if one melody is in a different octave) then two distinct melodies are heard (Figure 6). Some composers (e.g., Bach, Telemann, Vivaldi) have used this property of hearing to enable a single instrument (such as a flute) to play two tunes at a time, by rapidly alternating the notes between a low-frequency melody and a high-frequency melody. Looking at this in another way, the ear's tendency to separate sequences of notes by pitch constrains (to a certain extent) the melodies that can be used in music. If the frequency jump in a musical line is too great, then the ear may not be able to fuse the notes into a single sequence. The effect is also dependent on the rate at which the notes are played. Melodies with rates slower than about two notes a second can be fused even if the frequency jump between notes is quite large.
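A simple way to hear this effect is to synthesize a single, rapidly alternating line whose notes come from two registers; the note choices, 10-notes-per-second rate, and roughly two-octave separation below are all invented for illustration. At this rate most listeners report two concurrent melodies rather than one jumping line, whereas slowing the rate down lets the notes fuse again.

```python
import numpy as np

fs = 44_100
note_dur = 0.1        # 10 notes per second: fast enough that the sequence usually splits

low_line = [262, 294, 330, 294]        # a low-register melody (roughly C4 D4 E4 D4)
high_line = [1047, 988, 880, 988]      # a different melody about two octaves higher

def tone(freq, dur=note_dur):
    t = np.arange(0, dur, 1 / fs)
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)   # taper to avoid clicks

# Interleave the two lines note by note, as a single instrument would play them.
sequence = np.concatenate([tone(f) for pair in zip(low_line, high_line) for f in pair])
# Writing `sequence` to a WAV file and listening, the low and high notes tend to
# segregate into two perceived melodies rather than one alternating line.
```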

We can also use the timbres of different instruments to separate melodies and rhythms, even if the notes cover the same frequency range. For example, a melody played on a French horn can be separated from a melody played on a cello, even if the notes used are similar in frequency. Again, the separation is stronger for rapid sequences of notes. As we learned earlier, instruments with different timbres produce different patterns of excitation on the basilar membrane. The ear is very good at distinguishing different patterns of harmonics.

Finally, we can use our two ears to separate sounds coming from different directions. A sound from the right arrives at the right ear before the left and is more intense in the right ear than the left. The brain uses these differences to localize sound sources, and we can easily attend to the sequence of sounds that come from a specific point in space. Each instrument in an ensemble occupies a single location, and this helps us to separate out the different melodies. For this same reason, stereo musical recordings (which contain cues to location) sound much clearer than mono recordings.

Why does music have such a strong psychological effect on us?

The brain is very good at learning associations between events. A piece of music may be associated strongly with a particular place or time. If we hear a piece of music during an emotional experience (falling in love, the death of a relative), the piece of music may gain the power to conjure up that emotion. A primitive region of the brain called the amygdala seems to be important in making emotional connections such as this. The amygdala controls another region of the brain called the hypothalamus, which in turn controls the release of hormones such as adrenalin, and basic bodily functions such as the beating of the heart and respiration. In this way, emotional stimuli can produce physiological changes in our bodies. Music can cause stress and fear reactions similar to those produced by events that are truly dangerous.

Some chords or sequences of notes seem broadly connected with sad feelings (e.g., minor modes) and others with happy feelings (e.g. major modes). Part of this might be due to learned associations, although even three-year-old children associate minor and major modes in this way. It is possible that consonant musical intervals, such as those involved in major triads, may lead naturally to a positive and upbeat feeling.

Another component of music that can be used for emotional effect is rhythm. I am reminded of the menacing increase in tempo during a shark attack in Jaws. Again, we may learn to associate certain rhythms with particular feelings, although it is clear that, physiologically, a slow tempo reflects withdrawal and depression (slowing down of natural rhythms) and a fast tempo reflects excitement (increase in breathing, heart rate etc.). It seems likely, therefore, that part of the emotional response to rhythm is innate. Indeed, it has been suggested that one of the reasons minor modes sound sad is that they are often played with slow tempi, and children form the association at an early age.

In many ways, music is like a spoken language, and like a spoken language we need to learn the language before we can appreciate the meaning that is being expressed. An American needs to learn to understand Chinese music, just as he must learn to understand Mandarin or Cantonese. Similarly, most children in the West receive intense exposure to harmonic, consonant, major-mode music. To break away from this brainwashing requires a degree of commitment on the part of the listener. It might also help to have the right genes. The evolutionary psychologist Geoffrey Miller has suggested that music (like other art forms) is a “fitness indicator.” According to Miller, musical ability indicates to potential mates that we have good genes that will benefit our progeny. If this hypothesis is correct, then we would expect musical ability to be inherited, and there is some evidence for this. Genetically identical twins are more alike in their musical talents than non-identical twins (although childhood environment plays a greater role). So while we may never discover a “gene for appreciation of avant-garde music” (genetics is rarely this simple), given that many other aspects of our personalities have been shown to be inherited to some extent, it is at least plausible that some individuals are naturally more receptive to new musical ideas.

  • Bregman, A.S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, USA: MIT Press. The definitive work on how sounds are organized by the ear, including a chapter on music.
  • Deutsch, D. (Ed.) (1999). The Psychology of Music (2nd ed.). London: Academic Press. Covers everything from acoustics, to music perception, to music performance.
  • Moore, B.C.J. (2003). An Introduction to the Psychology of Hearing (5th ed.). London: Academic Press. A comprehensive yet readable account of hearing, including the basic physiology of the ear.
  • Plack, C.J. (2005). The Sense of Hearing. Mahwah, New Jersey: Lawrence Erlbaum Associates. My new book—out soon!

Professor Chris Plack was born in Exeter, England, in 1966. He studied Natural Sciences at the University of Cambridge as an undergraduate, and gained a PhD in psychoacoustics at the same institution. Since then he has worked as a research scientist at the University of Minnesota and at the University of Sussex, England, and now teaches in the Department of Psychology at the University of Essex, England. Professor Plack is a Fellow of the Acoustical Society of America, and a member of the Association for Research in Otolaryngology.


Ear-pinging, tongue-buzzing tech used to treat tinnitus

Tinnitus is an aggravating disorder, causing sufferers to constantly hear a ringing in their ears. A new system could help, though, by simultaneously zapping their tongue and delivering sounds to their ears.

Known as Lenire, the setup is made by Dublin, Ireland-based Neuromod Devices. It consists of a handheld control unit, a set of Bluetooth headphones, and a "Tonguetip" device that is placed in the mouth. While sounds emitted by the headphones stimulate the wearer's auditory nerve, electrodes on the Tonguetip stimulate the trigeminal nerve in the tip of their tongue.

Via a process called bimodal neuromodulation, in which two types of sensory input are stimulated at once, this procedure is claimed to retrain the misfiring neurons in the patient's auditory system. As a result, their tinnitus is reportedly diminished.

A diagram of the Lenire system

The system was recently the subject of a large clinical trial, conducted by Neuromod Devices staff working with colleagues from Germany's University of Regensburg, Britain's University of Nottingham, the University of Texas at Dallas, and Trinity College Dublin. In that trial, 326 patients with different types of tinnitus were instructed to use the Lenire system for 60 minutes a day over the course of 12 weeks.

After the treatment period was over, 86.2 percent of the test subjects (who successfully followed the routine) were found to have achieved "a statistically significant reduction in tinnitus symptom severity" – this assessment was based on the commonly used Tinnitus Handicap Inventory and Tinnitus Functional Index. The reduction persisted even 12 months later, and no unwanted side effects were reported.

Another clinical trial is now underway, to gauge the effects of altering the stimulation pattern over time.

The research is described in a paper that was recently published in the journal Science Translational Medicine.

