hearing
The general perceptual behavior and the specific responses that are made in relation to sound stimuli. The auditory system consists of the ear and the auditory nervous system. The ear comprises outer, middle, and inner ear. The outer ear, visible on the surface of the body, directs sounds to the middle ear, which converts sounds into vibrations of the fluid that fills the inner ear. The inner ear contains the vestibular and the auditory sensory organs. See Ear (vertebrate)
The auditory part of the inner ear, known as the cochlea because of its snaillike shape, analyzes sound in a way that resembles spectral analysis. It contains the sensory cells that convert sounds into nerve signals to be conducted through the auditory portion of the eighth cranial nerve to higher brain centers. The neural code in the auditory nerve is transformed as the information travels through a complex system of nuclei connected by fiber tracts, known as the ascending auditory pathways. They carry auditory information to the auditory cortex, which is the part of the sensory cortex where perception and interpretation of sounds are believed to take place. Interaction between the neural pathways of the two ears makes it possible for a person to determine the direction of a sound's source. See Brain
Role of the ear
The pinna, the projecting part of the outer ear, collects sound, but because it is small in relation to the wavelengths of sound that are important for human hearing, the pinna plays only a minor role in hearing. The ear canal acts as a resonator: it increases the sound pressure at the tympanic membrane in the frequency range between 1500 and 5000 Hz. The difference between the arrival time of a sound at each of the two ears and the difference in the intensity of the sound that reaches each ear are used by the auditory nervous system to determine the location of the sound source.
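The resonance of the ear canal can be approximated with a simple closed-tube model, which shows why the pressure boost falls in the cited frequency band. The sketch below is only illustrative: it assumes a canal length of about 2.5 cm and a speed of sound of 343 m/s, and treats the canal as a uniform tube closed at the tympanic membrane.

```python
# Quarter-wave resonance of the ear canal, modeled as a tube open at
# the concha and closed at the tympanic membrane (a rough assumption).
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C
CANAL_LENGTH = 0.025    # m; an assumed typical adult canal length (~2.5 cm)

def quarter_wave_resonance(length_m, c=SPEED_OF_SOUND):
    """Fundamental resonance of a closed-end tube: f = c / (4 * L)."""
    return c / (4.0 * length_m)

print(f"{quarter_wave_resonance(CANAL_LENGTH):.0f} Hz")  # ~3430 Hz, inside the 1500-5000 Hz band
```

The computed resonance of roughly 3.4 kHz sits near the middle of the 1500-5000 Hz range where the canal raises sound pressure at the tympanic membrane.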
Sound that reaches the tympanic membrane causes the membrane to vibrate, and these vibrations set in motion the three small bones of the middle ear: the malleus, the incus, and the stapes. The footplate of the stapes is located in an opening of the cochlear bone—the oval window. Moving in a pistonlike fashion, the stapes sets the cochlear fluid into motion and thereby converts sound (pressure fluctuations in the air) into motion of the cochlear fluid. Motion of the fluid in the cochlea begins the neural process known as hearing.
There are two small muscles in the middle ear: the tensor tympani and the stapedius muscles. The former pulls the manubrium of the malleus inward, while the latter is attached to the stapes and pulls the stapes in a direction that is perpendicular to its pistonlike motion. The stapedius muscle is the smallest striated muscle in the body, and it contracts in response to an intense sound. This is known as the acoustic middle-ear reflex. The muscle's contraction reduces sound transmission through the middle ear and thus acts as a regulator of input to the cochlea. Perhaps a more important function of the stapedius muscle is that it contracts immediately before and during a person's own vocalization, reducing the sensitivity of the speaker's ears to his or her own voice and possibly reducing the masking effect of an individual's own voice. The role of the tensor tympani muscle is less well understood, but it is thought that contraction of the tensor tympani muscle facilitates proper ventilation of the middle-ear cavity. These two muscles are innervated by the facial (VIIth) nerve for the stapedius and the trigeminal (Vth) nerve for the tensor tympani. The acoustic stapedius reflex plays an important role in the clinical diagnosis of disorders affecting the middle ear, the cochlea, and the auditory nerve.
Vibrations in the cochlear fluid set up a traveling wave on the basilar membrane of the cochlea. When tones are used to set the cochlear fluid into vibration, one specific point on the basilar membrane will vibrate with a higher amplitude than any other. Therefore, a frequency scale can be laid out along the basilar membrane, with low frequencies near the apex and high frequencies near the base of the cochlea.
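One widely used empirical description of this frequency-place map is Greenwood's function for the human cochlea. The sketch below assumes Greenwood's published constants for humans and expresses position as a fraction of the distance from the apex to the base; it is an approximation, not anatomical ground truth.

```python
def greenwood_frequency(x):
    """Greenwood's empirical frequency-place map for the human cochlea.

    x is the fractional distance along the basilar membrane measured
    from the apex (x = 0.0) to the base (x = 1.0). The constants are
    Greenwood's published fit for humans; treat them as approximate.
    """
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_frequency(x):8.0f} Hz")
```

Consistent with the text, the lowest frequencies (tens of hertz) map near the apex and the highest (around 20 kHz) near the base, with frequency rising roughly exponentially along the membrane.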
The sensory cells that convert the motion of the basilar membrane into a neural code in individual auditory nerve fibers are located along the basilar membrane. They are also known as hair cells, because they have hairlike structures on their surfaces. The hair cells in the mammalian cochlea function as mechanoreceptors: motion of the basilar membrane causes deflection of the hairs, starting a process that eventually results in a change in the discharge rate of the nerve fiber connected to each hair cell. This process includes the release of a chemical transmitter substance at the base of the hair cells that controls the discharge rate of the nerve fiber (see illustration).
The frequency selectivity of the basilar membrane provides the central nervous system with information about the frequency or spectrum of a sound, because each auditory nerve fiber is “tuned” to a specific frequency. The frequency of a sound is also represented in the time pattern of the neural code, at least for frequencies up to 5 kHz. Thus, the frequency or spectrum of a sound can be coded for place and time in the neural activity in the auditory nervous system. See Audiometry
Auditory nervous system
The ascending auditory nervous system consists of a complex chain of clusters of nerve cells (nuclei), connected by nerve fibers (nerve tracts). The chain of nuclei relays and transforms auditory information from the periphery of the auditory system, the ear, to the central structures, or auditory cortex, which is believed to be associated with the ability to interpret different sounds. Neurons in the entire auditory nervous system are, in general, organized anatomically according to the frequency of a tone to which they respond best, which suggests a tonotopical organization in the auditory nervous system and underscores the importance of representations of frequency in that system. However, when more complex sounds were used to study the auditory system, qualities of sounds other than frequency or spectrum were found to be represented differently in different neurons in the ascending auditory pathway, with more complex representation in the more centrally located nuclei. Thus, the response patterns of the cells in each division of the cochlear nucleus are different, which indicates that extensive signal processing is taking place. Although the details of that processing remain to be determined, the cells appear to sort the information and then relay different aspects of it through different channels to more centrally located parts of the ascending auditory pathway. As a result, some neurons seem to respond only if more than one sound is presented at the same time, others respond best if the frequency or intensity of a sound changes rapidly, and so on.
Another important feature of the ascending auditory pathway is the ability of particular neurons to signal the direction of sound origination, which is based on the physical differences in the sound reaching the two ears. Certain centers in the ascending auditory pathway seem to have the ability to compute the direction to the sound source on the basis of such differences in the sounds that reach the ears.
Knowledge of the descending auditory pathway is limited to the fact that the most peripheral portion can control the sensitivity of the hair cells.
The ability to perceive sound arriving from distant vibrating sources through the environmental medium (such as air, water, or ground). The primary function of hearing is to detect the presence, identity, location, and activity of distant sound sources. Sound detection is accomplished using structures that collect sound from the environment (outer ears), transmit sound efficiently to the inner ears (via middle ears), transform mechanical motion to electrical and chemical processes in the inner ears (hair cells), and then transmit the coded information to various specialized areas within the brain. These processes lead to perception and other behaviors appropriate to sound sources, and probably arose early in vertebrate evolution.
Sound is gathered from the environment by structures that are variable among species. In many fishes, sound pressure reaching the swim bladder or another gas-filled chamber in the abdomen or head causes fluctuations in volume that reach the inner ears as movements. In addition, the vibration of water particles that normally accompany underwater sound reaches the inner ears to cause direct, inertial stimulation. In land animals, sound causes motion of the tympanic membrane (eardrum). In amphibians, reptiles, and birds, a single bone (the columella) transmits tympanic membrane motion to the inner ears. In mammals, there are three interlinked bones (malleus, incus, and stapes). Mammals that live underground may detect ground-borne sound via bone conduction. In whales and other sea mammals, sound reaches the inner ears via tissue and bone conduction.
The inner ears of all vertebrates contain hair-cell mechanoreceptors that transform motion of their cilia to electrochemical events resulting in action potentials in cells of the eighth cranial nerve. Patterns of action potentials reaching the brain represent sound wave features in all vertebrates. All vertebrates have an analogous set of auditory brain centers. See Ear (vertebrate)
Experiments show that vertebrates have more commonalities than differences in their sense of hearing. The major difference between species is in the frequency range of hearing, from below 1 Hz to over 100,000 Hz. In other fundamental hearing functions (such as best sensitivity, sound intensity and frequency discrimination acuity, time and frequency analysis, and source localization), vertebrates have much in common. All detect sound within a restricted frequency range. All species are able to detect sounds in the presence of interfering sounds (noise), discriminate between different sound features, and locate the sources of sound with varying degrees of accuracy.
The sensitivity range is similar among all groups, with some species in each group having a best sensitivity in the region of -20 to 0 dB. Fishes, amphibians, reptiles, and birds hear best between 100 and 5000 Hz. Only mammals hear at frequencies above 10,000 Hz; among mammals, humans and elephants have the poorest high-frequency hearing.
a function of humans and animals that enables them to perceive sounds. It is effected by mechanical receptors and nerves that constitute the auditory system. In humans, sounds produce an acoustic sensation that reflects the parameters of the sound signals. For example, the intensity or frequency of acoustic vibrations is perceived as loudness or pitch.
The nature of hearing varies greatly among animal species according to their evolutionary level, habitat, and those features of sound signals that are of biological significance for each species. Insects were the first animals to develop an auditory system. Such a system exists in all vertebrates and is most fully developed in mammals, whose perception of sounds results from a systematic analysis of information received in the auditory system.
As sound waves pass through the external auditory meatus (external ear), they cause the tympanic membrane to vibrate. The vibrations are transmitted through the connected ossicles in the middle ear to the liquid mediums—the perilymph and endolymph—of the inner ear. The resulting hydromechanical vibrations cause the cochlear membrane (the basilar membrane with surface receptors) to vibrate. Owing to the lengthwise gradient of the mechanical properties of the basilar membrane, vibrations of maximum amplitude occur at the base of the cochlea with high frequencies of stimulation, and at the cochlea’s apex with low frequencies.
The organ of Corti transforms this mechanical energy into excitation of the receptors. This excitation in turn stimulates the fibers of the acoustic nerve. The action potential thus produced is transmitted to the central auditory system. Sounds may be perceived both by air conduction and by bone conduction, that is, by means of the bones of the skull.
Hearing may be tested by examining the auditory system as a whole with psychoacoustic methods, which measure sound perception by articulation response and by observing the body’s motor or autonomic reactions. Hearing may also be tested by examining the individual elements of the auditory system. This is done by investigating the bioelectric potentials of the receptors and nerves of the auditory system and by investigating the transmissive activity of this system’s mechanical formations.
When hearing is examined by psychoacoustic methods (pure tones are generally used as stimuli), the sensitivity of hearing is evaluated from the absolute threshold of audibility, which is the minimum sound pressure in decibels (dB) that can be heard by the subject. The range of perceived frequencies is described by an audibility curve, which plots the absolute threshold of audibility against the tone frequency in hertz (Hz) or kilohertz (kHz).
Humans perceive frequencies from 10–20 Hz to approximately 20 kHz. Frequencies lower than 10 Hz are not perceived as continuous sound. There is evidence that frequencies higher than 20 kHz can be perceived if sound is conducted through the bones of the skull. The threshold of audibility in humans is lowest at frequencies of 1–3 kHz, where the threshold sound pressure is about 2 × 10⁻⁵ newtons/m². Sounds of very high intensity cause pain; the pain threshold is about 140 dB above the 2 × 10⁻⁵ newtons/m² level. In some animals, the range of perceived frequencies differs significantly from that in humans: for example, it is 50–100 Hz to 3–5 kHz in fishes and 100 Hz to 200 kHz in dolphins.
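The decibel figures above are sound pressure levels measured relative to the 2 × 10⁻⁵ newtons/m² reference pressure. A short sketch of the standard conversion (the specific input values are only illustrative):

```python
import math

P_REF = 2e-5  # reference sound pressure, newtons/m^2 (20 micropascals)

def spl_db(pressure):
    """Sound pressure level in dB re 20 uPa: 20 * log10(p / P_REF)."""
    return 20.0 * math.log10(pressure / P_REF)

def pressure_from_spl(db):
    """Sound pressure in newtons/m^2 corresponding to an SPL in dB."""
    return P_REF * 10 ** (db / 20.0)

print(spl_db(2e-5))            # 0.0 -- the nominal threshold near 1-3 kHz
print(pressure_from_spl(140))  # about 200 N/m^2, near the pain threshold
```

The 140 dB pain threshold thus corresponds to a sound pressure ten million times the reference value.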
Auditory discrimination is measured by differential thresholds, which specify the minimum perceptible change in a parameter of sound. In humans, within an average range of sound intensities and frequencies, the differential threshold for intensity is 0.3–0.7 dB and the threshold for frequency is 2–8 Hz. Intensifying a sound improves discrimination (the differential threshold becomes lower). The ability to discriminate between sounds is also manifested in the perception of speech signals and of tonal intervals in music. The ability to identify the pitch of a musical sound without an external reference is termed absolute pitch.
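The intensity threshold quoted in decibels can be restated as a fractional change in sound power. A minimal sketch of the conversion (the dB-to-ratio formula is standard; the threshold values are those quoted above):

```python
def db_to_power_ratio(delta_db):
    """Power ratio corresponding to an intensity change of delta_db dB."""
    return 10 ** (delta_db / 10.0)

# The quoted just-noticeable difference of 0.3-0.7 dB corresponds to a
# power change of roughly 7-17 percent:
for jnd_db in (0.3, 0.7):
    change = (db_to_power_ratio(jnd_db) - 1.0) * 100.0
    print(f"{jnd_db} dB -> {change:.1f}% change in power")
```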
Over a period of time, the auditory system can accumulate information about sound signals. This ability is manifested by a lowering of the thresholds of audibility and of the differential thresholds for intensity and frequency when there is an increase (to certain critical limits) in the duration of the sound signals. The perception of sounds may decrease to the point of complete disappearance in the presence of other sounds, a phenomenon known as masking. Hearing sensitivity is impaired by prolonged exposure to loud sounds.
Hearing also identifies the location of a sound source. This is generally effected through the interaction of the two symmetrical halves of the auditory system, an interaction known as the binaural effect. The main cues permitting spatial localization when the sound source shifts from the midline of the head are the interaural differences in the arrival time and in the intensity of the sound signals. The difference in intensity results from the head's shadow effect. Bats, dolphins, and certain birds have a special type of hearing, echolocation, which enables them to determine the location, shape, size, and physical composition of objects by means of sounds emitted by the animals themselves and then reflected back from the objects.
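The interaural time difference can be estimated from head geometry. A common first-order estimate is Woodworth's spherical-head formula; the sketch below assumes a head radius of 8.75 cm, an illustrative rather than measured value.

```python
import math

HEAD_RADIUS = 0.0875    # m; an assumed average adult head radius
SPEED_OF_SOUND = 343.0  # m/s in air

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head estimate of the interaural time
    difference: ITD = (r / c) * (theta + sin(theta)), where theta is
    the source azimuth measured from straight ahead."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_seconds(az) * 1e6:5.0f} microseconds")
```

Under these assumptions the time difference grows from zero at the midline to roughly 650 microseconds for a source directly to one side, which is the scale of cue the binaural system must resolve.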
Current theories of hearing deal with the auditory system's detection and discrimination of sounds. For example, frequency analysis in hearing is viewed as the result of the spectral decomposition of a signal along the frequency axis of the cochlear membrane, a concept formulated by H. L. F. von Helmholtz in the 19th century. According to the place theory, this process is followed by the excitation of groups of neurons in the central part of the auditory system. The place theory was supplemented by the concept of time-and-frequency analysis, which accounts for the analysis of the periodicity of signals. Thus, hearing performs both a spectral and a temporal analysis of frequency.
IA. A. AL’TMAN