Are the words "ear" and "hearing" related?

High-tech in the inner ear

Hearing occupies a special position among the human senses. It not only mediates a constant exchange of information between the environment and the brain that cannot be switched off, but is also the basic prerequisite for all interpersonal communication and therefore more important for social coexistence than any other sense. For people who are deaf in both ears, there is only one tried and tested aid: the cochlear implant. Sebastian Hoth from Professor Hagen Weidauer's team at the Heidelberg Ear, Nose and Throat Clinic reports on the only artificial replacement for a sensory organ to date.

Words like obey, hearken and belong (in German gehorchen, horchen and gehören, all formed from hören, "to hear"), whose connection with auditory perception we are usually not even aware of, as well as the proverbial effects of the Babylonian confusion of tongues, make it clear that the structure of human societies is largely determined by the sending and receiving of acoustic signals. On a completely different level, for example when enjoying music, hearing contributes to joy and emotional balance. Probably most important for the development of every human individual is that most learning processes, and thus the development of intelligence, are only made possible by functioning hearing. The relationship between the German words taub ("deaf") and doof ("stupid") can be traced back to the English deaf and the Dutch doof, both of which denote the state of deafness. A hearing person can probably not judge what it means to have to do without hearing. The state of complete acoustic isolation and the extensive loss of interpersonal contacts often result in emotional instability and severe depression in those affected.
At the moment there is only one tried and tested method of helping patients who are deaf in both ears to regain hearing impressions: fitting with a cochlear implant. We have been using this method at the Ear, Nose and Throat Clinic of Heidelberg University since 1986. The cochlear implant is not a hearing aid in the conventional sense, in which the sound signals are amplified and fed to the damaged ear via an acoustic path. Rather, as a replacement for the no longer functional inner ear, an electronic device is implanted that electrically stimulates the auditory nerve. It consists of an encapsulated microprocessor about two centimeters in diameter that is inserted into an artificially created recess in the bone behind the ear. Attached to the encapsulated housing are a loop-shaped receiver coil and a permanent magnet, as well as a thin tube-shaped extension carrying 22 platinum electrodes at its tip. During the operation, these electrodes are inserted into the lower one and a half turns of the cochlea. The spatial arrangement of the electrodes within the cochlea is adapted to the natural pitch processing of the inner ear: electrical stimulation via the foremost electrode triggers the impression of a low tone, while the rearmost electrodes at the lower, broad end of the cochlea convey high-tone impressions.
In addition to the implant, the patient receives a speech processor and a "headset" to be worn outside. The headset consists of a microphone, which - similar to a hearing aid - is worn in a housing behind the auricle, and a transmitter coil, which, like the receiver coil of the implant, is equipped with a permanent magnet. The two magnets below and above the scalp allow the outer coil to be aligned and held in place through the skin. The headset is connected to the speech processor via a thin cable, which carries the microphone signals down to the processor and the high-frequency pulses back up to the transmitter coil. In the speech processor, the sound signals are processed in analog and digital form and encoded into a sequence of individual high-frequency pulses that the implant can understand. Electrodes and implanted electronics do not stimulate the auditory nerve on their own; they only carry out the instructions given by the speech processor and transmitter coil. Thanks to this construction principle, patients can benefit from technical improvements in external speech signal processing without having to undergo another operation: the speech processor is exchanged for the improved model or its speech processing is reprogrammed, while the implant remains the same. The energy requirement of the implant is covered by the high-frequency pulses that are generated by the speech processor and transmitted through the skin via the transmitter coil worn on the head. Follow-up operations to change a battery are therefore not required. The implant is designed to last a lifetime and, according to experience to date, functions for well over a decade without technical failures.
A cochlear implant is used to treat patients who are either completely deaf in both ears or whose hearing in both ears is so severely impaired that conventional hearing aids provide no benefit in terms of speech understanding. Experience has shown that the prospects of success are particularly good in adults who have recently become deaf. If the patient has been deaf for a long time, an operation is still possible.
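The tonotopic assignment described above, low frequencies to the foremost (apical) electrode and high frequencies to the rearmost (basal) electrodes, can be pictured with a small sketch. Only the number of 22 electrodes comes from the text; the analysis range and the logarithmic band spacing are illustrative assumptions, not the actual parameters of the device.

```python
import math

NUM_ELECTRODES = 22              # electrode rings on the intracochlear array (from the text)
F_LOW, F_HIGH = 200.0, 8000.0    # assumed analysis range in Hz (illustrative only)

def electrode_for_frequency(freq_hz: float) -> int:
    """Map a frequency to an electrode index.

    Index 1 = foremost (apical) electrode -> low-pitch impression,
    index 22 = rearmost (basal) electrode -> high-pitch impression.
    Bands are spaced logarithmically, a common but here assumed choice.
    """
    freq_hz = min(max(freq_hz, F_LOW), F_HIGH)
    position = math.log(freq_hz / F_LOW) / math.log(F_HIGH / F_LOW)  # 0..1 along the array
    return 1 + round(position * (NUM_ELECTRODES - 1))

if __name__ == "__main__":
    for f in (250, 500, 1000, 2000, 4000, 8000):
        print(f"{f:5d} Hz -> electrode {electrode_for_frequency(f)}")
```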
However, the more time has passed since the onset of deafness, the more the memory of previous hearing experiences fades and the more difficult it is for the patient to extract useful information from the encoded speech signals. Implantation is also possible in principle in patients who were born deaf or who became deaf within the first few years of life - prelingually - but in general no open speech understanding is achieved. After the operation, those treated have to learn language in the form generated by the speech processor without any useful prior knowledge. This is all the more difficult the older the patient is, because only in childhood does the brain have the performance and plasticity necessary to cope with such demanding learning processes. Prelingual deafness can even be an advantage in one respect, because the lack of hearing experience spares the patient much of the disappointment that a postlingually deaf patient experiences when comparing previous and new hearing impressions.
Certain aptitude tests are required before an operation. In the tone audiogram we determine the frequency-dependent hearing threshold, and in the speech audiogram the speech understanding that can be achieved with and without hearing aids. We also check the function of the organ of equilibrium, which is closely related to hearing. Imaging methods such as high-resolution computed tomography and magnetic resonance tomography provide information on whether the anatomical conditions are normal. The detection of fluid-filled cavities in the inner ear is important, as otherwise the intracochlear electrodes cannot be inserted. Many diseases that lead to deafness are accompanied by a slowly progressing ossification of the cochlea. An implant must therefore be inserted as soon as possible, since otherwise the only alternative is extracochlear electrodes, with which the patient can generally differentiate the processed acoustic information much more poorly. Unfortunately, the sensitivity and resolution of computed tomography and magnetic resonance imaging are not always sufficient to obtain an absolutely reliable result. A beginning or partial ossification of the scala tympani, which is so important for the implantation, is therefore often only noticed by the surgeon. He then has to decide on a case-by-case basis whether to insert the electrodes into the scala vestibuli instead or to use extracochlear electrodes.
The promontory test, a preoperative electrical stimulation of the auditory nerve that provides information about its functionality, is an important decision-making aid for the surgeon. For this test, the examiner passes a needle electrode from the external auditory canal through the eardrum so that its tip lies on the promontory or in the niche of the round window. With the help of a stimulation device, electrical pulses of variable current strength are delivered. For each stimulus frequency, both the patient's subjective perception threshold and the discomfort threshold can be determined by slowly increasing the stimulus intensity from zero. From these the audiologist can calculate the dynamic range, provided that the sensations triggered were of an auditory nature. The prerequisites for implantation are favorable if the perception thresholds are low and the discomfort thresholds are high, i.e. if a wide dynamic range is available. Frequently, high-frequency stimuli cause only non-auditory sensations such as stinging or a feeling of pressure.
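The dynamic range referred to here is simply the span between the perception threshold and the discomfort threshold at each stimulus frequency. A minimal sketch of that calculation follows; the frequencies, current values and units are invented for illustration and do not come from the promontory test described in the text.

```python
# Hypothetical promontory-test results: stimulus frequency in Hz mapped to
# (perception threshold, discomfort threshold) in arbitrary current units.
results = {
    100: (20.0, 240.0),
    400: (35.0, 210.0),
    1600: (60.0, 110.0),
}

def dynamic_range(perception: float, discomfort: float) -> float:
    """Dynamic range = discomfort threshold minus perception threshold."""
    return discomfort - perception

for freq, (t_perc, t_disc) in results.items():
    dr = dynamic_range(t_perc, t_disc)
    print(f"{freq:5d} Hz: perception {t_perc:6.1f}, discomfort {t_disc:6.1f}, dynamic range {dr:6.1f}")
```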
Since the site of stimulation during the test does not match that of the electrodes implanted later, the occurrence of such perceptions does not imply a contraindication to implantation. If in doubt, the test is repeated at a later date. In addition to the dynamic range for stimulus pulses of different frequencies, we examine the temporal resolution in the "gap detection test": the patient must distinguish interrupted pulses from continuous pulses of the same total duration. Good values for the temporal resolution lie between ten and 50 milliseconds. They allow the cautious prognosis that good speech understanding will be achievable. We test frequency discrimination in a similar way.
To implant the microprocessor, the surgeon creates a round bone bed behind the ear, into which he sinks the encapsulated microchip. He inserts the electrode carrier through the cleared mastoid process and the middle ear into an opening immediately in front of the round window in the scala tympani, so that the electrode rings come very close to the peripheral fibers of the auditory nerve and the spiral ganglion cells. The foremost electrode extends about 20 millimeters into the cochlea. After the operation, the doctors can check the position of the electrodes by X-ray. The first functional test is possible after about four weeks, when the implant has healed in. Until then, the patient can hear just as little as before the operation, because he is not yet equipped with the speech processor. Occasionally, however, patients report auditory sensations in the operated ear, especially with vigorous body movements or when touching the face; these do not correlate with acoustic stimuli. They are probably due to electrochemical processes in the newly operated inner ear. In many patients, the opening of the inner ear spaces causes a usually temporary disturbance of the organ of equilibrium, which is closely related to the cochlea. Patients rarely complain of chronic dizziness, which is then difficult to control therapeutically and can be very unpleasant for the person concerned. The use of electrical stimuli in a region of the head in which a large number of nerve connections come together can also cause balance disorders if the vestibular nerve is inadvertently stimulated. The problem can be recognized - just like occasional irritation of the facial nerve - when setting the stimulation parameters and can be solved by deactivating the electrodes in question.
The speech processor is adjusted for the first time around four weeks after the implantation. With the help of a computer to which the processor is connected, the threshold and discomfort levels of the electrical stimulus pulses are determined and set for all electrodes using special hardware and software. Concentrated cooperation and reliable information from the patient are essential here. If individual electrodes do not produce an auditory sensation even at high current strength, they can be identified during this procedure and excluded from the transmission of stimuli. With the help of the determined threshold values, we create a signal processing program, try it out on the computer, modify it and finally transfer it to the patient's speech processor. Similar to the fitting of a hearing aid, but one level higher in the physiology of hearing, this procedure defines a tailor-made rule that maps the frequency and intensity range of external sound signals onto the patient's restricted "hearing field".
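The "tailor-made rule" just mentioned can be pictured as a compression of the acoustic intensity range into the narrow electrical window between each electrode's threshold and discomfort level (here abbreviated t_level and c_level). The sketch below assumes a simple linear mapping and invented levels; the actual fitting software and its mapping law are not described in the text.

```python
def map_intensity_to_current(level_db: float,
                             t_level: float, c_level: float,
                             db_min: float = 25.0, db_max: float = 65.0) -> float:
    """Map an acoustic input level (dB) onto the electrical range [t_level, c_level] of one electrode.

    Assumed linear mapping of a 40 dB acoustic window onto the patient's
    restricted dynamic range; real fitting software uses more refined rules.
    """
    # Clamp to the assumed acoustic input window, then interpolate linearly.
    level_db = min(max(level_db, db_min), db_max)
    fraction = (level_db - db_min) / (db_max - db_min)   # 0..1
    return t_level + fraction * (c_level - t_level)

# Invented threshold and discomfort levels (arbitrary current units) for one electrode:
for db in (20, 30, 45, 60, 70):
    print(f"{db} dB -> stimulation level {map_intensity_to_current(db, 100.0, 180.0):.1f}")
```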
The coding strategy most frequently used in practice by the speech processor is based on the principle of parameter extraction. It rests on the insight that, owing to the limited temporal and frequency resolution, cochlear implant patients cannot cope with the full information density of spoken language. The normal inner ear, with around 30,000 sensory cells, correspondingly high frequency resolution and very powerful signal processing, is replaced by a device that can separate only around 20 frequency ranges. Avoiding simultaneous stimulation as far as possible, it must encode the sound intensity in these frequency bands with relatively coarse current pulses within a very narrow dynamic range and deliver it to auditory nerve fibers lying at an undefined distance. The external speech processor therefore has the task of reducing speech to its essential information-carrying components. To date, however, the details of cochlear signal processing, whose function is to be mimicked by implant and speech processor, are not fully understood. In this somewhat unsatisfactory situation, among many conceivable possibilities, the extraction and encoding of the fundamental frequency, the first and second formants and the overall amplitude has proven itself: the "F0F1F2 strategy". The position of the formants, i.e. the frequency ranges of high intensity with the help of which we distinguish different vowels, determines the selection of the stimulating electrodes. The fundamental frequency determines the pulse rate on the electrodes. And the current strength of the stimulus pulses is derived from the sound intensity averaged over many frequencies, while respecting the patient-specific limits.
The auditory nerve and the downstream information processing are evidently very flexible and can adapt to very different coding strategies through learning. We make use of this when working with patients. With the "multi-peak strategy", for example, we achieve a much better understanding of consonants in most cases by additionally encoding high-frequency signal components and transmitting them to three electrodes located at the base of the cochlea. There are many degrees of freedom within each coding strategy: for example, the assignment of a reference electrode to each active electrode, the division of the frequency range among the various electrodes, the course of the input-output characteristics and the position of the threshold for noise suppression. Only the laboratory computer through which the speech processor is programmed has access to these parameters. The patient can only change the input sensitivity and, if necessary, activate a compression circuit. The art of fitting consists in coming as close as possible to the optimal parameter set for the patient among the many thousands of conceivable settings. Because the technical possibilities of speech processing available today are so diverse that they are no longer easily manageable, adaptive fitting methods are already being tested in the audiological laboratories of many clinics, in which the most favorable parameter combination is determined iteratively in a semi-automatic process, according to programmed rules and taking the patient's feedback into account. Modern psychoacoustic fitting aids based on loudness scaling and phoneme recognition tests are already available and are far more pragmatic in their approach.
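To picture the F0F1F2 principle described above: the fundamental frequency sets the pulse rate, the formant frequencies select the electrodes, and the averaged amplitude sets the current within the patient-specific limits. The following sketch repeats the assumed frequency-to-electrode mapping from the earlier example; all numeric values are illustrative and are not the actual coding tables of the device described here.

```python
import math
from dataclasses import dataclass

NUM_ELECTRODES = 22
F_LOW, F_HIGH = 200.0, 8000.0   # assumed analysis range, as in the earlier sketch

def electrode_for_frequency(freq_hz: float) -> int:
    """Assumed logarithmic frequency-to-electrode mapping (1 = apical, 22 = basal)."""
    freq_hz = min(max(freq_hz, F_LOW), F_HIGH)
    position = math.log(freq_hz / F_LOW) / math.log(F_HIGH / F_LOW)
    return 1 + round(position * (NUM_ELECTRODES - 1))

@dataclass
class StimulusFrame:
    """One stimulation cycle derived from a short speech segment (simplified)."""
    pulse_rate_hz: float   # follows the fundamental frequency F0
    electrode_f1: int      # electrode selected from the first formant F1
    electrode_f2: int      # electrode selected from the second formant F2
    current_level: float   # derived from the overall amplitude, kept within the patient's limits

def encode_f0f1f2(f0: float, f1: float, f2: float, amplitude: float,
                  t_level: float = 100.0, c_level: float = 180.0) -> StimulusFrame:
    """Toy illustration of the F0F1F2 parameter-extraction strategy.

    F0 sets the pulse rate, the formants select the electrodes, and the
    averaged amplitude (0..1) sets the current between the threshold and
    discomfort levels of the electrode. All constants are illustrative.
    """
    amplitude = min(max(amplitude, 0.0), 1.0)
    return StimulusFrame(
        pulse_rate_hz=f0,
        electrode_f1=electrode_for_frequency(f1),
        electrode_f2=electrode_for_frequency(f2),
        current_level=t_level + amplitude * (c_level - t_level),
    )

if __name__ == "__main__":
    # Rough formant values of an /a/-like vowel spoken at 120 Hz (illustrative).
    print(encode_f0f1f2(f0=120.0, f1=800.0, f2=1200.0, amplitude=0.6))
```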
Immediately after the initial fitting, but also later, the threshold values can change significantly, on the one hand because the patient gets used to the new stimuli, on the other hand because of interactions between the implant and the surrounding tissue. For this reason, the setting of the speech processor is checked daily at the beginning and changed if necessary on the basis of the patient's subjective statements. The stimulation parameters can only be corrected in a targeted way if this information is precise and reliable. This requirement is often not met, especially in children, but also in prelingually deaf adults. Objective measurements, for example the electrically triggered stapedius reflex or the acoustically evoked potentials, with which the Heidelberg ENT clinic has already gained extensive experience, then serve as a supplement. The audiologist uses the results to draw conclusions about the location of the acoustic and electrical stimulus thresholds.
In parallel with the fine-tuning of the speech processor, the patient begins the hearing training program, in which he receives the support needed to process the new impressions. He begins with simple exercises to distinguish long from short and high from low tones, and practices as soon as possible recognizing elements of speech. Three levels of success are possible: acoustic orientation, i.e. the perception and recognition of environmental noises, is to be expected in all patients. The implant usually also offers support for lip reading in prelingually deaf people, while open speech understanding without eye contact with the speaker is achieved only in postlingually deaf patients. However, the medical history is not the only factor that decides the success of the rehabilitation. Rather, the patient's mental state and eagerness to learn play a major role, as do his willingness to experiment, his tolerance for frustration, his own and external motivation, and possible as yet unknown effects of long-term acoustic deprivation. A reliable prognosis is therefore still not possible to this day.
Even a patient who has been successfully fitted with a cochlear implant is still classified as severely hearing impaired. This finding, made in various court judgments, takes into account the fact that a functional failure or a dead battery of the speech processor, as well as the detachment of the head coil, can put the patient back into a state of complete deafness at any time. As a matter of principle, he cannot use the processor while asleep or during many sports. In addition, even with optimally successful rehabilitation - that is, when purely acoustically mediated open speech understanding is achieved - a patient always remains severely hearing impaired, because only one ear is supplied and interference signals in noisy surroundings are inadequately suppressed. Thanks to a built-in test protocol, the implant itself is not very susceptible to interference signals, and the probability that strong high-frequency transmitters trigger auditory or other perceptions is negligible. As a precautionary measure, implant wearers should nevertheless bypass the electromagnetic metal detectors at airport security checks. In the aircraft they have to switch off their speech processor, at least during take-off and landing, so that radio traffic and flight computers are not disturbed. Cochlear implant wearers must not undergo magnetic resonance imaging because of the strong magnetic fields.
After about two decades of practical experience, fitting a cochlear implant is considered a safe and reliable rehabilitation measure that rarely requires reoperation and from which the deaf patient can gain a great deal without serious risk. Worldwide, experience has now been gathered with many thousands of patients. The only artificial substitute for a sensory organ to date is increasingly being used, with great effort and considerable success, in children as well. In normal conversation, the best patients are almost indistinguishable from people with normal hearing. A reliable prognosis of success is still impossible to this day; even if it were possible, its informative value should not be overestimated. If one takes into account not only the objective criterion of the speech understanding ultimately achieved, but also the subjective satisfaction of the patient being cared for, the benefit is inestimably high even when the formerly deaf patient does not get beyond a rough acoustic orientation.

Author:
Dr. Sebastian Hoth
Ear, Nose and Throat Clinic, Im Neuenheimer Feld 400, 69120 Heidelberg,
Telephone (06221) 56 67 98