Otocontrol: Improved spectral-temporal sensitivity and binaural integration in the hearing impaired through automated sensorineural feedback.
Getting both ears to cooperate better
About ten percent of the Dutch population suffers from some degree of hearing loss. Two major technological solutions are available to restore hearing capacity: hearing aids, which essentially record sound with microphones and amplify it, and cochlear implants, which translate sound into an electrical signal that is fed directly into the auditory nerve.
Integrating mixed signals
People with hearing impairments often wear aids in both ears: two hearing aids, two cochlear implants, or a hearing aid combined with a cochlear implant. In all cases, the brain receives different signals from the two ears, not only because cochlear implants and conventional hearing aids work differently, but also because both ears are seldom impaired in exactly the same way. The brain integrates the information coming from both ears; when the two signals differ profoundly, the patient may, for example, not understand speech as well as he or she otherwise might.
From brain signal to optimized device
The aim of this project is to record directly from the brain the signals received from both ears, in combination with the patient's perceptual response to a presented sound, and to use these measurements to optimize the integration of the two hearing devices for each individual patient. At the moment, these devices are tuned subjectively: patients are asked to fill out questionnaires about what they can and cannot hear. This project should result in an objective, automated system that generates abstract sound patterns to which the patient reacts, measures the resulting signals in the patient's brain, and learns the optimal settings for his or her hearing devices. This solution may not be possible for everybody, though, as sometimes the difference in hearing capacity between the two ears is simply too big (e.g. when there is no frequency overlap).
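The closed loop described above — present a sound pattern, measure the brain's response, adjust the device settings, repeat — can be sketched in miniature. The sketch below is purely illustrative and not the project's actual method: the "brain response" is simulated as a noisy score that peaks at an unknown pair of device settings, and a simple hill-climbing loop stands in for whatever learning procedure the researchers will use. All names and parameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def present_stimulus(settings, rng):
    """Hypothetical stand-in for playing an abstract sound pattern through
    the two devices and recording a quality score from the measured brain
    response. Simulated here as a smooth function that peaks when the two
    (illustrative) per-ear settings are well matched, plus measurement noise."""
    target = np.array([0.3, -0.2])  # the unknown optimal settings (assumed)
    return -np.sum((settings - target) ** 2) + rng.normal(0, 0.01)

def optimize_settings(n_trials=200, step=0.1):
    """Toy hill-climbing loop: propose a small random change to the device
    settings and keep it whenever the measured response improves."""
    settings = np.zeros(2)  # e.g. per-ear gain offsets (illustrative)
    best = present_stimulus(settings, rng)
    for _ in range(n_trials):
        candidate = settings + rng.normal(0, step, size=2)
        score = present_stimulus(candidate, rng)
        if score > best:
            settings, best = candidate, score
    return settings, best
```

After a few hundred simulated trials the loop settles close to the hidden optimum, which is the essential idea of replacing questionnaires with an automated, measurement-driven fitting procedure.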
The researchers will start by exposing people with healthy hearing to some 1000 different broadband sounds in 15 minutes, measuring their brain responses with techniques such as 4DEEG and near-infrared spectroscopy. These experiments will result in a model describing how different parameters of the hearing devices influence the sound as perceived in the brain. The model will then be tested with patients. Eventually, the researchers hope to develop a system, called Otocontrol, that can be used in the clinical practice of ENT specialists.
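A model relating device parameters to a measured brain response can, in its simplest conceivable form, be a linear regression fitted to the roughly 1000 stimulus-response pairs. The sketch below simulates such a dataset and fits it with ridge regression; the parameter count, the scalar "response feature", and the linear assumption are all illustrative stand-ins, not details from the project.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical experiment: ~1000 broadband stimuli, each described by a few
# device/stimulus parameters (columns of X), and one scalar feature extracted
# from the measured brain response per trial (e.g. an evoked-response amplitude).
n_trials, n_params = 1000, 5
X = rng.normal(size=(n_trials, n_params))           # stimulus/device parameters
true_w = np.array([1.0, -0.5, 0.0, 2.0, 0.3])       # assumed ground-truth weights
y = X @ true_w + rng.normal(0, 0.1, size=n_trials)  # simulated noisy responses

# Ridge regression: a simple model of how strongly each parameter
# influences the measured response.
lam = 1e-2
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_params), X.T @ y)
```

With this many trials the estimated weights recover the simulated influence of each parameter closely, illustrating why a short session with many varied sounds can pin down a per-patient model.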
Patrick Boyle, Senior Director External Research, Advanced Bionics/Sonova
‘We produce both hearing aids and cochlear implants. This project will hopefully provide us with new insights to improve the sound experience of our customers. Not only do the researchers get access to our devices, we also share our calibration algorithms, specialized measurement techniques and research software with them. For the hearing impaired, one of the most difficult things is to determine the direction a sound is coming from and to focus on that one sound. Healthy people can filter out irrelevant sounds; it would be great if we could also achieve that for the hearing impaired.’
Erasmus University Rotterdam, Radboud University Medical Center, Radboud University Nijmegen, Advanced Bionics/Sonova, Artinis, Twente Medical Systems International