Abstract
Automatic speech recognition (ASR) research has traditionally focused on single-talker recognition. In many scenarios, however, the signal of interest is obscured by acoustic interference, including speech from other talkers. The human auditory system exploits the stereo inputs of our ears to spatially filter the acoustic environment. Microphone array techniques can likewise take advantage of multiple inputs. However, even when restricted to a single channel, mixtures of talkers are still parsed remarkably well by humans yet remain indecipherable to conventional single-talker ASR systems. In fact, robustness to noise, reverberation, and interfering speakers is considered one of the six remaining grand challenges of ASR [47], [48]. © 2010 IEEE.