Publication
ICASSP 2000
Conference paper
Speech recognition for a distant moving speaker based on HMM composition and separation
Abstract
This paper describes a hands-free speech recognition method based on HMM composition and separation for speech contaminated not only by additive noise but also by an acoustic transfer function. The method enables an improved user interface in which the user is not encumbered by microphone equipment in noisy and reverberant environments. In this approach, the acoustic transfer function is modeled by an ergodic HMM whose states correspond to different positions of the sound source, so the model can represent source positions even when the speaker moves. The HMM parameters of the acoustic transfer function are estimated by HMM separation, the reverse of the HMM composition process, in which the parameters are estimated by maximizing the likelihood of adaptation data uttered from an unknown position. Measurement of impulse responses is therefore not required. We record the speech of a distant moving speaker in real environments, and the results of recognition experiments on this speech confirm the effectiveness of HMM composition and separation.
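To make the composition step concrete, the sketch below is a rough illustration (not from the paper): it combines the log-spectral state means of a clean-speech HMM, an ergodic transfer-function HMM, and a noise HMM over the Cartesian product of their states, following the observed-signal model H·S + N. The function and field names, the means-only log-add approximation, and the omission of variances, transition probabilities, and the separation (maximum-likelihood re-estimation) step are all assumptions of this sketch.

```python
import numpy as np
from itertools import product

def logadd_compose_mean(mu_speech_log, mu_h_log, mu_noise_log):
    """Approximate log-spectral mean of (H*S + N), given log-spectral
    means of speech S, transfer function H, and additive noise N.
    Means-only log-add approximation; variances are ignored (assumption)."""
    # Multiplication H*S in the linear domain is addition in the log domain.
    reverberant = mu_speech_log + mu_h_log
    # Add the noise in the linear domain, then return to the log domain.
    return np.log(np.exp(reverberant) + np.exp(mu_noise_log))

def compose_hmm(speech_states, h_states, noise_states):
    """Build product states for a composed noisy reverberant-speech HMM.
    Each input is a list of dicts with an 'id' and a log-spectral 'mean';
    the composed state set is the Cartesian product of the three sets."""
    composed = []
    for s, h, n in product(speech_states, h_states, noise_states):
        composed.append({
            "speech_state": s["id"],
            "h_state": h["id"],       # one state per assumed source position
            "noise_state": n["id"],
            "mean": logadd_compose_mean(s["mean"], h["mean"], n["mean"]),
        })
    return composed
```

In this reading, the ergodic transfer-function HMM contributes one state per candidate source position, and HMM separation would run the inverse operation, re-estimating the transfer-function state parameters by maximizing the likelihood of adaptation data; that estimation step is not shown here.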