Publication
ASRU 2003
Conference paper
Forward-backward modeling in statistical natural concept generation for interlingua-based speech-to-speech translation
Abstract
Natural concept generation is critical to the performance of statistical interlingua-based speech-to-speech translation. To improve maximum-entropy-based concept generation, a forward-backward modeling approach is proposed, which generates concept sequences in the target language by selecting the hypothesis with the highest combined conditional probability under both the forward and backward generation models. Statistical language models are further applied to exploit word-level context information. The concept generation error rate is reduced by over 20% on our limited-domain speech translation corpus. Improvements are also achieved in our speech translation experiments.
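The combination described above can be read as a log-linear rescoring of candidate concept sequences. The following is a minimal sketch of that idea, assuming a weighted sum of log-probabilities; the weights (lambda_fb, mu_lm) and the scoring callables are hypothetical stand-ins, not the paper's actual models or parameter values.

```python
import math

def select_hypothesis(hypotheses, p_forward, p_backward, p_lm,
                      lambda_fb=0.5, mu_lm=0.3):
    """Pick the target concept sequence with the highest combined score.

    hypotheses -- candidate concept sequences (e.g. lists of concept labels)
    p_forward  -- callable: forward-model probability of the target concepts
                  given the source
    p_backward -- callable: backward-model probability of the source given
                  the target concepts
    p_lm       -- callable: language-model probability of the word-level
                  realization of a concept sequence
    """
    def score(c):
        # Log-linear interpolation of forward and backward generation
        # models, plus a weighted language-model term.
        return (lambda_fb * math.log(p_forward(c))
                + (1.0 - lambda_fb) * math.log(p_backward(c))
                + mu_lm * math.log(p_lm(c)))

    return max(hypotheses, key=score)
```

In practice such interpolation weights would be tuned on held-out data; the sketch only illustrates how forward, backward, and language-model scores could be combined to select a single hypothesis.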