Review
Multimodal conversational systems for automobiles
Abstract
A multimodal conversational system is developed to provide an intuitive and flexible means of controlling vehicle systems, giving the user the option to operate the system with speech, touch, or any combination of the two. The speech recognition engine uses dynamic semantic models that track current and past contextual information and dynamically modify the language model to increase the accuracy of the speech recognizer. The interaction is controlled by a dialogue manager that responds to input signals with output actions. A general multimodal dialogue-manager architecture is developed that allows a complete separation between the interaction logic and the input signals.
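The separation the abstract describes can be illustrated with a minimal sketch: speech and touch inputs are each translated into the same modality-neutral semantic events, so the dialogue manager's interaction logic never depends on which modality produced the input. All names here (SemanticEvent, from_speech, from_touch, DialogueManager, the "set_temperature" intent) are hypothetical and chosen for illustration; they are not taken from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a modality-independent dialogue manager.
# Both input channels normalize to the same semantic event space,
# so the interaction logic below is written once, per intent,
# not per modality.

@dataclass
class SemanticEvent:
    intent: str                      # e.g. "set_temperature"
    slots: dict = field(default_factory=dict)  # e.g. {"value": 21}

def from_speech(utterance: str) -> SemanticEvent:
    # Stand-in for a speech recognizer + parser (assumption):
    # maps a recognized phrase to a semantic event.
    if "temperature" in utterance:
        return SemanticEvent("set_temperature", {"value": 21})
    return SemanticEvent("unknown")

def from_touch(widget_id: str, value: int) -> SemanticEvent:
    # Touch input maps into the *same* event space as speech.
    if widget_id == "temp_slider":
        return SemanticEvent("set_temperature", {"value": value})
    return SemanticEvent("unknown")

class DialogueManager:
    """Interaction logic: responds to semantic events with output actions,
    with no knowledge of the originating modality."""
    def handle(self, event: SemanticEvent) -> str:
        if event.intent == "set_temperature":
            return f"Setting cabin temperature to {event.slots['value']} degrees"
        return "Sorry, I did not understand that."

dm = DialogueManager()
# The same handler serves both modalities:
print(dm.handle(from_speech("set the temperature please")))
print(dm.handle(from_touch("temp_slider", 19)))
```

Because the dialogue manager consumes only semantic events, a new input modality can be added by writing one more translator function, without touching the interaction logic.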