INTERSPEECH - Eurospeech 2003
Conference paper
Inline updates for HMMs
Abstract
Most training algorithms for HMMs assume that the whole batch of observation sequences is given ahead of time. This is particularly the case for the standard EM algorithm. However, in many applications such as speech, the data is generated by a temporal process. Singer and Warmuth developed online updates for HMMs that process a single observation sequence in each update. In this paper we take this approach one step further and develop an inline update for training HMMs, in which the parameters are updated after processing a single symbol of the current observation sequence. The methodology for deriving the online and the new inline updates is quite different from the standard EM motivation. We show experimentally on speech data that even when all observation sequences are available (batch mode), the online update converges faster than the batch update, and the inline update converges faster still; the standard batch EM update exhibits the slowest convergence.
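To make the three update schedules contrasted in the abstract concrete, here is a minimal sketch for a discrete-observation HMM. It is only meant to show *when* each scheme touches the parameters: per full pass (batch), per sequence (online), or per symbol (inline). The actual online and inline updates of Singer and Warmuth and of this paper are derived differently (not from the EM motivation); the simple convex interpolation used below, along with the function names, learning rates, and toy data, are assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(M, axis=-1):
    """Rescale so rows (or a vector) sum to one."""
    return M / M.sum(axis=axis, keepdims=True)

def forward_backward(obs, A, B, pi):
    """Scaled forward-backward for one discrete sequence.
    Returns state posteriors gamma[t, i] and transition posteriors xi[t, i, j]."""
    T, N = len(obs), A.shape[0]
    alpha, scale = np.zeros((T, N)), np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    gamma = normalize(alpha * beta)
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi[t] = x / x.sum()
    return gamma, xi

def stats(gamma, xi, obs, n_sym):
    """The usual EM expected-count statistics for one sequence."""
    B_num = np.zeros((gamma.shape[1], n_sym))
    for t, o in enumerate(obs):
        B_num[:, o] += gamma[t]
    return xi.sum(axis=0), B_num, gamma[0]

def blend(A, B, pi, a, b, p, lr):
    """Convex interpolation toward freshly re-estimated parameters
    (a stand-in for the paper's update rule, chosen for simplicity)."""
    return (normalize((1 - lr) * A + lr * normalize(a)),
            normalize((1 - lr) * B + lr * normalize(b)),
            normalize((1 - lr) * pi + lr * normalize(p)))

def batch_em(seqs, A, B, pi, iters=20):
    """Batch EM: parameters change only after every sequence is seen."""
    for _ in range(iters):
        acc = [np.zeros_like(A), np.zeros_like(B), np.zeros_like(pi)]
        for obs in seqs:
            g, x = forward_backward(obs, A, B, pi)
            for tot, s in zip(acc, stats(g, x, obs, B.shape[1])):
                tot += s
        A, B, pi = (normalize(m) for m in acc)
    return A, B, pi

def online_em(seqs, A, B, pi, lr=0.3, sweeps=20):
    """Online schedule: parameters are blended after EACH sequence."""
    for _ in range(sweeps):
        for obs in seqs:
            g, x = forward_backward(obs, A, B, pi)
            A, B, pi = blend(A, B, pi, *stats(g, x, obs, B.shape[1]), lr)
    return A, B, pi

def inline_em(seqs, A, B, pi, lr=0.1, sweeps=20):
    """Inline schedule: parameters are blended after EACH SYMBOL,
    using posteriors over the prefix observed so far."""
    for _ in range(sweeps):
        for obs in seqs:
            for t in range(2, len(obs) + 1):  # need >= 2 symbols for xi
                g, x = forward_backward(obs[:t], A, B, pi)
                A, B, pi = blend(A, B, pi, *stats(g, x, obs[:t], B.shape[1]), lr)
    return A, B, pi

# Toy demo (hypothetical data): 3 hidden states, 4 output symbols.
N, M = 3, 4
A0, B0 = normalize(rng.random((N, N))), normalize(rng.random((N, M)))
pi0 = normalize(rng.random(N))
seqs = [rng.integers(0, M, size=20) for _ in range(5)]
for train in (batch_em, online_em, inline_em):
    A, B, pi = train(seqs, A0.copy(), B0.copy(), pi0.copy())
    print(train.__name__, "done")
```

The three trainers share the same E-step and statistics; they differ only in how often the parameters are re-estimated, which is exactly the batch/online/inline distinction the abstract draws. The faster convergence reported in the paper comes from its specific update rule, not from this naive interpolation.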