Yuqing Gao, Yongxin Li, et al.
ICSLP 2000
Traditional statistical models for speech recognition have mostly been based on a Bayesian framework using generative models such as hidden Markov models (HMMs). This paper focuses on a new framework for speech recognition using maximum entropy direct modeling, where the probability of a state or word sequence given an observation sequence is computed directly from the model. In contrast to HMMs, features can be asynchronous and overlapping. This model therefore allows for the potential combination of many different types of features, which need not be statistically independent of each other. In this paper, a specific kind of direct model, the maximum entropy Markov model (MEMM), is studied. Even with conventional acoustic features, the approach already shows promising results for phone level decoding. The MEMM significantly outperforms traditional HMMs in word error rate when used as stand-alone acoustic models. Preliminary results combining the MEMM scores with HMM and language model scores show modest improvements over the best HMM speech recognizer.
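The direct-modeling idea in the abstract can be illustrated with a toy sketch: each local distribution P(state | previous state, observation) is a log-linear (maximum entropy) model over overlapping indicator features, and decoding chains these local distributions together. The states, features, weights, and "START" symbol below are invented for illustration and are not from the paper; a real system would train the weights with iterative scaling or gradient methods.

```python
import math

STATES = ["A", "B"]

def features(prev_state, obs, state):
    """Binary features of (previous state, observation, candidate state).
    As in the direct-model framework, features may overlap and need not
    be statistically independent."""
    return [
        1.0 if state == obs else 0.0,         # candidate state matches observation
        1.0 if state == prev_state else 0.0,  # self-transition indicator
    ]

# Hypothetical weights for the two features above; assumed, not trained.
WEIGHTS = [2.0, 0.5]

def transition_prob(prev_state, obs, state):
    """P(state | prev_state, obs) as a locally normalized log-linear model."""
    def score(s):
        feats = features(prev_state, obs, s)
        return math.exp(sum(w * f for w, f in zip(WEIGHTS, feats)))
    z = sum(score(s) for s in STATES)  # normalize over candidate next states
    return score(state) / z

def viterbi(obs_seq):
    """Decode the most likely state sequence directly from the local model."""
    # "START" is a hypothetical begin-of-sequence symbol.
    prev = {s: (math.log(transition_prob("START", obs_seq[0], s)), [s])
            for s in STATES}
    for o in obs_seq[1:]:
        cur = {}
        for s in STATES:
            lp, path = max(
                (p + math.log(transition_prob(ps, o, s)), trail)
                for ps, (p, trail) in prev.items()
            )
            cur[s] = (lp, path + [s])
        prev = cur
    return max(prev.values())[1]
```

Because each conditional distribution is normalized locally over the next state, the sequence probability is a product of per-step maximum entropy classifiers rather than a joint generative model, which is the key contrast with HMMs drawn in the abstract.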
Wei Zhang, Liang Gu, et al.
ICASSP 2009
Yongxin Li, Yuqing Gao, et al.
ICSLP 2000
Anton Likhododev, Yuqing Gao
ICASSP (IEEE International Conference on Acoustics, Speech and Signal Processing), Proceedings