Publication
INTERSPEECH 2022
Conference paper

VQ-T: RNN Transducers using Vector-Quantized Prediction Network States

Abstract

Beam search, the dominant ASR decoding algorithm for end-to-end models, generates tree-structured hypotheses. However, recent studies have shown that decoding with hypothesis merging can achieve a more efficient search with comparable or better performance. The full context maintained by recurrent networks, however, is not compatible with hypothesis merging. We propose to use vector-quantized long short-term memory units (VQ-LSTM) in the prediction network of RNN transducers. By training the discrete representation jointly with the ASR network, hypotheses can be actively merged for lattice generation. Our experiments on the Switchboard corpus show that the proposed VQ RNN transducers improve ASR performance over transducers with regular prediction networks while also producing denser lattices with a very low oracle word error rate (WER) for the same beam size. Additional language model rescoring experiments also demonstrate the effectiveness of the proposed lattice generation scheme.
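The core idea of the abstract — discretizing prediction-network states so that hypotheses with matching states can be merged during search — can be illustrated with a minimal sketch. This is not the paper's implementation; the codebook, merge key, and function names below are illustrative assumptions, and real transducer decoding would merge scores rather than simply keep the best one.

```python
import numpy as np

def quantize(state, codebook):
    """Map a continuous state vector to its nearest codebook index.
    This mirrors the VQ idea: a discrete state makes equality testable,
    which is what enables hypothesis merging. (Illustrative only.)"""
    dists = np.linalg.norm(codebook - state, axis=1)
    return int(np.argmin(dists))

def merge_hypotheses(hyps, codebook):
    """Merge hypotheses whose (last token, quantized state) keys match,
    keeping the highest-scoring one. Each hypothesis is
    (token_sequence, log_score, prediction_network_state)."""
    best = {}
    for tokens, score, state in hyps:
        key = (tokens[-1] if tokens else None, quantize(state, codebook))
        if key not in best or score > best[key][1]:
            best[key] = (tokens, score, state)
    return list(best.values())

# Toy example: the first two hypotheses fall in the same codebook cell
# and share a last token, so they merge; the third survives separately.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
hyps = [(["a"], -1.2, np.array([0.1, -0.05])),
        (["a"], -1.5, np.array([-0.02, 0.08])),
        (["b"], -0.9, np.array([0.9, 1.1]))]
merged = merge_hypotheses(hyps, codebook)
```

With continuous LSTM states, the equality test in the merge key essentially never fires, which is why the paper trains a discrete representation jointly with the ASR network.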

Date

18 Sep 2022
