Publication
NAACL-HLT 2012
Workshop paper

Deep neural network language models

Abstract

In recent years, neural network language models (NNLMs) have shown improvements in both perplexity and word error rate (WER) over conventional n-gram language models. Most NNLMs are trained with one hidden layer. Deep neural networks (DNNs) with more hidden layers have been shown to capture higher-level discriminative information about input features, and thus produce better-performing models. Motivated by the success of DNNs in acoustic modeling, we explore deep neural network language models (DNN LMs) in this paper. Results on a Wall Street Journal (WSJ) task demonstrate that DNN LMs offer improvements over a single-hidden-layer NNLM. Furthermore, our preliminary results are competitive with a model M language model, considered to be one of the current state-of-the-art techniques for language modeling.
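The architecture at issue can be pictured as a feed-forward NNLM whose single hidden layer is replaced by a stack of hidden layers. The sketch below is illustrative only and is not the authors' implementation; the vocabulary size, context length, embedding dimension, and layer widths are assumed values chosen for the example.

```python
# Illustrative sketch (not the paper's implementation): a feed-forward NNLM
# generalized to n_hidden stacked hidden layers, i.e. a DNN LM.
# All sizes below are assumed values.
import torch
import torch.nn as nn

class FeedForwardLM(nn.Module):
    def __init__(self, vocab_size=10000, context=3, embed_dim=120,
                 hidden_dim=500, n_hidden=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # word feature vectors
        layers = []
        in_dim = context * embed_dim                      # concatenated history
        for _ in range(n_hidden):                         # n_hidden=1 -> standard NNLM
            layers += [nn.Linear(in_dim, hidden_dim), nn.Tanh()]
            in_dim = hidden_dim
        self.hidden = nn.Sequential(*layers)
        self.out = nn.Linear(in_dim, vocab_size)          # scores over the next word

    def forward(self, history):                           # history: (batch, context) word ids
        h = self.embed(history).flatten(start_dim=1)
        return torch.log_softmax(self.out(self.hidden(h)), dim=-1)

# Usage: log-probabilities of the next word given a 3-word history
# (the word ids here are arbitrary placeholders).
lm = FeedForwardLM()
log_probs = lm(torch.tensor([[12, 7, 391]]))
print(log_probs.shape)  # torch.Size([1, 10000])
```

Setting n_hidden to 1 recovers the conventional single-hidden-layer NNLM, so the depth of the stack is the only architectural variable changed in this sketch.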
