Conference paper

Reducing infrequent-token perplexity via variational corpora

Abstract

The recurrent neural network (RNN) is recognized as a powerful language model (LM). We investigate its performance portfolio more deeply and find that it performs well on frequent grammatical patterns but much less so on infrequent terms. Such a portfolio is expected and desirable in applications like autocomplete, but it is less useful in social content analysis, where many creative, unexpected usages occur (e.g., URL insertion). We adapt a generic RNN model and show that, with variational training corpora and epoch unfolding, the model improves its performance on the task of URL insertion suggestion.
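
The abstract does not spell out how the variational corpora are built. Purely as an illustrative sketch of one possible reading, the Python snippet below regenerates a perturbed training corpus for each epoch by stochastically oversampling sentences that contain infrequent tokens, so successive epochs stress rare-token contexts differently. The names (make_corpus_variant, train, rare_threshold), the oversampling rule, and the reading of "epoch unfolding" as drawing a fresh corpus variant per unfolded epoch are all assumptions, not taken from the paper; the RNN update itself is left out.

import random
from collections import Counter

def make_corpus_variant(corpus, rare_vocab, boost=3, rng=random):
    """Build one epoch's training corpus (hypothetical scheme).

    Sentences containing rare tokens are duplicated a random number of
    times (up to `boost`), so each epoch places a different emphasis on
    infrequent-token contexts.
    """
    variant = []
    for sentence in corpus:
        copies = rng.randint(1, boost) if any(t in rare_vocab for t in sentence) else 1
        variant.extend([sentence] * copies)
    rng.shuffle(variant)
    return variant

def train(corpus, num_epochs=5, rare_threshold=2, seed=0):
    rng = random.Random(seed)
    counts = Counter(tok for sent in corpus for tok in sent)
    rare_vocab = {tok for tok, c in counts.items() if c <= rare_threshold}
    for epoch in range(num_epochs):
        # Each epoch trains on a freshly sampled corpus variant.
        variant = make_corpus_variant(corpus, rare_vocab, rng=rng)
        # rnn_lm.train_one_epoch(variant)  # RNN LM update, omitted here
        print(f"epoch {epoch}: {len(variant)} sentences")

if __name__ == "__main__":
    toy = [["the", "cat", "sat"], ["check", "http://example.com", "now"]]
    train(toy)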

Date

26 Jul 2015

Publication

ACL-IJCNLP 2015
