Publication
NAACL-HLT 2013
Conference paper

Discriminative training of 150 million translation parameters and its application to pruning

Abstract

Until recently, the application of discriminative training to log-linear based statistical machine translation has been limited to tuning the weights of a small number of features or to training features with a small number of parameters. In this paper, we propose to scale up the discriminative training method of He and Deng (2012) to train features with 150 million parameters, an order of magnitude more than in previously published work, and to apply discriminative training to redistribute the probability mass that is lost due to model pruning. Experimental results on the NIST MT06 set confirm the effectiveness of our proposals over a strong baseline.
