Self-critical sequence training for image captioning
Steven Rennie, Etienne Marcheret, et al.
CVPR 2017
In recent years, server-based automatic speech recognition (ASR) systems have become ubiquitous, and unprecedented amounts of speech data are now available for system training. This abundance of training data has greatly improved ASR accuracy, but how to maximize ASR performance in new domains, or in domains where ASR systems currently fail (and which therefore yield little usable data), remains an important open question. In this paper, we propose a framework for mapping large speech corpora to different acoustic environments, so that existing data can be transformed to build high-quality acoustic models for other acoustic domains. In experiments on a large corpus, the proposed method reduced errors by 18.6%.
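The abstract does not describe the mapping algorithm itself, so the following is only a rough sketch of the general idea of projecting source-domain speech into a target acoustic environment, here via room-impulse-response convolution plus additive noise at a chosen SNR. All names (map_to_target_environment, snr_db, and so on) are hypothetical illustrations, not the paper's method.

    # Illustrative sketch only; not the authors' algorithm.
    import numpy as np
    from scipy.signal import fftconvolve

    def map_to_target_environment(speech, rir, noise, snr_db=15.0):
        """Transform a source-domain waveform toward a target acoustic domain.

        speech : 1-D float array, source-domain utterance
        rir    : 1-D float array, room impulse response of the target environment
        noise  : 1-D float array, background noise from the target environment
        snr_db : desired signal-to-noise ratio after mixing
        """
        # Simulate the target room's reverberation.
        reverberant = fftconvolve(speech, rir, mode="full")[: len(speech)]

        # Loop/trim the noise to match the utterance length.
        reps = int(np.ceil(len(reverberant) / len(noise)))
        noise_seg = np.tile(noise, reps)[: len(reverberant)]

        # Scale the noise to reach the requested SNR, then mix.
        sig_pow = np.mean(reverberant ** 2)
        noise_pow = np.mean(noise_seg ** 2) + 1e-12
        scale = np.sqrt(sig_pow / (noise_pow * 10.0 ** (snr_db / 10.0)))
        return reverberant + scale * noise_seg

In a pipeline along these lines, a large source corpus would be passed through such a transform before training an acoustic model for the target domain.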
Osamu Ichikawa, Masafumi Nishimura
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Takashi Fukuda, Osamu Ichikawa, et al.
INTERSPEECH 2014
Osamu Ichikawa, Takashi Fukuda, et al.
IEEE Journal of Selected Topics in Signal Processing