In this paper, we study the problem of generating pronunciations for training and decoding with an ASR system for Pashto, in the context of a speech-to-speech translation system developed for the TRANSTAC program. As with other low-resourced languages, only a limited amount of acoustic training data was available, together with a corresponding set of manually produced vowelized pronunciations. We augment this data with other sources, but lack pronunciations for unseen words in the new audio and associated text. We investigate four methods for generating these pronunciations, or baseforms: a heuristic grapheme-to-phoneme map, manual annotation, and two methods based on statistical models. The first of these uses a joint Maximum Entropy N-gram model, while the other is based on a log-linear statistical machine translation model. We report results on a state-of-the-art, discriminatively trained ASR system and show that the manual and statistical methods improve over the grapheme-to-phoneme map. Moreover, we demonstrate that the automatic statistical methods can perform as well as or better than manual generation by native speakers, even when a significant number of high-quality, manually generated pronunciations are available beyond those provided by the TRANSTAC program. © 2011 IEEE.
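As a minimal illustration of the heuristic grapheme-to-phoneme baseline mentioned above, the sketch below maps each grapheme of a word to a default phoneme via a lookup table. The table and phoneme symbols here are toy English-like assumptions for illustration only, not the paper's actual Pashto mapping rules:

```python
# Toy heuristic grapheme-to-phoneme (G2P) map: hypothetical symbols,
# not the actual Pashto mapping used in the paper.
G2P_MAP = {
    "a": "AH", "b": "B", "k": "K", "t": "T", "s": "S", "h": "HH",
}

def baseform(word: str) -> list[str]:
    """Generate a pronunciation (baseform) by per-grapheme lookup.

    Unseen graphemes fall back to a placeholder symbol, mirroring the
    limitation that a fixed map cannot handle context-dependent or
    unlisted characters.
    """
    return [G2P_MAP.get(g, "UNK") for g in word.lower()]

print(baseform("bat"))  # ['B', 'AH', 'T']
```

Such a one-grapheme-to-one-phoneme map ignores context, which is why data-driven models such as a joint N-gram model can outperform it for languages with irregular spelling-to-sound correspondences.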