In this paper we focus on several techniques that improve deep neural network (DNN) acoustic modeling for low-resource languages. We explore the use of different features, such as fundamental-frequency variation (FFV) and tonal features, and the normalization of these features for DNN training. Specifically, we study the impact of these features in conjunction with a tonal lexicon and several neural network architectures, including hybrid and bottleneck feature-based configurations. We also explore the use of untranscribed data, and ways to balance it with transcribed data, to enhance the performance of the best-performing large-vocabulary continuous speech recognition (LVCSR) system. Results are presented in the context of the IARPA Babel program, on development languages from the Babel option period as well as on the surprise language from the base period of the program. We show that these improved methods can provide up to a 15% relative reduction in word error rate (WER), along with improvements in keyword search, on the languages explored under the Babel program.