Phase change memory (PCM)-based 'Analog-AI' accelerators are gaining importance for inference in edge applications due to the energy efficiency offered by in-memory computing. Nevertheless, noise sources inherent to PCM devices cause inaccuracies in the deep neural network (DNN) weight values, which can lead to severe degradation in model accuracy. To address this, we propose two techniques to improve the noise resiliency of DNNs: 1) drift regularization (DR) and 2) multiplicative noise training (MNT). We evaluate convolutional networks trained on image classification and recurrent neural networks trained on language modeling, and show that our techniques improve model accuracy by up to 12% over one month of drift.
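The multiplicative noise training idea named above can be sketched minimally: during each training forward pass, the weights are perturbed by multiplicative Gaussian noise, modeling the relative conductance error of PCM cells, so that the learned solution is robust to such perturbations at inference time. The sketch below uses a plain linear model and a hypothetical noise-scale hyperparameter `sigma`; it is an illustration under those assumptions, not the paper's exact training procedure.

```python
import numpy as np

def multiplicative_noise(weights, sigma, rng):
    """Perturb weights multiplicatively: w -> w * (1 + eps), eps ~ N(0, sigma).

    Models a relative (per-device) weight error, as caused by PCM
    conductance noise. `sigma` is a hypothetical hyperparameter here,
    not a value taken from the paper.
    """
    return weights * (1.0 + rng.normal(0.0, sigma, size=weights.shape))

def train_with_mnt(X, y, sigma=0.1, lr=0.01, steps=500, seed=0):
    """Train a linear model with fresh multiplicative weight noise
    injected on every forward pass (a minimal MNT-style loop)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w_noisy = multiplicative_noise(w, sigma, rng)  # noisy forward pass
        pred = X @ w_noisy
        # Gradient of the mean-squared error, computed through the
        # noisy weights but applied to the clean ones.
        grad = X.T @ (pred - y) / len(y)
        w -= lr * grad
    return w

# Usage: recover a known weight vector despite the injected noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0])
w = train_with_mnt(X, y)
```

Because the noise is multiplicative, large-magnitude weights are penalized more, which pushes training toward solutions whose predictions tolerate proportional weight perturbations.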