Deep learning (DL) techniques have recently been proposed for enhancing the accuracy of network intrusion detection systems (NIDS). However, keeping DL-based detection models up to date requires large amounts of newly labeled training data, which are often expensive and time-consuming to collect. In this paper, we investigate the viability of transfer learning (TL), an approach that transfers learned features and knowledge from a trained source model to a target model while requiring only minimal new training data. We compare the performance of a NIDS model trained using TL with that of a NIDS model trained from scratch, and we show that TL enables detection models to identify new attacks substantially more accurately when only limited labeled training data is available.
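The transfer-learning workflow described above can be illustrated with a minimal sketch. The toy network, data, and function names below are illustrative assumptions, not the paper's actual NIDS architecture: a small model is trained on a source task, its feature-extraction weights are copied into a target model and frozen, and only the output layer is fine-tuned on a small labeled target set.

```python
import copy

import numpy as np

rng = np.random.default_rng(0)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


class TinyNet:
    """Hypothetical stand-in for a DL-based NIDS model: one tanh hidden
    layer (feature extractor) followed by a logistic output unit."""

    def __init__(self, d_in, d_hid, rng):
        self.W1 = rng.normal(0.0, 0.5, (d_in, d_hid))  # feature weights
        self.w2 = rng.normal(0.0, 0.5, d_hid)          # output weights

    def hidden(self, X):
        return np.tanh(X @ self.W1)

    def predict_proba(self, X):
        return sigmoid(self.hidden(X) @ self.w2)


def train(model, X, y, epochs=300, lr=0.5, freeze_features=False):
    """Gradient descent on binary cross-entropy; with freeze_features=True
    only the output layer is updated (the transfer-learning step)."""
    for _ in range(epochs):
        H = model.hidden(X)
        err = sigmoid(H @ model.w2) - y       # dL/d(logit) per sample
        model.w2 -= lr * H.T @ err / len(y)
        if not freeze_features:
            dH = np.outer(err, model.w2) * (1.0 - H**2)  # tanh backprop
            model.W1 -= lr * X.T @ dH / len(y)
    return model


# Source task: plentiful labeled data (e.g., known attack traffic).
X_src = rng.normal(size=(200, 5))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(float)
source = train(TinyNet(5, 8, rng), X_src, y_src)

# Target task: a related detection problem with only 20 labeled samples
# (e.g., a new attack class with scarce labels).
X_tgt = rng.normal(size=(20, 5))
y_tgt = (X_tgt[:, 0] + 0.8 * X_tgt[:, 1] > 0).astype(float)

# Transfer: copy the trained source model, freeze its feature weights,
# and fine-tune only the output layer on the small target set.
tl_model = train(copy.deepcopy(source), X_tgt, y_tgt, freeze_features=True)
```

A model trained from scratch on the same 20 target samples (i.e., `train(TinyNet(5, 8, rng), X_tgt, y_tgt)`) would serve as the baseline for the comparison the abstract describes.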