Recently, several deep learning based models have been proposed for end-to-end learning of dialogs. While these models can be trained from data without the need for any additional annotations, they are hard to interpret. On the other hand, traditional state-based dialog systems use discrete dialog states that are easy to interpret; however, these states must be hand-crafted and annotated in the data. To achieve the best of both worlds, we propose the Latent State Tracking Network (LSTN), which learns an interpretable model in an unsupervised manner. The model defines a discrete latent variable at each turn of the conversation that can take a finite set of values; these variables correspond to the state of the dialog after each turn. Since the conversations are not labelled with dialog states, we train our model with the EM algorithm in an unsupervised manner. In our experiments, we show that LSTN achieves interpretability in dialog models with performance comparable to end-to-end approaches. This interpretability allows us to edit the model and improve it.
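The abstract describes the training procedure only at a high level. As a rough, hypothetical illustration of the core idea, one discrete latent state per turn fit with EM, the sketch below runs Baum-Welch-style EM on a toy HMM over pre-computed turn embeddings; the function name `em_train`, the Gaussian emission model, and all shapes are assumptions made for illustration, not the paper's actual LSTN architecture.

```python
# Illustrative sketch only: LSTN itself is a neural model, but the core idea
# (a discrete latent dialog state per turn, learned with EM) can be shown on
# a toy HMM over turn embeddings. Names and shapes here are hypothetical.
import numpy as np

def em_train(turn_embs, K, n_iters=20, seed=0):
    """Fit K discrete dialog states to one dialog's turn embeddings with EM.

    turn_embs: (T, D) array, one embedding per dialog turn (assumed given).
    Returns gamma (T, K): the posterior over latent states at each turn.
    """
    rng = np.random.default_rng(seed)
    T, D = turn_embs.shape
    means = turn_embs[rng.choice(T, K, replace=False)]  # state emission means
    trans = np.full((K, K), 1.0 / K)                    # state transition probs
    init = np.full(K, 1.0 / K)                          # initial state probs

    for _ in range(n_iters):
        # E-step: forward-backward gives the posterior over the discrete
        # latent state at each turn, using spherical-Gaussian emissions.
        log_lik = -0.5 * ((turn_embs[:, None, :] - means[None]) ** 2).sum(-1)
        lik = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
        alpha = np.zeros((T, K)); beta = np.ones((T, K))
        alpha[0] = init * lik[0]; alpha[0] /= alpha[0].sum()
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ trans) * lik[t]
            alpha[t] /= alpha[t].sum()
        for t in range(T - 2, -1, -1):
            beta[t] = trans @ (lik[t + 1] * beta[t + 1])
            beta[t] /= beta[t].sum()
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)       # per-turn state posterior

        # M-step: re-estimate emissions and transitions from soft counts.
        means = (gamma.T @ turn_embs) / gamma.sum(axis=0)[:, None]
        xi = alpha[:-1, :, None] * trans[None] * (lik[1:] * beta[1:])[:, None, :]
        xi /= xi.sum(axis=(1, 2), keepdims=True)
        trans = xi.sum(axis=0)
        trans /= trans.sum(axis=1, keepdims=True)
        init = gamma[0]
    return gamma

# Toy usage: 12 turns of 8-dim embeddings, 3 latent dialog states.
gamma = em_train(np.random.default_rng(1).standard_normal((12, 8)), K=3)
print(gamma.argmax(axis=1))  # most likely discrete state after each turn
```

Because the per-turn states are discrete and finite, the argmax sequence above is directly readable, which is the interpretability property the abstract claims; in the actual model the emission and transition components would be parameterized by neural networks rather than the fixed Gaussian/tabular forms used in this toy.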