We propose Quantized Dialog, a novel approach to building conversational systems. The methodology relies on semantic quantization and clustering of dialog utterances to reduce the dialog interaction space, making prediction of the next utterance more tractable. The effectiveness of this method is showcased on the goal-oriented dataset of the sixth Dialog System Technology Challenge (DSTC6). We compare the performance of Quantized Dialog, based on an n-gram language model for next-utterance prediction, against models that employ popular deep-learning architectures, such as multi-layer neural network classifiers, memory networks, long short-term memory recurrent neural networks, and convolutional neural networks. The experimental results demonstrate the promising potential of the quantized approach for goal-oriented dialog prediction.
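The core idea of the abstract — quantize utterances to cluster IDs, then predict the next utterance with an n-gram model over those IDs — can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the centroids, embeddings, and helper names (`quantize`, `train_bigram`, `predict_next`) are all illustrative assumptions, with hand-made 2-D vectors standing in for real sentence embeddings and a bigram model standing in for a general n-gram model.

```python
import math
from collections import Counter, defaultdict

def quantize(vec, centroids):
    """Map an utterance vector to the index of its nearest centroid (its cluster ID)."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(vec, centroids[i]))

def train_bigram(cluster_sequences):
    """Count transitions between consecutive cluster IDs across all dialogs."""
    counts = defaultdict(Counter)
    for seq in cluster_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_cluster):
    """Predict the most frequent follow-up cluster for the previous cluster ID."""
    return counts[prev_cluster].most_common(1)[0][0]

# Toy centroids (2-D for illustration) and two dialogs given as
# sequences of utterance embeddings.
centroids = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
dialogs = [
    [(0.1, 0.0), (0.9, 1.1), (2.1, -0.1)],
    [(0.0, 0.1), (1.1, 0.9), (1.9, 0.2)],
]

# Quantization step: each dialog becomes a sequence of cluster IDs.
sequences = [[quantize(v, centroids) for v in d] for d in dialogs]

# Prediction step: an n-gram (here bigram) model over the reduced space.
model = train_bigram(sequences)
print(predict_next(model, 0))  # cluster 1 always follows cluster 0 here → 1
```

Quantization shrinks the prediction target from the open set of all possible utterance strings to a small fixed set of cluster IDs, which is what makes a simple count-based n-gram model viable for next-utterance prediction.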