Recently, there have been growing concerns about the interpretability of deep learning models. However, few interpretable models have been applied to duplicate question detection, the task of finding semantically equivalent question pairs in question answering forums. In this paper, we propose two modularized, interpretable deep neural network models for this task, both based on an attention mechanism. During word preprocessing, a filter operation is employed to enhance the task-relevant information contained in the pre-trained word embeddings. For the word matching and sentence representation processes, vanilla attention and structured attention mechanisms are utilized, respectively. Benefiting from the interpretability of attention techniques, our models can illustrate how words match between sentence pairs and which aspects of the sentences are extracted to influence the final decision. The attention visualization furnishes detailed representations at the word and sentence level, and experimental results show that our models are comparable with other reported models.
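To make the word-matching step concrete, the following is a minimal sketch of vanilla cross-attention between two questions; the function name, shapes, and dot-product scoring are our own illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def word_match_attention(A, B):
    """A: (la, d) word embeddings of question 1; B: (lb, d) of question 2.
    Returns the (la, lb) attention matrix and B-aligned representations of A."""
    scores = A @ B.T                 # (la, lb) dot-product similarities
    attn = softmax(scores, axis=1)   # each row: distribution over B's words
    aligned = attn @ B               # (la, d) soft alignment of A onto B
    return attn, aligned

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 8))   # toy embeddings for a 4-word question
B = rng.normal(size=(5, 8))   # toy embeddings for a 5-word question
attn, aligned = word_match_attention(A, B)
# Each row of `attn` sums to 1 and can be visualized as a word-alignment heatmap.
```

The rows of the attention matrix are exactly the quantities that such models visualize to show which words of one question each word of the other attends to.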