Visual Question Answering (VQA) aims to answer free-form natural-language questions based on the visual clues in a given image. It is a difficult problem, as it requires understanding fine-grained structured information in both the language and the image for compositional reasoning. To enable such compositional reasoning, recent works introduce scene graphs into VQA. However, the generated scene graphs are usually quite noisy, which greatly limits question-answering performance. This paper therefore proposes to refine the scene graphs to improve their effectiveness. Specifically, we present a novel Scene Graph Refinement network (SGR), which introduces a transformer-based refinement network to enhance the object and relation features for better classification. Moreover, since the question provides valuable clues for judging whether a given subject-predicate-object triplet is helpful, the SGR network exploits the semantic information in the question to select the relations most relevant to answering it. Extensive experiments on the GQA benchmark demonstrate the effectiveness of our method.
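The question-guided relation selection described above can be illustrated with a minimal sketch. This is not the paper's implementation (SGR uses learned transformer layers); it only shows the underlying idea of scoring each relation feature against a question embedding with a scaled dot product and keeping the top-k relations. All names, dimensions, and the deterministic toy features below are illustrative assumptions.

```python
import numpy as np

def select_relevant_relations(question_vec, relation_feats, top_k=2):
    """Score each <subject, predicate, object> relation feature against the
    question embedding and keep the indices of the top-k most relevant ones.

    Illustrative sketch only: the relevance score here is a plain scaled
    dot product followed by a softmax, standing in for learned attention.
    """
    d = question_vec.shape[0]
    # scaled dot-product relevance score per relation, shape (num_relations,)
    scores = relation_feats @ question_vec / np.sqrt(d)
    # softmax to obtain a normalized relevance distribution
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # indices of the top-k relations by relevance
    top_idx = np.argsort(weights)[::-1][:top_k]
    return top_idx, weights

# Toy example: 4 candidate relations with 8-d one-hot features,
# and a question embedding aligned with relation 2.
relations = np.eye(4, 8)
question = np.zeros(8)
question[2] = 1.0
idx, w = select_relevant_relations(question, relations, top_k=2)
print(idx[0])  # relation 2 ranks first by construction
```

In a learned model the dot product would be replaced by attention between question tokens and relation embeddings, but the selection step (rank and keep the most question-relevant triplets) is the same in spirit.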