Graph neural networks (GNNs) have achieved great success in graph representation learning, which has tremendously facilitated various real-world applications. Nevertheless, the performance of GNNs deteriorates significantly as depth increases. Recent research has attributed this phenomenon to the oversmoothing issue, in which the learned node representations become highly indistinguishable. In this paper, we observe a new issue in deeper GNNs, i.e., feature overcorrelation, and perform a thorough study to deepen our understanding of this issue. In particular, we demonstrate the existence of feature overcorrelation in deeper GNNs, reveal potential reasons leading to this issue, and validate that overcorrelation and oversmoothing, though related, present different patterns. Since feature overcorrelation indicates that GNNs encode less information and can harm downstream tasks, it is of great significance to mitigate it. Therefore, we propose DeCorr, a general framework that effectively reduces feature correlation in deeper GNNs. Experimental results on various datasets demonstrate that DeCorr helps train deeper GNNs effectively and is complementary to methods tackling oversmoothing.
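To make the notion of feature overcorrelation concrete, the sketch below measures the mean absolute pairwise Pearson correlation among the feature dimensions of a node representation matrix. This is only an illustration of the general idea, not the paper's exact metric or the DeCorr method; the function name and the synthetic data are our own assumptions.

```python
import numpy as np

def feature_correlation(X: np.ndarray) -> float:
    """Mean absolute pairwise Pearson correlation among the d feature
    dimensions (columns) of an n-by-d representation matrix X.
    Values near 1 mean the dimensions are highly redundant."""
    d = X.shape[1]
    C = np.corrcoef(X, rowvar=False)        # d x d correlation matrix over columns
    off_diag = C[~np.eye(d, dtype=bool)]    # drop the diagonal self-correlations
    return float(np.abs(off_diag).mean())

rng = np.random.default_rng(0)

# Independent random features: dimensions are nearly uncorrelated (value near 0).
low = feature_correlation(rng.standard_normal((1000, 16)))

# Features that are noisy copies of one shared signal: highly correlated (near 1),
# loosely mimicking what deep message passing can do to learned dimensions.
base = rng.standard_normal((1000, 1))
high = feature_correlation(base + 0.01 * rng.standard_normal((1000, 16)))

print(low, high)  # the overcorrelated representation encodes far less information
```

Intuitively, when `high` approaches 1, the sixteen dimensions behave like one dimension, which is why overcorrelated representations can hurt downstream tasks.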