Graph neural networks (GNNs) are powerful graph-based deep-learning models that have been applied to a wide range of real-world problems, including natural language processing, drug discovery, and recommendation systems. However, combining feature information with combinatorial graph structures has led to complex, non-linear GNN models. This complexity makes it increasingly difficult to understand both how a GNN works and why it produces a given prediction. In recent years, researchers have developed a variety of explainer methods to interpret GNNs. Explainable GNNs not only improve model security but also enhance trust in their predictions. In this work, we provide a comprehensive survey of explainer approaches for GNNs across various graph-related tasks. We categorize the literature by the proposed explainer methods, the types of explanations produced, and their applications.