Explainability is the degree to which a human can understand the cause of a decision made by a model. Various notions of explainability have been studied in supervised learning paradigms such as classification and regression. In this work we formalise the notions of local and global explanations in the context of time series forecasting. We propose a robust, interpretable, feature-based algorithm to explain the forecast of any forecaster. The method is model-agnostic and requires access only to the model's fit and forecast methods. We evaluate the explanations in terms of sensitivity, faithfulness, and complexity. For robustness, we aggregate multiple explanations computed from bootstrapped versions of the time series.
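The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the naive forecaster, the leave-one-out perturbation importance, and the noise-based bootstrap are all stand-in assumptions chosen to show the model-agnostic interface (only a forecast call is needed) and the aggregation of explanations over resampled series.

```python
import numpy as np

def naive_forecaster(history, horizon):
    """Stand-in forecaster (naive last-value forecast); any model
    exposing only fit/forecast behaviour could be plugged in here."""
    return np.full(horizon, history[-1])

def loo_importance(series, forecast, horizon=1):
    """Leave-one-out perturbation importance (illustrative choice):
    how much the forecast changes when each observation is replaced
    by the series mean."""
    base = forecast(series, horizon)
    mean = series.mean()
    scores = np.empty(len(series))
    for i in range(len(series)):
        pert = series.copy()
        pert[i] = mean
        scores[i] = np.abs(forecast(pert, horizon) - base).sum()
    return scores

def robust_importance(series, forecast, n_boot=50, noise_scale=0.1, seed=0):
    """Aggregate importances over noise-perturbed replicates of the
    series (a simple proxy for bootstrapping), yielding a more
    stable explanation."""
    rng = np.random.default_rng(seed)
    sigma = noise_scale * series.std()
    reps = [loo_importance(series + rng.normal(0.0, sigma, len(series)), forecast)
            for _ in range(n_boot)]
    return np.mean(reps, axis=0)
```

With the naive last-value forecaster, only the final observation affects the forecast, so the aggregated explanation concentrates all importance on the last index; a richer forecaster would spread importance across the history it actually uses.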