Several explainable AI algorithms have been proposed to make machine learning models more interpretable and trustworthy. However, despite numerous methodological advances, a persistent gap remains between what researchers develop and what business users need. In this work, we aim to bridge this gap for an AI system that predicts the remaining useful life (RUL) of an aircraft engine from time series data collected from multiple sensors. We propose a novel approach that computes easily understandable explanations by fusing two explainers in sequence, wherein the explanations of the first explainer are themselves explained by the second. We use this approach to build a global, post-hoc, model-agnostic explainer for AI models that ingest multivariate time series data. Our approach fuses a local explainer that yields feature importance weights with a directly interpretable model that outputs global rules. Experimental results on the open-source C-MAPSS dataset demonstrate that the proposed two-stage explainer computes global explanations that are accessible to business users and that shed light on how the behavior of individual sensors, and of groups of sensors, impacts the engine's remaining useful life.
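The two-stage pipeline described above can be sketched in miniature. The abstract does not name the specific explainers used, so everything below is an illustrative assumption: stage one is a simple perturbation-based local explainer (a LIME-style linear fit around each instance) applied to a generic black-box regressor, and stage two is a shallow decision tree (a directly interpretable model) fit over the inputs to explain the stage-one importance weights as global rules. The synthetic `sensor_*` features and the `local_importance` helper are hypothetical stand-ins, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Synthetic stand-in for per-engine sensor features and an RUL-like target.
X = rng.normal(size=(500, 4))
y = 100 - 5 * X[:, 0] - 3 * np.maximum(X[:, 1], 0) + rng.normal(scale=0.5, size=500)

# A generic black-box model; the approach is model-agnostic, so any
# regressor with a predict() method could take its place.
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def local_importance(x, model, eps=0.1, n=50):
    """Stage 1 (assumed): perturbation-based local feature-importance weights.

    Fits a local linear surrogate to the model's predictions around x and
    returns its slope coefficients, one weight per feature.
    """
    Z = x + eps * rng.normal(size=(n, x.size))
    preds = model.predict(Z)
    coef, *_ = np.linalg.lstsq(np.c_[Z, np.ones(n)], preds, rcond=None)
    return coef[:-1]  # drop the intercept term

# Stage 1: compute local explanations for a sample of instances.
W = np.array([local_importance(x, black_box) for x in X[:200]])

# Stage 2: "explain the explanations" — fit a shallow, directly
# interpretable tree that maps inputs to the stage-1 importance of one
# feature, then read off its global rules.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X[:200], W[:, 0])
print(export_text(tree, feature_names=[f"sensor_{i}" for i in range(4)]))
```

The printed tree rules describe, globally, under which input conditions the black box leans on a given sensor, which is the kind of business-user-facing output the two-stage fusion is meant to produce.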