Our focus in this work is on adapting eXplainable AI (XAI) techniques to interpret business process execution results. Such adaptation is required because conventional use of these techniques relies on a surrogate machine learning model trained on historical process execution logs. Being data-driven, the surrogate's faithfulness to the real business process model determines the adequacy of the explanations derived from it; hence, using such techniques as-is does not guarantee adherence to the target business process being explained. We present a replicable and reproducible business-process-model-driven approach that extends LIME, a conventional model-agnostic XAI tool, to cope with business process constraints. Our results show that our extended LIME approach produces correct and significantly more adequate explanations than those given by LIME as-is.