Marina Danilevsky, Shipi Dhanorkar, et al.
KDD 2021
Advanced machine learning models have become widely adopted across domains due to their strong performance. However, their complexity often makes them difficult to interpret, which is a significant limitation in high-stakes decision-making scenarios where explainability is crucial. In this study we propose eXplainable Random Forest (XRF), an extension of the Random Forest model that, crucially, incorporates during training explainability constraints stemming from the users' view of the problem and its feature space. While numerous methods have been proposed for explaining machine learning models, most are applicable only after the model has been trained. Furthermore, the explanations they provide may involve features that are not human-understandable, which in turn hinders the user's comprehension of the model's reasoning. Our proposed method addresses both limitations. We apply it systematically to six public benchmark datasets and demonstrate that XRF models balance the trade-off between model performance and the users' explainability constraints.
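The abstract does not detail XRF's exact constraint mechanism, but the simplest form of a training-time explainability constraint is restricting every tree to split only on features the user has marked as understandable. The sketch below (pure Python, hypothetical function names, depth-1 stumps standing in for full trees) illustrates that idea; it is an assumption-laden illustration, not the paper's algorithm.

```python
# Hedged sketch: a bagged ensemble of decision stumps where each stump may
# only split on user-approved ("understandable") feature indices. This is
# an illustration of a training-time explainability constraint, NOT the
# XRF method itself, whose mechanism is not specified in the abstract.
import random

def train_stump(X, y, allowed):
    """Find the best (feature, threshold) split among allowed features,
    minimizing misclassification error; predicts 1 when value >= threshold."""
    best = None  # (error, feature, threshold)
    for f in allowed:
        for t in sorted({row[f] for row in X}):
            pred = [1 if row[f] >= t else 0 for row in X]
            err = sum(p != yi for p, yi in zip(pred, y))
            if best is None or err < best[0]:
                best = (err, f, t)
    return best[1], best[2]

def train_forest(X, y, allowed, n_trees=5, seed=0):
    """Train n_trees stumps on bootstrap samples, constrained to `allowed`."""
    rng = random.Random(seed)
    forest, n = [], len(X)
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap resample
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        forest.append(train_stump(Xb, yb, allowed))
    return forest

def predict(forest, row):
    """Majority vote over the constrained stumps."""
    votes = sum(1 if row[f] >= t else 0 for f, t in forest)
    return 1 if votes * 2 >= len(forest) else 0
```

Because the constraint is enforced during training rather than post hoc, every split the ensemble makes is already expressed in user-understandable features; the trade-off the abstract mentions appears here as whatever accuracy is lost by excluding the disallowed features.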
Gentiana Rashiti, Kumudu Geethan Karunaratne, et al.
ECAI 2024
Lingfei Wu, Jian Pei, et al.
AAAI 2023
Erick Oduor, Kun Qian, et al.
IUI 2020