Improving the Performance and Explainability of Mammogram Classifiers with Local Annotations
Abstract
Cancer prediction models, which deeply impact human lives, must provide explanations for their predictions. We study a simple extension of a cancer mammogram classifier, trained with image-level annotations, that enables the built-in generation of prediction explanations. This extension also allows the classifier to learn from local annotations of malignant findings, when these are available. We tested the extended classifier with varying percentages of local annotations in the training data. We evaluated the generated explanations by their level of agreement with (i) local annotations of malignant findings, and (ii) perturbation-based explanations produced by the LIME method, which estimates the effect of each image segment on the classification score. Our results demonstrate an improvement in classification performance and explainability when local annotations are added to the training data. We observe that training with only 20-40% of the local annotations is sufficient to achieve improved performance and explainability, comparable to those of a classifier trained with the entire set of local annotations.
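As a rough illustration of the LIME-based evaluation mentioned above (not the paper's implementation), the sketch below uses the lime package's image explainer to highlight the image segments most responsible for a malignancy score and compares them to a local annotation mask. The names predict_proba, annotation_mask, and the IoU-style agreement measure are hypothetical stand-ins introduced here for clarity.

```python
# Minimal sketch, assuming a classifier function and an annotation mask exist.
import numpy as np
from lime import lime_image

def lime_agreement(image, predict_proba, annotation_mask, num_samples=1000):
    """Return an IoU-style agreement between LIME's salient segments and a
    binary local annotation mask. `image` is an HxWx3 array; `predict_proba`
    maps a batch of images to class probabilities (hypothetical inputs)."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image,
        predict_proba,      # callable: image batch -> class probabilities
        top_labels=1,
        hide_color=0,
        num_samples=num_samples,
    )
    # Binary mask of the segments with the strongest positive contribution
    _, lime_mask = explanation.get_image_and_mask(
        explanation.top_labels[0],
        positive_only=True,
        num_features=5,
        hide_rest=False,
    )
    lime_mask = lime_mask.astype(bool)
    ann = annotation_mask.astype(bool)
    # Intersection-over-union between LIME's salient segments and the annotation
    union = np.logical_or(lime_mask, ann).sum()
    return np.logical_and(lime_mask, ann).sum() / max(union, 1)
```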