PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection
Wei Li, Pin-Yu Chen, et al.
CVPR 2025
Recent advances in Machine Learning (ML) have profoundly changed many detection, classification, recognition and inference tasks. Given the complexity of the battlespace, ML has the potential to revolutionise how Coalition Situation Understanding is synthesised and revised. However, many issues must be overcome before its widespread adoption. In this paper we consider two of them: interpretability and adversarial attacks. Interpretability is needed because military decision-makers must be able to justify their decisions. Adversarial attacks arise because many ML algorithms are highly sensitive to certain kinds of input perturbations. We argue that these two issues are conceptually linked, and that insights into one can inform the other. We illustrate these ideas with relevant examples from the literature and from our own experiments.
Linbo Liu, Trong Nghia Hoang, et al.
ICLR 2022
Pratik Vaishnavi, Kevin Eykholt, et al.
USENIX Security 2022
Yong Xie, Dakuo Wang, et al.
NAACL 2022