Publication
KDD 2020
Workshop paper
Identifying Audio Adversarial Examples via Anomalous Pattern Detection
Abstract
Audio processing models based on deep neural networks are susceptible to adversarial attacks even when the adversarial audio waveform is 99.9% similar to a benign sample. Given the wide range of applications of DNN-based audio recognition systems, from automotive systems to virtual assistants, detecting the presence of adversarial examples is of high practical relevance. We propose a method to detect audio adversarial samples. By employing anomalous pattern detection techniques in the activation space of these models, we show that two recent, state-of-the-art adversarial attacks on audio processing systems systematically lead to higher-than-expected activations at some subset of nodes, and that we can detect these attacks with an AUC of up to 0.98 with no degradation in performance on benign samples. Furthermore, our work strengthens the study of properties of adversarial examples that hold across multiple domains.
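The abstract describes detecting adversarial inputs by searching for subsets of nodes with higher-than-expected activations. The snippet below is a minimal illustrative sketch of that idea, not the authors' released implementation: it assumes per-node empirical p-values computed against a benign background set and a Berk-Jones-style nonparametric scan statistic maximized over significance thresholds. All function names, shapes, and the simulated data are hypothetical.

```python
# Illustrative sketch (not the paper's code): score an input as anomalous when an
# unexpectedly large subset of its activations is higher than expected under benign data.
import numpy as np

def empirical_pvalues(background, activations):
    """One-sided empirical p-values per node: fraction of benign background
    activations that are >= the observed activation (small p = unusually high)."""
    # background: (n_benign, n_nodes), activations: (n_nodes,)
    greater = (background >= activations[None, :]).sum(axis=0)
    return (greater + 1.0) / (background.shape[0] + 1.0)

def berk_jones_score(pvalues, alpha):
    """Berk-Jones-style statistic for the subset of nodes with p-value < alpha."""
    n = len(pvalues)
    n_alpha = np.sum(pvalues < alpha)          # nodes that look anomalously high
    if n_alpha <= n * alpha:                   # no excess of small p-values
        return 0.0
    p_hat = min(n_alpha / n, 1.0 - 1e-12)      # observed proportion (clamped)
    p0 = alpha                                 # expected proportion under benign data
    # KL divergence between observed and expected proportions of small p-values
    return n * (p_hat * np.log(p_hat / p0) +
                (1.0 - p_hat) * np.log((1.0 - p_hat) / (1.0 - p0)))

def anomaly_score(background, activations, alphas=(0.01, 0.05, 0.1, 0.25, 0.5)):
    """Maximize the scan statistic over candidate significance thresholds."""
    pvals = empirical_pvalues(background, activations)
    return max(berk_jones_score(pvals, a) for a in alphas)

# Hypothetical usage with simulated activations from one hidden layer
rng = np.random.default_rng(0)
benign_acts = rng.normal(size=(500, 256))      # benign background activations
test_acts = rng.normal(size=256)
test_acts[:40] += 2.0                          # simulate an elevated subset of nodes
print(anomaly_score(benign_acts, test_acts))   # large score -> flag as adversarial
```

In practice one would extract real activations from a chosen layer of the audio model and threshold the score (e.g., to reach the reported detection AUC); the specific scan statistic and layer choice here are assumptions for illustration.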