Publication
AAAI 2020
Workshop paper
Learner-Independent Targeted Data Omission Attacks
Abstract
In this paper we introduce the targeted data omission attack, a new type of attack against learning mechanisms. The attack can be seen as a specific type of poisoning attack. However, while poisoning attacks typically corrupt data in various ways, including addition, omission, and modification, our attack relies on omission only, which is much simpler to implement and analyze. A major advantage of our attack method is its generality. While poisoning attacks are usually optimized for a specific learner and prove ineffective against others, our attack is effective against a variety of learners. We demonstrate this effectiveness via a series of attack experiments against various learning mechanisms. We show that, with a relatively low attack budget, our omission attack succeeds regardless of the target learner.
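To make the idea concrete, here is a minimal sketch of a targeted omission attack, not the paper's actual algorithm: the attacker omits the k same-class training points nearest a chosen target and checks whether several off-the-shelf learners then misclassify that target. The nearest-neighbor omission heuristic, the dataset, and the choice of learners are all illustrative assumptions.

```python
# Illustrative sketch of a targeted data omission attack (assumed
# heuristic, not the paper's method): drop the k training points of
# the target's true class that lie closest to the target, then see
# whether several unrelated learners now misclassify the target.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Pick an arbitrary target point and hold it out of training.
target_x, target_y = X[0], y[0]
X_train, y_train = X[1:], y[1:]

# Attack budget: omit the k same-class training points nearest the target.
k = 40
same_class = np.where(y_train == target_y)[0]
dists = np.linalg.norm(X_train[same_class] - target_x, axis=1)
omit = same_class[np.argsort(dists)[:k]]
keep = np.setdiff1d(np.arange(len(y_train)), omit)

# The same omitted set is applied to each learner, illustrating the
# learner-independence claim: no per-learner optimization is done.
for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("tree", DecisionTreeClassifier(random_state=0)),
                  ("knn", KNeighborsClassifier())]:
    before = clf.fit(X_train, y_train).predict([target_x])[0]
    after = clf.fit(X_train[keep], y_train[keep]).predict([target_x])[0]
    print(f"{name}: before={before}, after omission={after} (true={target_y})")
```

Under this toy setup, starving the target's neighborhood of same-class examples tends to flip the prediction across all three learners, which is the flavor of result the abstract describes.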