Publication
DAC 2021
Conference paper

AID: Attesting the Integrity of Deep Neural Networks

Abstract

Due to their crucial role in many decision-making tasks, Deep Neural Networks (DNNs) are common targets for a large array of integrity breaches. In this paper, we propose AID, a novel methodology to validate the integrity of DNNs. AID generates a set of test cases called edge-points that can reveal whether a model has been compromised, requiring only access to the top-1 prediction output. Experimental results show that AID is highly effective and reliable. With at most four edge-points, AID is able to detect eight representative integrity breaches, including backdoor, poisoning, and compression attacks, with zero false positives.
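The attestation idea described above — checking a model's top-1 predictions on a small set of pre-recorded test inputs — can be sketched as follows. This is a minimal illustration, not the paper's actual edge-point generation method: the `toy_top1` model, the sample inputs, and the helper names are all hypothetical, and real edge-points would be crafted near the model's decision boundary so that any tampering flips at least one of them.

```python
# Hedged sketch of top-1 attestation. The edge-points and the toy model
# below are illustrative stand-ins, not the paper's actual construction.
from typing import Callable, List, Sequence


def attest(model_top1: Callable[[Sequence[float]], int],
           edge_points: List[Sequence[float]],
           expected_labels: List[int]) -> bool:
    """Return True iff the model's top-1 output matches the recorded label
    on every edge-point; a single mismatch flags the model as compromised."""
    return all(model_top1(x) == y
               for x, y in zip(edge_points, expected_labels))


# Toy stand-in "model": top-1 class of a linear classifier (hypothetical).
def toy_top1(x: Sequence[float]) -> int:
    return 0 if x[0] + x[1] < 1.0 else 1


edge_points = [[0.2, 0.3], [0.9, 0.4]]          # illustrative test inputs
expected = [toy_top1(p) for p in edge_points]   # labels recorded up front

print(attest(toy_top1, edge_points, expected))                    # True
print(attest(lambda x: 1 - toy_top1(x), edge_points, expected))   # False
```

The second call simulates a tampered model whose predictions have shifted; any such shift on an edge-point makes the check fail, which is why only top-1 (black-box) access is needed.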

Date

05 Dec 2021
