TY - JOUR
T1 - CAMA: Class activation mapping disruptive attack for deep neural networks
T2 - Neurocomputing
AU - Sun, Sainan
AU - Song, Bin
AU - Cai, Xiaohui
AU - Du, Xiaojiang
AU - Guizani, Mohsen
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/08/21
Y1 - 2022/08/21
N2 - The emergence of adversarial examples has drawn widespread attention to the safety of deep learning. Most recent research focuses on obtaining adversarial examples that cause wrong network predictions, and rarely observes the resulting changes in the feature embedding space from the perspective of interpretability. In addition, researchers have proposed various attack algorithms for individual tasks, but few general methods can attack multiple tasks at the same time, such as image classification, object detection, and face recognition. To resolve these issues, we propose CAMA, a new attack algorithm for deep neural networks (DNNs). CAMA perturbs each feature extraction layer through an adaptive feature measurement function, thereby disrupting the predicted class activation mapping of DNNs. Experiments show that CAMA excels at creating white-box adversarial examples on classification networks and achieves the highest attack success rate. To address the loss of attack effectiveness caused by image transformations, we propose spread-spectrum compression CAMA, which achieves a better attack success rate under various defensive measures. In addition, we successfully attack face recognition networks and object detection networks using CAMA with excellent performance, verifying that our algorithm is a general attack algorithm for different tasks.
AB - The emergence of adversarial examples has drawn widespread attention to the safety of deep learning. Most recent research focuses on obtaining adversarial examples that cause wrong network predictions, and rarely observes the resulting changes in the feature embedding space from the perspective of interpretability. In addition, researchers have proposed various attack algorithms for individual tasks, but few general methods can attack multiple tasks at the same time, such as image classification, object detection, and face recognition. To resolve these issues, we propose CAMA, a new attack algorithm for deep neural networks (DNNs). CAMA perturbs each feature extraction layer through an adaptive feature measurement function, thereby disrupting the predicted class activation mapping of DNNs. Experiments show that CAMA excels at creating white-box adversarial examples on classification networks and achieves the highest attack success rate. To address the loss of attack effectiveness caused by image transformations, we propose spread-spectrum compression CAMA, which achieves a better attack success rate under various defensive measures. In addition, we successfully attack face recognition networks and object detection networks using CAMA with excellent performance, verifying that our algorithm is a general attack algorithm for different tasks.
KW - Adversarial attack
KW - Deep neural networks
KW - Image classification
KW - Multi-task attack
KW - White-box attack
UR - http://www.scopus.com/inward/record.url?scp=85132407662&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132407662&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2022.05.065
DO - 10.1016/j.neucom.2022.05.065
M3 - Article
AN - SCOPUS:85132407662
SN - 0925-2312
VL - 500
SP - 989
EP - 1002
JO - Neurocomputing
JF - Neurocomputing
ER -