Abstract
The emergence of adversarial examples has drawn widespread attention to the safety of deep learning. Most recent research focuses on how to craft adversarial examples that cause networks to mispredict, and rarely examines, from the perspective of interpretability, how those examples change the feature embedding space. In addition, researchers have proposed various attack algorithms for single tasks, but there are few general methods that can handle multiple tasks at once, such as image classification, object detection, and face recognition. To address these issues, we propose a new attack algorithm, CAMA, for deep neural networks (DNNs). CAMA perturbs each feature extraction layer through an adaptive feature measurement function, thereby disrupting the predicted class activation mapping of DNNs. Experiments show that CAMA excels at creating white-box adversarial examples on classification networks and achieves the highest attack success rate. To counter the loss of attack effectiveness caused by image transformations, we propose spread-spectrum compression CAMA, which achieves a better attack success rate under various defensive measures. In addition, we successfully attack face recognition networks and object detection networks using CAMA, achieving excellent performance. This verifies that our algorithm is a general attack algorithm for attacking different tasks.
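The abstract describes disrupting a network's predicted class activation mapping. As a minimal sketch of that general idea — not the paper's actual formulation, which the abstract does not specify — the following assumes a CAM-style map (weighted sum of one layer's feature maps) and a hypothetical disruption objective that pushes the adversarial map away from the clean one:

```python
import numpy as np

def class_activation_map(features, class_weights):
    """CAM-style map: channel-weighted sum of feature maps, rectified.
    features: (K, H, W) feature maps from one extraction layer
    class_weights: (K,) channel importance for the predicted class
    """
    cam = np.tensordot(class_weights, features, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)                           # keep positive evidence
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1]

def cam_disruption_loss(cam_clean, cam_adv):
    """Hypothetical objective: 1 - cosine similarity between the clean
    and adversarial maps; maximizing it drives the maps apart."""
    a, b = cam_clean.ravel(), cam_adv.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return 1.0 - cos

# Toy example with random activations (stand-ins for real layer outputs).
rng = np.random.default_rng(0)
feats = rng.random((8, 7, 7))
w = rng.random(8)
cam = class_activation_map(feats, w)
print(cam.shape)                              # (7, 7)
print(cam_disruption_loss(cam, cam) < 1e-6)   # identical maps -> near-zero loss
```

In an actual white-box attack, the perturbation would be optimized by gradient ascent on such a loss through the network; the names and the cosine objective here are illustrative assumptions only.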
| Original language | English |
|---|---|
| Pages (from-to) | 989-1002 |
| Number of pages | 14 |
| Journal | Neurocomputing |
| Volume | 500 |
| DOIs | |
| State | Published - 21 Aug 2022 |
Keywords
- Adversarial attack
- Deep neural networks
- Image classification
- Multi-task attack
- White-box attack
CAMA: Class activation mapping disruptive attack for deep neural networks