TY - JOUR
T1 - DMFP
T2 - Dynamic multiscale feature perturbations for transferable adversarial attacks
AU - Cheng, Shuyan
AU - Li, Peng
AU - Han, Keji
AU - Zheng, Yumiao
AU - Xu, He
AU - Yao, Yudong
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
PY - 2025/11/25
Y1 - 2025/11/25
N2 - The transferability of adversarial samples enables adversarial attacks to be used for evaluating the robustness of deep learning models, and mitigating overfitting is of central importance for improving this transferability. Current methods use regularization to improve transferability without considering the degree of fitting of the adversarial perturbation or the prior multiscale information of the source model during optimization, and therefore fail to find a flat minimum and improve generalization. This results in mutual inhibition between attack capability and transferability. Our objective is therefore to exploit the degree of fitting of the adversarial perturbation to dynamically regularize the multiscale features, achieving a better tradeoff between attack capability and transferability. In this paper, we propose dynamic multiscale feature perturbations (DMFP). Specifically, we investigate the properties of legitimate and adversarial features through qualitative visualization and quantitative distance metrics, and devise multiscale feature perturbations (MFP): a combination of multiscale information and feature significance that perturbs the salient features of a sample. In addition, we analyze the regularization effect produced by dropout in feature-level attacks and propose dynamic features (DF), which mitigate overfitting and enhance the generalization of adversarial samples by introducing gradient information. Experimental results demonstrate that DMFP significantly enhances the transferability of existing attack methods and outperforms state-of-the-art methods, improving the success rate by 3.8% against normally trained models and 12.8% against defense models.
AB - The transferability of adversarial samples enables adversarial attacks to be used for evaluating the robustness of deep learning models, and mitigating overfitting is of central importance for improving this transferability. Current methods use regularization to improve transferability without considering the degree of fitting of the adversarial perturbation or the prior multiscale information of the source model during optimization, and therefore fail to find a flat minimum and improve generalization. This results in mutual inhibition between attack capability and transferability. Our objective is therefore to exploit the degree of fitting of the adversarial perturbation to dynamically regularize the multiscale features, achieving a better tradeoff between attack capability and transferability. In this paper, we propose dynamic multiscale feature perturbations (DMFP). Specifically, we investigate the properties of legitimate and adversarial features through qualitative visualization and quantitative distance metrics, and devise multiscale feature perturbations (MFP): a combination of multiscale information and feature significance that perturbs the salient features of a sample. In addition, we analyze the regularization effect produced by dropout in feature-level attacks and propose dynamic features (DF), which mitigate overfitting and enhance the generalization of adversarial samples by introducing gradient information. Experimental results demonstrate that DMFP significantly enhances the transferability of existing attack methods and outperforms state-of-the-art methods, improving the success rate by 3.8% against normally trained models and 12.8% against defense models.
KW - Adversarial example
KW - Black-box attack
KW - Deep learning
KW - Image classification
KW - Transferable attacks
UR - https://www.scopus.com/pages/publications/105017959909
U2 - 10.1016/j.knosys.2025.114469
DO - 10.1016/j.knosys.2025.114469
M3 - Article
AN - SCOPUS:105017959909
SN - 0950-7051
VL - 330
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 114469
ER -