TY - GEN
T1 - MulDoor
T2 - 2024 IEEE Global Communications Conference, GLOBECOM 2024
AU - Li, Xuan
AU - Wu, Longfei
AU - Guan, Zhitao
AU - Du, Xiaojiang
AU - Aitsaadi, Nadjib
AU - Guizani, Mohsen
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In recent years, with the development of wireless communication networks, federated learning (FL) has been widely deployed in distributed scenarios as a privacy-preserving machine learning paradigm. Owing to its distributed, multi-participant design, FL is vulnerable to backdoor attacks, in which an adversary manipulates the global model's output by compromising the models of one or more participants. Existing backdoor attacks are constrained to producing a single specified target label during the inference phase, limiting the adversary's flexibility when different target labels are required. In this paper, we study the multi-target attack scenario in federated learning, where the adversary aims to make the global model output various specified labels by inserting different types of triggers. To insert multiple backdoors simultaneously without degrading attack effectiveness, we propose MulDoor, a novel multi-target backdoor attack scheme. MulDoor incorporates supervised contrastive learning to learn the discrepancies among different types of triggers and to mitigate interference between them. Experimental results demonstrate that MulDoor achieves better attack effectiveness than existing backdoor attacks in the multi-target setting.
AB - In recent years, with the development of wireless communication networks, federated learning (FL) has been widely deployed in distributed scenarios as a privacy-preserving machine learning paradigm. Owing to its distributed, multi-participant design, FL is vulnerable to backdoor attacks, in which an adversary manipulates the global model's output by compromising the models of one or more participants. Existing backdoor attacks are constrained to producing a single specified target label during the inference phase, limiting the adversary's flexibility when different target labels are required. In this paper, we study the multi-target attack scenario in federated learning, where the adversary aims to make the global model output various specified labels by inserting different types of triggers. To insert multiple backdoors simultaneously without degrading attack effectiveness, we propose MulDoor, a novel multi-target backdoor attack scheme. MulDoor incorporates supervised contrastive learning to learn the discrepancies among different types of triggers and to mitigate interference between them. Experimental results demonstrate that MulDoor achieves better attack effectiveness than existing backdoor attacks in the multi-target setting.
KW - Backdoor Attack
KW - Contrastive Learning
KW - Federated Learning
KW - Multi-target
UR - https://www.scopus.com/pages/publications/105000820964
UR - https://www.scopus.com/inward/citedby.url?scp=105000820964&partnerID=8YFLogxK
U2 - 10.1109/GLOBECOM52923.2024.10901170
DO - 10.1109/GLOBECOM52923.2024.10901170
M3 - Conference contribution
AN - SCOPUS:105000820964
T3 - Proceedings - IEEE Global Communications Conference, GLOBECOM
SP - 1749
EP - 1754
BT - GLOBECOM 2024 - 2024 IEEE Global Communications Conference
Y2 - 8 December 2024 through 12 December 2024
ER -