TY - GEN
T1 - FedRoLA
T2 - 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2024
AU - Yan, Gang
AU - Wang, Hao
AU - Yuan, Xu
AU - Li, Jian
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/8/24
Y1 - 2024/8/24
N2 - Federated Learning (FL) is increasingly vulnerable to model poisoning attacks, where malicious clients degrade the global model's accuracy with manipulated updates. Unfortunately, most existing defenses struggle to handle scenarios in which multiple adversaries exist, and often rely on historical or validation data, rendering them ill-suited for the dynamic and diverse nature of real-world FL environments. Exacerbating these limitations is the fact that most existing defenses also fail to account for the distinctive contributions of Deep Neural Network (DNN) layers in detecting malicious activity, leading to the unnecessary rejection of benign updates. To bridge these gaps, we introduce FedRoLa, a cutting-edge similarity-based defense method optimized for FL. Specifically, FedRoLa leverages global model parameters and client updates independently, moving away from reliance on historical or validation data. It features a unique layer-based aggregation with dynamic layer selection, enhancing threat detection, and includes a dynamic probability method for balanced security and model performance. Through comprehensive evaluations using different DNN models and real-world datasets, FedRoLa demonstrates substantial improvements over status quo approaches, enhancing global model accuracy by up to 4%, reducing false positives to 6.4%, and securing a 92.8% true positive rate.
AB - Federated Learning (FL) is increasingly vulnerable to model poisoning attacks, where malicious clients degrade the global model's accuracy with manipulated updates. Unfortunately, most existing defenses struggle to handle scenarios in which multiple adversaries exist, and often rely on historical or validation data, rendering them ill-suited for the dynamic and diverse nature of real-world FL environments. Exacerbating these limitations is the fact that most existing defenses also fail to account for the distinctive contributions of Deep Neural Network (DNN) layers in detecting malicious activity, leading to the unnecessary rejection of benign updates. To bridge these gaps, we introduce FedRoLa, a cutting-edge similarity-based defense method optimized for FL. Specifically, FedRoLa leverages global model parameters and client updates independently, moving away from reliance on historical or validation data. It features a unique layer-based aggregation with dynamic layer selection, enhancing threat detection, and includes a dynamic probability method for balanced security and model performance. Through comprehensive evaluations using different DNN models and real-world datasets, FedRoLa demonstrates substantial improvements over status quo approaches, enhancing global model accuracy by up to 4%, reducing false positives to 6.4%, and securing a 92.8% true positive rate.
KW - cosine similarity
KW - defense
KW - layer-based algorithm
KW - model poisoning attacks
KW - robust federated learning
UR - http://www.scopus.com/inward/record.url?scp=85202281587&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85202281587&partnerID=8YFLogxK
U2 - 10.1145/3637528.3671906
DO - 10.1145/3637528.3671906
M3 - Conference contribution
AN - SCOPUS:85202281587
T3 - Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
SP - 3667
EP - 3678
BT - KDD 2024 - Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Y2 - 25 August 2024 through 29 August 2024
ER -