FedRoLA: Robust Federated Learning Against Model Poisoning via Layer-based Aggregation

Gang Yan, Hao Wang, Xu Yuan, Jian Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated Learning (FL) is increasingly vulnerable to model poisoning attacks, where malicious clients degrade the global model's accuracy with manipulated updates. Unfortunately, most existing defenses struggle to handle scenarios where multiple adversaries exist, and often rely on historical or validation data, rendering them ill-suited for the dynamic and diverse nature of real-world FL environments. Exacerbating these limitations, most existing defenses also fail to account for the distinctive contributions of individual Deep Neural Network (DNN) layers in detecting malicious activity, leading to the unnecessary rejection of benign updates. To bridge these gaps, we introduce FedRoLA, a cutting-edge similarity-based defense method optimized for FL. Specifically, FedRoLA leverages global model parameters and client updates independently, moving away from reliance on historical or validation data. It features a unique layer-based aggregation with dynamic layer selection to enhance threat detection, and includes a dynamic probability method that balances security and model performance. Through comprehensive evaluations using different DNN models and real-world datasets, FedRoLA demonstrates substantial improvements over status quo approaches in global model accuracy, achieving up to a 4% gain in accuracy, reducing false positives to 6.4%, and securing a 92.8% true positive rate.
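The abstract describes a layer-based, cosine-similarity defense. As a rough illustration only (this is not the paper's code; the function names, the reference-direction choice, and the acceptance threshold below are all assumptions), a layer-wise similarity filter for client updates might look like the following Python sketch:

    # Hypothetical sketch, not the authors' implementation: score each client
    # update by its layer-wise cosine similarity to a reference direction
    # (assumed here to be the previous round's global update), then aggregate
    # only the updates that pass a similarity threshold.
    import numpy as np

    def cosine(a, b, eps=1e-12):
        """Cosine similarity between two flattened parameter vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

    def layerwise_scores(client_updates, reference_update, layers):
        """Mean per-layer similarity of each client update to the reference.

        client_updates: list of dicts {layer_name: np.ndarray of deltas}
        reference_update: dict {layer_name: np.ndarray}
        layers: layer names selected for this round (dynamic layer selection)
        """
        scores = []
        for update in client_updates:
            per_layer = [cosine(update[l].ravel(), reference_update[l].ravel())
                         for l in layers]
            scores.append(np.mean(per_layer))
        return np.array(scores)

    def aggregate(client_updates, scores, threshold=0.0):
        """Average only the updates whose mean layer similarity passes the threshold."""
        kept = [u for u, s in zip(client_updates, scores) if s > threshold]
        if not kept:
            return None
        return {l: np.mean([u[l] for u in kept], axis=0) for l in kept[0]}

In the actual method, which layers are scored and how the acceptance probability is set are chosen dynamically per round; the fixed threshold above is only a placeholder for that mechanism.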

Original language: English
Title of host publication: KDD 2024 - Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Pages: 3667-3678
Number of pages: 12
ISBN (Electronic): 9798400704901
DOIs
State: Published - 24 Aug 2024
Event: 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2024 - Barcelona, Spain
Duration: 25 Aug 2024 - 29 Aug 2024

Publication series

Name: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
ISSN (Print): 2154-817X

Conference

Conference: 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2024
Country/Territory: Spain
City: Barcelona
Period: 25/08/24 - 29/08/24

Keywords

  • cosine similarity
  • defense
  • layer-based algorithm
  • model poisoning attacks
  • robust federated learning
