Enhancing Model Poisoning Attacks to Byzantine-Robust Federated Learning via Critical Learning Periods

Gang Yan, Hao Wang, Xu Yuan, Jian Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Most existing model poisoning attacks on federated learning (FL) control a set of malicious clients and share a fixed number of malicious gradients with the server in each FL training round, to achieve a desired tradeoff between attack impact and attack budget. In this paper, we show that such a tradeoff is not fundamental: an adaptive attack budget not only improves the impact of an attack A but also makes it more resilient to defenses. However, adaptively determining the number of malicious clients that share malicious gradients with the central server in each FL training round has been little investigated. This is because most existing model poisoning attacks focus mainly on the FL optimization itself to maximize the damage to the global model, and largely ignore the impact of the underlying deep neural networks used to train FL models. Inspired by recent findings on critical learning periods (CLP), during which small gradient errors have an irrecoverable impact on model accuracy, we advocate CLP-augmented model poisoning attacks, A-CLP, in this paper. A-CLP merely augments an existing model poisoning attack A with an adaptive attack budget scheme. Specifically, A-CLP inspects the changes in federated gradient norms to identify CLP and adaptively adjusts the number of malicious clients that share their malicious gradients with the server in each round, improving the attack impact over A by up to 6.85× with a smaller attack budget. This in turn improves the resilience of A by up to 2×. Since A-CLP is orthogonal to the attack A, it still inherits A's need to craft malicious gradients by solving a difficult optimization problem. To tackle this challenge, and based on our understanding of A-CLP, we further relax the inner attack subroutine A in A-CLP and design GraSP, a lightweight CLP-augmented, similarity-based attack. We show that GraSP is not only more flexible but also achieves an improved attack impact compared to the strongest existing model poisoning attacks.
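
Note: the abstract describes the CLP detection and budget rule only at a high level. The following is a minimal Python sketch of that idea, assuming a gradient-norm-change heuristic; the function names, window size, and threshold below are illustrative assumptions, not the paper's actual criterion.

    def clp_active(grad_norm_history, window=5, threshold=0.05):
        """Heuristic CLP detector: flag a round as critical while the
        aggregated federated gradient norm is still changing rapidly
        (hypothetical rule; the paper's exact criterion may differ)."""
        if len(grad_norm_history) < window + 1:
            return True  # treat the earliest rounds as critical
        recent = grad_norm_history[-(window + 1):]
        # Mean relative round-to-round change of the gradient norm.
        rel_changes = [abs(recent[i + 1] - recent[i]) / (abs(recent[i]) + 1e-12)
                       for i in range(window)]
        return sum(rel_changes) / window > threshold

    def attack_budget(grad_norm_history, n_controlled, min_active=1):
        """Adaptive budget: commit all controlled clients while the CLP is
        active, and only a minimal number outside it, to conserve budget
        and evade detection."""
        return n_controlled if clp_active(grad_norm_history) else min_active

In each round, the attacker would append the latest aggregated gradient norm to grad_norm_history and have attack_budget(...) of its controlled clients submit malicious gradients.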

Original language: English
Title of host publication: Proceedings of 27th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2024
Pages: 496-512
Number of pages: 17
ISBN (Electronic): 9798400709593
DOIs
State: Published - 30 Sep 2024
Event: 27th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2024 - Padua, Italy
Duration: 30 Sep 2024 – 2 Oct 2024

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 27th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2024
Country/Territory: Italy
City: Padua
Period: 30/09/24 – 2/10/24

Keywords

  • Critical Learning Periods
  • Federated Learning
  • Model Poisoning Attacks
