TY - GEN
T1 - Enhancing Model Poisoning Attacks to Byzantine-Robust Federated Learning via Critical Learning Periods
AU - Yan, Gang
AU - Wang, Hao
AU - Yuan, Xu
AU - Li, Jian
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/9/30
AB - Most existing model poisoning attacks in federated learning (FL) control a set of malicious clients that share a fixed number of malicious gradients with the server in each FL training round, trading off attack impact against attack budget. In this paper, we show that this tradeoff is not fundamental: an adaptive attack budget not only improves the impact of an attack A but also makes it more resilient to defenses. However, adaptively determining the number of malicious clients that share malicious gradients with the central server in each FL training round has received little attention, because most existing model poisoning attacks focus on the FL optimization itself to maximize damage to the global model and largely ignore the training dynamics of the underlying deep neural networks. Inspired by recent findings on critical learning periods (CLP), during which small gradient errors cause irrecoverable loss of model accuracy, we advocate CLP-augmented model poisoning attacks, A-CLP. A-CLP augments an existing model poisoning attack A with an adaptive attack budget scheme: it inspects changes in federated gradient norms to identify CLPs and adaptively adjusts the number of malicious clients that share malicious gradients with the server in each round. This improves the attack impact over A by up to 6.85× with a smaller attack budget, and in turn improves the resilience of A by up to 2×. Because the CLP-based budget scheme is orthogonal to the underlying attack A, A-CLP still inherits A's need to craft malicious gradients by solving a difficult optimization problem. To tackle this challenge, and based on our understanding of A-CLP, we relax the inner attack subroutine A and design GraSP, a lightweight CLP-augmented similarity-based attack. We show that GraSP is not only more flexible but also achieves greater attack impact than the strongest existing model poisoning attacks.
KW - Critical Learning Periods
KW - Federated Learning
KW - Model Poisoning Attacks
UR - http://www.scopus.com/inward/record.url?scp=85206579864&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85206579864&partnerID=8YFLogxK
DO - 10.1145/3678890.3678915
M3 - Conference contribution
AN - SCOPUS:85206579864
T3 - ACM International Conference Proceeding Series
SP - 496
EP - 512
BT - Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2024
T2 - 27th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2024
Y2 - 30 September 2024 through 2 October 2024
ER -
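
A minimal Python sketch of the adaptive attack-budget idea the abstract describes: monitor round-to-round changes in the aggregated gradient norm to flag critical learning periods (CLPs) and commit more malicious clients while a CLP is active. Every name, threshold, and the budget rule below is an illustrative assumption inferred from the abstract, not the paper's actual algorithm.

from typing import List

def grad_norm(avg_gradient: List[float]) -> float:
    # L2 norm of this round's aggregated (averaged) client gradient.
    return sum(x * x for x in avg_gradient) ** 0.5

def in_clp(norm_history: List[float], threshold: float = 0.1) -> bool:
    # Flag a CLP when the relative change in the federated gradient norm
    # between consecutive rounds is large (assumed CLP indicator).
    if len(norm_history) < 2:
        return True  # assumption: treat the earliest rounds as critical
    prev, curr = norm_history[-2], norm_history[-1]
    return abs(curr - prev) / max(prev, 1e-12) > threshold

def attack_budget(clp_active: bool, n_controlled: int, base_frac: float = 0.2) -> int:
    # Adaptive budget: use every controlled client during a CLP and only
    # a small fixed fraction otherwise (hypothetical rule).
    return n_controlled if clp_active else max(1, int(base_frac * n_controlled))

Under these assumptions, an attacker running some base attack A would call attack_budget once per round and have that many controlled clients submit A's crafted gradients.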