TY - GEN
T1 - Eluding Secure Aggregation in Federated Learning via Model Inconsistency
AU - Pasquini, Dario
AU - Francati, Danilo
AU - Ateniese, Giuseppe
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/11/7
Y1 - 2022/11/7
N2 - Secure aggregation is a cryptographic protocol that securely computes the aggregation of its inputs. It is pivotal in keeping model updates private in federated learning. Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks. In this work, we show that a malicious server can easily elude secure aggregation as if the latter were not in place. We devise two different attacks capable of inferring information on individual private training datasets, independently of the number of users participating in the secure aggregation. This makes them concrete threats in large-scale, real-world federated learning applications. The attacks are generic and equally effective regardless of the secure aggregation protocol used. They exploit a vulnerability of the federated learning protocol caused by incorrect usage of secure aggregation and lack of parameter validation. Our work demonstrates that current implementations of federated learning with secure aggregation offer only a "false sense of security."
AB - Secure aggregation is a cryptographic protocol that securely computes the aggregation of its inputs. It is pivotal in keeping model updates private in federated learning. Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks. In this work, we show that a malicious server can easily elude secure aggregation as if the latter were not in place. We devise two different attacks capable of inferring information on individual private training datasets, independently of the number of users participating in the secure aggregation. This makes them concrete threats in large-scale, real-world federated learning applications. The attacks are generic and equally effective regardless of the secure aggregation protocol used. They exploit a vulnerability of the federated learning protocol caused by incorrect usage of secure aggregation and lack of parameter validation. Our work demonstrates that current implementations of federated learning with secure aggregation offer only a "false sense of security."
KW - federated learning
KW - model inconsistency
KW - secure aggregation
UR - http://www.scopus.com/inward/record.url?scp=85143051304&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85143051304&partnerID=8YFLogxK
U2 - 10.1145/3548606.3560557
DO - 10.1145/3548606.3560557
M3 - Conference contribution
AN - SCOPUS:85143051304
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 2429
EP - 2443
BT - CCS 2022 - Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
T2 - 29th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022
Y2 - 7 November 2022 through 11 November 2022
ER -