TY - GEN
T1 - Group Property Inference Attacks Against Graph Neural Networks
AU - Wang, Xiuling
AU - Wang, Wendy Hui
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/11/7
Y1 - 2022/11/7
AB - Recent research has shown that machine learning (ML) models are vulnerable to privacy attacks that leak information about their training data. In this work, we consider Graph Neural Networks (GNNs) as the target model and focus on a particular type of privacy attack, the property inference attack (PIA), which infers sensitive properties of the training graph through access to the GNN. While existing work has investigated PIAs against graph-level properties (e.g., node degree and graph density), we are the first to perform a systematic study of group property inference attacks (GPIAs), which infer the distribution of particular groups of nodes and links (e.g., that there are more links between male nodes than between female nodes) in the training graph. First, we consider a taxonomy of threat models with various types of adversary knowledge, and design six different attacks for these settings. Second, we demonstrate the effectiveness of these attacks through extensive experiments on three representative GNN models and three real-world graphs. Third, we analyze the underlying factors that contribute to GPIA's success, and show that a GNN model trained on graphs with the target property differs in its model parameters and/or model outputs from one trained on graphs without it, which enables the adversary to infer the presence of the property. Further, we design a set of defense mechanisms against GPIAs, and demonstrate empirically that these mechanisms reduce attack accuracy effectively with a small loss in GNN model accuracy.
KW - graph neural networks
KW - privacy attacks and defense
KW - property inference attack
KW - trustworthy machine learning
UR - http://www.scopus.com/inward/record.url?scp=85143081206&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85143081206&partnerID=8YFLogxK
U2 - 10.1145/3548606.3560662
DO - 10.1145/3548606.3560662
M3 - Conference contribution
AN - SCOPUS:85143081206
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 2871
EP - 2884
BT - CCS 2022 - Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
T2 - 29th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022
Y2 - 7 November 2022 through 11 November 2022
ER -