TY - GEN
T1 - Link Membership Inference Attacks against Unsupervised Graph Representation Learning
AU - Wang, Xiuling
AU - Wang, Wendy Hui
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/12/4
Y1 - 2023/12/4
AB - Significant advancements have been made in recent years in unsupervised graph representation learning (UGRL). UGRL represents large graphs as low-dimensional vectors, commonly referred to as embeddings. These embeddings can be publicly released or shared with third parties for downstream analytics. However, adversaries can use various privacy inference attacks to deduce sensitive structural information about the target graph from its embeddings. This paper investigates the privacy vulnerabilities of UGRL models through the lens of the link membership inference attack (LMIA), in which an adversary aims to infer whether any two nodes are connected in the target graph from the node embeddings generated by a UGRL model. To this end, we propose two LMIA attacks that exploit the properties of node embeddings and different forms of adversary knowledge. Experiments on four state-of-the-art UGRL models and five real-world graph datasets demonstrate the effectiveness of both attacks against these UGRL models. Furthermore, we conduct a comprehensive analysis of how the degree of structural information preserved in the embeddings affects LMIA performance. To harden UGRL models against LMIA, we design a family of defense mechanisms that perturb the least significant dimensions of the embeddings. Our experimental results show that these defenses achieve a favorable balance between defense effectiveness and embedding quality.
KW - graph learning
KW - machine learning privacy
KW - membership inference attack
KW - trustworthy machine learning
KW - unsupervised graph representation learning
UR - http://www.scopus.com/inward/record.url?scp=85180157560&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85180157560&partnerID=8YFLogxK
U2 - 10.1145/3627106.3627115
DO - 10.1145/3627106.3627115
M3 - Conference contribution
AN - SCOPUS:85180157560
T3 - ACM International Conference Proceeding Series
SP - 477
EP - 491
BT - Proceedings - 39th Annual Computer Security Applications Conference, ACSAC 2023
T2 - 39th Annual Computer Security Applications Conference, ACSAC 2023
Y2 - 4 December 2023 through 8 December 2023
ER -
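
Note: The abstract describes the attack and defense only at a high level; the paper itself defines the actual constructions. The Python snippet below is a minimal, hypothetical sketch of the two ideas mentioned there: inferring a link between two nodes by thresholding the similarity of their embeddings, and defending by perturbing the least significant embedding dimensions. The function names, the 0.8 threshold, the variance-based notion of "least significant", and the noise scale are all illustrative assumptions, not taken from the paper.

# Illustrative sketch only -- NOT the paper's exact attacks or defense.
# Assumes node embeddings are the rows of a NumPy array; the cosine-similarity
# scoring rule, the threshold, and the variance-based selection of
# "least significant" dimensions are assumptions made for this example.
import numpy as np


def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def infer_link(embeddings: np.ndarray, i: int, j: int, threshold: float = 0.8) -> bool:
    """Similarity-thresholding baseline for link membership inference:
    predict that nodes i and j are connected if their embeddings are close."""
    return cosine_similarity(embeddings[i], embeddings[j]) >= threshold


def perturb_least_significant_dims(embeddings: np.ndarray,
                                   k: int = 16,
                                   noise_scale: float = 0.1,
                                   seed: int = 0) -> np.ndarray:
    """Defense sketch: add Gaussian noise only to the k embedding dimensions
    with the lowest variance across nodes (a stand-in for "least significant";
    the paper defines its own family of perturbation mechanisms)."""
    rng = np.random.default_rng(seed)
    variances = embeddings.var(axis=0)
    least_significant = np.argsort(variances)[:k]  # indices of low-variance dims
    perturbed = embeddings.copy()
    noise = rng.normal(0.0, noise_scale, size=(embeddings.shape[0], k))
    perturbed[:, least_significant] += noise
    return perturbed


if __name__ == "__main__":
    # Toy example with random "embeddings" for 100 nodes in 64 dimensions.
    rng = np.random.default_rng(42)
    Z = rng.normal(size=(100, 64))
    print("Predicted link between nodes 0 and 1:", infer_link(Z, 0, 1))
    Z_def = perturb_least_significant_dims(Z, k=16, noise_scale=0.1)
    print("Prediction after defense:", infer_link(Z_def, 0, 1))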