Group Property Inference Attacks Against Graph Neural Networks

Xiuling Wang, Wendy Hui Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

16 Scopus citations

Abstract

Recent research has shown that machine learning (ML) models are vulnerable to privacy attacks that leak information about the training data. In this work, we consider Graph Neural Networks (GNNs) as the target model, and focus on a particular type of privacy attack named property inference attack (PIA), which infers sensitive properties of the training graph through access to the GNNs. While existing work has investigated PIAs against graph-level properties (e.g., node degree and graph density), we are the first to perform a systematic study of group property inference attacks (GPIAs), which infer the distribution of particular groups of nodes and links (e.g., there are more links between male nodes than between female nodes) in the training graph. First, we consider a taxonomy of threat models with various types of adversary knowledge, and design six different attacks for these settings. Second, we demonstrate the effectiveness of these attacks through extensive experiments on three representative GNN models and three real-world graphs. Third, we analyze the underlying factors that contribute to GPIA's success, and show that GNN models trained on graphs with and without the target property exhibit dissimilarity in their model parameters and/or model outputs, which enables the adversary to infer the existence of the property. Further, we design a set of defense mechanisms against GPIA, and demonstrate empirically that these mechanisms reduce attack accuracy effectively with only a small loss in GNN model accuracy.
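The core intuition stated in the abstract — that models trained on graphs with versus without the target property exhibit dissimilar parameters, which an adversary can learn to separate — can be illustrated with a toy sketch. The code below is a hypothetical simplification, not the paper's method: shadow "models" are simulated as parameter vectors whose distribution shifts slightly when the property is present (an assumption made for illustration), and the adversary fits a simple nearest-centroid meta-classifier to tell the two cases apart.

```python
# Hypothetical sketch in the spirit of a property inference attack; names,
# dimensions, and the parameter-shift assumption are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
DIM = 32  # assumed size of the flattened model-parameter vector

def shadow_params(has_property: bool, n: int) -> np.ndarray:
    # Assumption: training on a graph with the property biases the
    # parameter distribution by a small mean shift.
    shift = 0.3 if has_property else 0.0
    return rng.normal(loc=shift, scale=1.0, size=(n, DIM))

# The adversary trains shadow models on graphs with/without the property,
# then fits a nearest-centroid meta-classifier on their parameters.
pos = shadow_params(True, 200)
neg = shadow_params(False, 200)
centroid_pos, centroid_neg = pos.mean(axis=0), neg.mean(axis=0)

def infer_property(params: np.ndarray) -> bool:
    # Predict "property present" if the target model's parameters lie
    # closer to the positive centroid than to the negative one.
    return (np.linalg.norm(params - centroid_pos)
            < np.linalg.norm(params - centroid_neg))

# Evaluate on fresh simulated target models.
test_pos = shadow_params(True, 100)
test_neg = shadow_params(False, 100)
correct = (sum(infer_property(p) for p in test_pos)
           + sum(not infer_property(p) for p in test_neg))
accuracy = correct / 200
```

Even this crude separator recovers the property well above chance on the simulated data, mirroring the paper's observation that parameter-space dissimilarity alone can leak group-level information.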

Original language: English
Title of host publication: CCS 2022 - Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
Pages: 2871-2884
Number of pages: 14
ISBN (Electronic): 9781450394505
DOIs
State: Published - 7 Nov 2022
Event: 28th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022 - Los Angeles, United States
Duration: 7 Nov 2022 – 11 Nov 2022

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 28th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022
Country/Territory: United States
City: Los Angeles
Period: 7/11/22 – 11/11/22

Keywords

  • graph neural networks
  • privacy attacks and defense
  • property inference attack
  • trustworthy machine learning

