Verification of Incomplete Graph Unlearning through Adversarial Perturbations

Research output: Conference contribution in book/report/conference proceeding (peer-reviewed)

1 Scopus citation

Abstract

Graph unlearning (GU) enables data owners to remove specific data from a trained Graph Neural Network (GNN). However, a dishonest model provider may cheat during the unlearning process. This paper focuses on a specific type of cheating behavior in GU, namely incomplete edge unlearning, where the model provider removes only a subset of the requested edges from the trained GNN. We introduce PANDA, the first probabilistic GU verification framework, to detect such cheating behavior. PANDA identifies a set of nodes in the graph, called token nodes, and injects a set of fake edges, referred to as challenge edges, into the model to manipulate the predictions of the token nodes. A key property of the challenge edges is that removing only a subset of them from the poisoned model does not change the token nodes' predictions. By requesting the removal of the challenge edges and observing whether the token nodes' predictions change, the verifier can assess the likelihood that the model provider has engaged in incomplete unlearning. To develop PANDA, we design a novel algorithm that identifies the token nodes and generates their associated challenge edges. We rigorously quantify the verification probabilities achieved by PANDA. Our extensive empirical studies demonstrate the efficiency and effectiveness of PANDA in detecting incomplete edge unlearning across a variety of GNN models and unlearning algorithms. Furthermore, we show that PANDA exhibits strong robustness against state-of-the-art detection methods for graph adversarial perturbations. Our code and datasets are available at https://github.com/kunwu522/unlearning-verification-gnn.
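The verification idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name and the prediction-dictionary representation are hypothetical, and the check is simplified to a deterministic decision, whereas PANDA quantifies a verification probability. The key property it encodes is the one stated above: honest, complete removal of the challenge edges flips the manipulated token-node predictions, while incomplete removal leaves them unchanged.

```python
def verify_unlearning(pred_poisoned, pred_after_unlearning):
    """Flag likely incomplete edge unlearning.

    Both arguments are hypothetical dicts mapping a token-node id to the
    model's predicted label for that node: `pred_poisoned` from the model
    after challenge-edge injection, `pred_after_unlearning` from the model
    returned after the unlearning request.

    Returns True if the provider appears to have unlearned completely
    (all manipulated predictions reverted), False otherwise.
    """
    # Count token nodes whose prediction changed after unlearning.
    flipped = sum(
        1 for node in pred_poisoned
        if pred_after_unlearning[node] != pred_poisoned[node]
    )
    # Since removing only a subset of challenge edges leaves the token
    # nodes' predictions unchanged, unchanged predictions indicate that
    # not all requested edges were removed.
    return flipped == len(pred_poisoned)
```

In the actual framework, this binary comparison is replaced by a probabilistic assessment over the token nodes, yielding a quantified likelihood of cheating rather than a single yes/no verdict.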

Original language: English
Title of host publication: KDD 2025 - Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Pages: 3204-3215
Number of pages: 12
ISBN (Electronic): 9798400714542
State: Published - 3 Aug 2025
Event: 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2025 - Toronto, Canada
Duration: 3 Aug 2025 - 7 Aug 2025

Publication series

Name: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Volume: 2
ISSN (Print): 2154-817X

Conference

Conference: 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2025
Country/Territory: Canada
City: Toronto
Period: 3/08/25 - 7/08/25

Keywords

  • integrity verification of graph unlearning
  • machine unlearning
