TY - GEN
T1 - Verification of Incomplete Graph Unlearning through Adversarial Perturbations
AU - Wu, Kun
AU - Wang, Wendy Hui
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/8/3
Y1 - 2025/8/3
N2 - Graph unlearning (GU) enables data owners to remove specific data from a trained Graph Neural Network (GNN). However, a dishonest model provider may cheat on the unlearning process. This paper focuses on a specific type of cheating behavior in GU, namely incomplete edge unlearning, where the model provider removes only a subset of the requested edges from the trained GNN. We introduce PANDA, the first probabilistic GU verification framework, to detect such cheating behaviors. PANDA identifies a set of nodes in the graph, called token nodes, and injects a set of fake edges, referred to as challenge edges, into the model to manipulate the predictions of the token nodes. A key property of the challenge edges is that removing only a subset of them from the poisoned model does not change the token nodes' predictions. Then, by requesting the removal of the challenge edges and observing the change in the token nodes' predictions, the verifier can assess the likelihood that the model provider has engaged in incomplete unlearning. To develop PANDA, we design a novel algorithm to identify the token nodes and generate their associated challenge edges. We rigorously quantify the verification probabilities achieved by PANDA. Our extensive empirical studies demonstrate the efficiency and effectiveness of PANDA in detecting incomplete edge unlearning across a variety of GNN models and unlearning algorithms. Furthermore, we show that PANDA exhibits strong robustness against state-of-the-art detection methods for graph adversarial perturbations. Our code and datasets are available at https://github.com/kunwu522/unlearning-verification-gnn.
AB - Graph unlearning (GU) enables data owners to remove specific data from a trained Graph Neural Network (GNN). However, a dishonest model provider may cheat on the unlearning process. This paper focuses on a specific type of cheating behavior in GU, namely incomplete edge unlearning, where the model provider removes only a subset of the requested edges from the trained GNN. We introduce PANDA, the first probabilistic GU verification framework, to detect such cheating behaviors. PANDA identifies a set of nodes in the graph, called token nodes, and injects a set of fake edges, referred to as challenge edges, into the model to manipulate the predictions of the token nodes. A key property of the challenge edges is that removing only a subset of them from the poisoned model does not change the token nodes' predictions. Then, by requesting the removal of the challenge edges and observing the change in the token nodes' predictions, the verifier can assess the likelihood that the model provider has engaged in incomplete unlearning. To develop PANDA, we design a novel algorithm to identify the token nodes and generate their associated challenge edges. We rigorously quantify the verification probabilities achieved by PANDA. Our extensive empirical studies demonstrate the efficiency and effectiveness of PANDA in detecting incomplete edge unlearning across a variety of GNN models and unlearning algorithms. Furthermore, we show that PANDA exhibits strong robustness against state-of-the-art detection methods for graph adversarial perturbations. Our code and datasets are available at https://github.com/kunwu522/unlearning-verification-gnn.
KW - integrity verification of graph unlearning
KW - machine unlearning
UR - https://www.scopus.com/pages/publications/105014320617
U2 - 10.1145/3711896.3737179
DO - 10.1145/3711896.3737179
M3 - Conference contribution
AN - SCOPUS:105014320617
T3 - Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
SP - 3204
EP - 3215
BT - KDD 2025 - Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining
T2 - 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2025
Y2 - 3 August 2025 through 7 August 2025
ER -