GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks

Chenhui Deng, Xiuyu Li, Zhuo Feng, Zhiru Zhang

Research output: Contribution to journal › Conference article › peer-review


Abstract

Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data. However, recent studies show that GNNs are vulnerable to graph adversarial attacks. Although several defense methods improve GNN robustness by eliminating adversarial components, they may also impair the underlying clean graph structure that contributes to GNN training. In addition, few of these defense models scale to large graphs due to their high computational complexity and memory usage. In this paper, we propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models. GARNET first leverages weighted spectral embedding to construct a base graph, which is not only resistant to adversarial attacks but also contains the critical (clean) graph structure needed for GNN training. Next, GARNET further refines the base graph by pruning additional uncritical edges based on a probabilistic graphical model. GARNET has been evaluated on various datasets, including a large graph with millions of nodes. Our extensive experimental results show that GARNET achieves adversarial accuracy improvements and runtime speedups over state-of-the-art GNN (defense) models of up to 10.23% and 14.7×, respectively.
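The abstract describes a two-stage pipeline: build a reduced-rank base graph from a weighted spectral embedding, then prune uncritical edges. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' implementation: the inverse-eigenvalue weighting, the kNN base-graph construction, the distance-threshold pruning rule (standing in for the paper's probabilistic graphical model step), and the parameters `r`, `k`, and `tau` are all assumptions made for illustration.

```python
# Illustrative sketch of the two stages summarized in the GARNET abstract.
# NOTE: this is not the paper's algorithm; the pruning criterion here is a
# simple placeholder for the probabilistic-graphical-model refinement step.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph


def spectral_embedding(adj: sp.spmatrix, r: int = 32) -> np.ndarray:
    """Weighted spectral embedding from the r smallest Laplacian eigenpairs."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = sp.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt  # normalized Laplacian
    vals, vecs = eigsh(lap, k=r, which="SM")                    # r smallest eigenpairs
    weights = 1.0 / np.maximum(vals, 1e-12)   # emphasize smooth (low-frequency) modes
    return vecs * np.sqrt(weights)


def build_base_graph(emb: np.ndarray, k: int = 10) -> sp.csr_matrix:
    """kNN graph in the reduced-rank embedding space serves as the base graph."""
    knn = kneighbors_graph(emb, n_neighbors=k, mode="distance", include_self=False)
    return knn.maximum(knn.T)  # symmetrize


def prune_edges(base: sp.csr_matrix, tau: float) -> sp.csr_matrix:
    """Placeholder pruning: drop edges whose embedding distance exceeds tau."""
    coo = base.tocoo()
    keep = coo.data <= tau
    pruned = sp.coo_matrix((coo.data[keep], (coo.row[keep], coo.col[keep])),
                           shape=coo.shape)
    return pruned.tocsr()


if __name__ == "__main__":
    # Tiny random graph just to exercise the pipeline end to end.
    n = 200
    adj = sp.random(n, n, density=0.05, random_state=0)
    adj = ((adj + adj.T) > 0).astype(float)
    emb = spectral_embedding(adj, r=16)
    base = build_base_graph(emb, k=8)
    refined = prune_edges(base, tau=float(np.median(base.data)))
    print(f"base edges: {base.nnz // 2}, refined edges: {refined.nnz // 2}")
```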

Original language: English
Journal: Proceedings of Machine Learning Research
Volume: 198
State: Published - 2022
Event: 1st Learning on Graphs Conference, LOG 2022 - Virtual, Online
Duration: 9 Dec 2022 - 12 Dec 2022
