TY - JOUR
T1 - Graph Self-supervised Learning via Proximity Divergence Minimization
AU - Zhang, Tianyi
AU - Dai, Zhenwei
AU - Xu, Zhaozhuo
AU - Shrivastava, Anshumali
N1 - Publisher Copyright:
© UAI 2023. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Self-supervised learning (SSL) for graphs is an essential problem since graph data are ubiquitous and labeling can be costly. We argue that existing SSL approaches for graphs have two limitations. First, they rely on corruption techniques such as node attribute perturbation and edge dropping to generate graph views for contrastive learning. These unnatural corruption techniques require extensive tuning efforts and provide marginal improvements. Second, the current approaches require the computation of multiple graph views, which is inefficient in both memory and computation. These shortcomings of graph SSL call for a corruption-free single-view learning approach, but the strawman approach of using neighboring nodes as positive examples suffers from two problems: it ignores the strength of connections between nodes implied by the graph structure on a macro level, and it cannot deal with the high noise in real-world graphs. We propose Proximity Divergence Minimization (PDM), a corruption-free single-view graph SSL approach that overcomes these problems by leveraging node proximity to measure connection strength and denoise the graph structure. Through extensive experiments, we show that PDM achieves up to 4.55% absolute improvement in ROC-AUC on graph SSL tasks over state-of-the-art approaches while being more memory efficient. Moreover, PDM even outperforms supervised training on the node classification task of the ogbn-proteins dataset. Our code is publicly available at https://github.com/tonyzhang617/pdm.
AB - Self-supervised learning (SSL) for graphs is an essential problem since graph data are ubiquitous and labeling can be costly. We argue that existing SSL approaches for graphs have two limitations. First, they rely on corruption techniques such as node attribute perturbation and edge dropping to generate graph views for contrastive learning. These unnatural corruption techniques require extensive tuning efforts and provide marginal improvements. Second, the current approaches require the computation of multiple graph views, which is inefficient in both memory and computation. These shortcomings of graph SSL call for a corruption-free single-view learning approach, but the strawman approach of using neighboring nodes as positive examples suffers from two problems: it ignores the strength of connections between nodes implied by the graph structure on a macro level, and it cannot deal with the high noise in real-world graphs. We propose Proximity Divergence Minimization (PDM), a corruption-free single-view graph SSL approach that overcomes these problems by leveraging node proximity to measure connection strength and denoise the graph structure. Through extensive experiments, we show that PDM achieves up to 4.55% absolute improvement in ROC-AUC on graph SSL tasks over state-of-the-art approaches while being more memory efficient. Moreover, PDM even outperforms supervised training on the node classification task of the ogbn-proteins dataset. Our code is publicly available at https://github.com/tonyzhang617/pdm.
UR - http://www.scopus.com/inward/record.url?scp=85170093429&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85170093429&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85170093429
VL - 216
SP - 2498
EP - 2508
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 39th Conference on Uncertainty in Artificial Intelligence, UAI 2023
Y2 - 31 July 2023 through 4 August 2023
ER -