DR-CircuitGNN: Training Acceleration of Heterogeneous Circuit Graph Neural Network on GPUs

  • Yuebo Luo
  • Shiyang Li
  • Junran Tao
  • Kiran Gautam Thorat
  • Xi Xie
  • Hongwu Peng
  • Nuo Xu
  • Caiwen Ding
  • Shaoyi Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The increasing scale and complexity of integrated circuit design have led to increased challenges in Electronic Design Automation (EDA). Graph Neural Networks (GNNs) have emerged as a promising approach to assist EDA design, as circuits can be naturally represented as graphs. While GNNs offer a foundation for circuit analysis, they often fail to capture the full complexity of EDA designs. Heterogeneous Graph Neural Networks (HGNNs) can better interpret EDA circuit graphs as they capture both topological relationships and geometric features. However, this improved representation capability comes at the cost of even higher computational complexity and processing cost due to their serial module-wise message-passing scheme, creating a significant performance bottleneck. In this paper, we propose DR-CircuitGNN, a fast GPU kernel design that leverages row-wise sparsity-aware Dynamic-ReLU and optimizes SpMM kernels during heterogeneous message-passing to accelerate HGNN training on EDA-related circuit graph datasets. To further enhance performance, we propose a parallel optimization strategy that maximizes CPU-GPU concurrency by processing independent subgraphs concurrently, using multi-threaded CPU initialization and GPU kernel execution via multiple cudaStreams. Our experiments show that on three representative CircuitNet designs (small, medium, large), the proposed method achieves up to 3.51× and 4.09× speedup over the SOTA for forward and backward propagation, respectively. On full-size CircuitNet and sampled Mini-CircuitNet, our parallel design enables up to 2.71× speedup over the official DGL implementation backed by cuSPARSE, with negligible impact on correlation scores and error rates.
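The core kernel idea the abstract describes, exploiting row-wise sparsity introduced by a Dynamic-ReLU-style activation so that SpMM skips fully zeroed feature rows, can be illustrated with a minimal sketch. This is not the paper's CUDA kernel; it is a plain-Python CSR sketch under the assumption that the activation zeroes entire rows of the dense feature matrix, and all names here are illustrative:

```python
# Minimal sketch (not the paper's kernel): row-wise sparsity-aware SpMM.
# Assumption: a Dynamic-ReLU-style activation has zeroed entire rows of the
# dense feature matrix, recorded in `active_rows`. The SpMM then skips any
# multiply-accumulate whose dense operand row is inactive.

def spmm_row_sparse(indptr, indices, data, dense, active_rows):
    """Compute out = A @ dense, with A in CSR form (indptr/indices/data).

    Products involving dense rows flagged inactive (all-zero after the
    activation) are skipped entirely, saving both loads and FLOPs.
    """
    n_rows = len(indptr) - 1
    n_feats = len(dense[0])
    out = [[0.0] * n_feats for _ in range(n_rows)]
    for i in range(n_rows):
        for p in range(indptr[i], indptr[i + 1]):
            j = indices[p]
            if not active_rows[j]:  # dense row j is all zeros: skip it
                continue
            a = data[p]
            row = dense[j]
            for k in range(n_feats):
                out[i][k] += a * row[k]
    return out

# Toy example: 3x3 sparse adjacency; dense row 1 was zeroed by the activation.
indptr = [0, 2, 3, 4]
indices = [0, 1, 1, 2]
data = [1.0, 2.0, 3.0, 4.0]
dense = [[1.0, 2.0], [0.0, 0.0], [5.0, 6.0]]
active = [True, False, True]  # row-wise mask from the ReLU-like activation

result = spmm_row_sparse(indptr, indices, data, dense, active)
print(result)  # row 1's contributions were skipped without changing the result
```

On a GPU this row-skipping would be fused into the SpMM kernel itself rather than expressed as a Python branch; the point of the sketch is only that rows zeroed by the activation contribute nothing, so skipping them changes cost but not output.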

Original language: English
Title of host publication: ACM ICS 2025 - Proceedings of the 39th ACM International Conference on Supercomputing
Pages: 221-235
Number of pages: 15
ISBN (Electronic): 9798400715372
DOIs
State: Published - 22 Aug 2025
Event: 39th ACM International Conference on Supercomputing, ICS 2025 - Salt Lake City, United States
Duration: 8 Jun 2025 – 11 Jun 2025

Publication series

Name: Proceedings of the International Conference on Supercomputing
Volume: Part of 213821

Conference

Conference: 39th ACM International Conference on Supercomputing, ICS 2025
Country/Territory: United States
City: Salt Lake City
Period: 8/06/25 – 11/06/25

Keywords

  • congestion prediction
  • Electronic Design Automation
  • Heterogeneous Graph Neural Network
  • Sparse Matrix Multiplication kernels
