TY - GEN
T1 - Co-Exploration of Graph Neural Network and Network-on-Chip Design Using AutoML
AU - Manu, Daniel
AU - Huang, Shaoyi
AU - Ding, Caiwen
AU - Yang, Lei
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/6/22
Y1 - 2021/6/22
N2 - Recently, Graph Neural Networks (GNNs) have exhibited high efficiency in several graph-based machine learning tasks. Compared with neural networks for computer vision or speech tasks (e.g., Convolutional Neural Networks), GNNs place much higher demands on communication due to their complicated graph structures; however, when GNNs are applied to real-world applications, say in recommender systems (e.g., Uber Eats), they commonly face real-time requirements. To deal with the tradeoff between the complicated architecture and the demanding timing performance, both the GNN architecture and the hardware accelerator need to be optimized. Network-on-Chip (NoC), designed to efficiently manage high volumes of communication, naturally becomes one of the top candidates to accelerate GNNs. However, there is a missing link between the optimization of the GNN architecture and the NoC design. In this work, we present an AutoML-based framework, GN-NAS, which searches for the optimal GNN architecture suited to the NoC accelerator. We devise a robust reinforcement learning based controller to validate the retained best GNN architectures, coupled with a parameter sharing approach, namely ParamShare, to improve search efficiency. Experimental results on four graph-based benchmark datasets (Cora, Citeseer, Pubmed, and Protein-Protein Interaction) show that the GNN architectures obtained by our framework outperform state-of-the-art and baseline models while reducing model size, which makes them easy to deploy onto the NoC platform.
AB - Recently, Graph Neural Networks (GNNs) have exhibited high efficiency in several graph-based machine learning tasks. Compared with neural networks for computer vision or speech tasks (e.g., Convolutional Neural Networks), GNNs place much higher demands on communication due to their complicated graph structures; however, when GNNs are applied to real-world applications, say in recommender systems (e.g., Uber Eats), they commonly face real-time requirements. To deal with the tradeoff between the complicated architecture and the demanding timing performance, both the GNN architecture and the hardware accelerator need to be optimized. Network-on-Chip (NoC), designed to efficiently manage high volumes of communication, naturally becomes one of the top candidates to accelerate GNNs. However, there is a missing link between the optimization of the GNN architecture and the NoC design. In this work, we present an AutoML-based framework, GN-NAS, which searches for the optimal GNN architecture suited to the NoC accelerator. We devise a robust reinforcement learning based controller to validate the retained best GNN architectures, coupled with a parameter sharing approach, namely ParamShare, to improve search efficiency. Experimental results on four graph-based benchmark datasets (Cora, Citeseer, Pubmed, and Protein-Protein Interaction) show that the GNN architectures obtained by our framework outperform state-of-the-art and baseline models while reducing model size, which makes them easy to deploy onto the NoC platform.
KW - automl
KW - graph neural network
KW - network-on-chip
UR - http://www.scopus.com/inward/record.url?scp=85109209290&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85109209290&partnerID=8YFLogxK
U2 - 10.1145/3453688.3461741
DO - 10.1145/3453688.3461741
M3 - Conference contribution
AN - SCOPUS:85109209290
T3 - Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI
SP - 175
EP - 180
BT - GLSVLSI 2021 - Proceedings of the 2021 Great Lakes Symposium on VLSI
T2 - 31st Great Lakes Symposium on VLSI, GLSVLSI 2021
Y2 - 22 June 2021 through 25 June 2021
ER -