TY - GEN
T1 - A length adaptive algorithm-hardware co-design of transformer on FPGA through sparse attention and dynamic pipelining
AU - Peng, Hongwu
AU - Huang, Shaoyi
AU - Chen, Shiyang
AU - Li, Bingbing
AU - Geng, Tong
AU - Li, Ang
AU - Jiang, Weiwen
AU - Wen, Wujie
AU - Bi, Jinbo
AU - Liu, Hang
AU - Ding, Caiwen
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/7/10
Y1 - 2022/7/10
N2 - Transformers have been considered among the most important deep learning models since 2018, in part because they establish state-of-the-art (SOTA) records and could potentially replace existing Deep Neural Networks (DNNs). Despite these remarkable triumphs, the prolonged turnaround time of Transformer models is a widely recognized roadblock. The variety of sequence lengths imposes additional computing overhead, since inputs must be zero-padded to the maximum sentence length in the batch to accommodate parallel computing platforms. This paper targets the field-programmable gate array (FPGA) and proposes a coherent sequence-length-adaptive algorithm-hardware co-design for Transformer acceleration. In particular, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm. The proposed sparse attention operator reduces attention-based models to linear complexity and alleviates off-chip memory traffic. The proposed length-aware hardware resource scheduling algorithm dynamically allocates hardware resources to fill pipeline slots and eliminate bubbles in NLP tasks. Experiments show that our design incurs very small accuracy loss, achieves 80.2× and 2.6× speedups over CPU and GPU implementations, respectively, and delivers 4× higher energy efficiency than a state-of-the-art GPU accelerator optimized via cuBLAS GEMM.
AB - Transformers have been considered among the most important deep learning models since 2018, in part because they establish state-of-the-art (SOTA) records and could potentially replace existing Deep Neural Networks (DNNs). Despite these remarkable triumphs, the prolonged turnaround time of Transformer models is a widely recognized roadblock. The variety of sequence lengths imposes additional computing overhead, since inputs must be zero-padded to the maximum sentence length in the batch to accommodate parallel computing platforms. This paper targets the field-programmable gate array (FPGA) and proposes a coherent sequence-length-adaptive algorithm-hardware co-design for Transformer acceleration. In particular, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm. The proposed sparse attention operator reduces attention-based models to linear complexity and alleviates off-chip memory traffic. The proposed length-aware hardware resource scheduling algorithm dynamically allocates hardware resources to fill pipeline slots and eliminate bubbles in NLP tasks. Experiments show that our design incurs very small accuracy loss, achieves 80.2× and 2.6× speedups over CPU and GPU implementations, respectively, and delivers 4× higher energy efficiency than a state-of-the-art GPU accelerator optimized via cuBLAS GEMM.
KW - attention
KW - BERT
KW - FPGA
KW - length adaptive
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85137420072&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137420072&partnerID=8YFLogxK
U2 - 10.1145/3489517.3530585
DO - 10.1145/3489517.3530585
M3 - Conference contribution
AN - SCOPUS:85137420072
T3 - Proceedings - Design Automation Conference
SP - 1135
EP - 1140
BT - Proceedings of the 59th ACM/IEEE Design Automation Conference, DAC 2022
T2 - 59th ACM/IEEE Design Automation Conference, DAC 2022
Y2 - 10 July 2022 through 14 July 2022
ER -