TY - JOUR
T1 - CUDAMPF++
T2 - A Proactive Resource Exhaustion Scheme for Accelerating Homologous Sequence Search on CUDA-Enabled GPU
AU - Jiang, Hanyu
AU - Ganesan, Narayan
AU - Yao, Yu Dong
N1 - Publisher Copyright:
© 1990-2012 IEEE.
PY - 2018/10/1
Y1 - 2018/10/1
AB - Biological sequence alignment is an important research topic in bioinformatics and continues to attract significant effort. As biological data grow exponentially, however, most alignment methods face challenges due to their high computational cost. HMMER, a suite of bioinformatics tools, is widely used for the analysis of homologous protein and nucleotide sequences with high sensitivity, based on profile hidden Markov models (HMMs). Its latest version, HMMER3, introduces a heuristic pipeline to accelerate the alignment process, which is carried out on central processing units (CPUs) and highly optimized. Only a few acceleration results based on HMMER3 have been reported. In this paper, we propose a five-tiered parallel framework, CUDAMPF++, to accelerate the most computationally intensive stages of HMMER3's pipeline, multiple/single segment Viterbi (MSV/SSV), on a single graphics processing unit (GPU) without any loss of accuracy. As an architecture-aware design, the proposed framework aims to fully utilize hardware resources by exploiting finer-grained parallelism (multi-sequence alignment) than its predecessor (CUDAMPF). In addition, we propose a novel method that proactively sacrifices the L1 cache hit ratio (CHR) in exchange for improved performance and scalability. A comprehensive evaluation shows that the proposed framework outperforms all existing work and exhibits consistent performance regardless of variations in query models or sequence datasets. For MSV (SSV) kernels, the peak performance of CUDAMPF++ is 283.9 (471.7) GCUPS on a single K40 GPU, and speedups ranging from 1.8x (1.7x) to 168.3x (160.7x) are achieved over the CPU-based implementation (16 cores, 32 threads).
KW - CUDA
KW - GPU
KW - HMMER
KW - L1 cache
KW - MSV
KW - SIMD
KW - SSV
KW - hidden Markov model
KW - Viterbi algorithm
UR - http://www.scopus.com/inward/record.url?scp=85046355441&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046355441&partnerID=8YFLogxK
U2 - 10.1109/TPDS.2018.2830393
DO - 10.1109/TPDS.2018.2830393
M3 - Article
AN - SCOPUS:85046355441
SN - 1045-9219
VL - 29
SP - 2206
EP - 2222
JO - IEEE Transactions on Parallel and Distributed Systems
JF - IEEE Transactions on Parallel and Distributed Systems
IS - 10
M1 - 8350332
ER -