TY - GEN
T1 - Lightweight Huffman Coding for Efficient GPU Compression
AU - Shah, Milan
AU - Yu, Xiaodong
AU - Di, Sheng
AU - Becchi, Michela
AU - Cappello, Franck
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/6/21
Y1 - 2023/6/21
N2 - Lossy compression is often deployed in scientific applications to reduce data footprint and improve data transfer and I/O performance. Especially for applications requiring on-the-fly compression, it is essential to minimize compression's runtime. In this paper, we design a scheme to improve the performance of cuSZ, a GPU-based lossy compressor. We observe that Huffman coding, used by cuSZ to compress metadata generated during compression, incurs a performance overhead that can be significant, especially for smaller datasets. Our work seeks to reduce the Huffman coding runtime with minimal-to-no impact on cuSZ's compression efficiency. Our contributions are as follows. First, we examine a variety of probability distributions to determine which distributions closely model the input to cuSZ's Huffman coding stage. From these distributions, we create a dictionary of pre-computed codebooks such that during compression, a codebook is selected from the dictionary instead of computing a custom codebook. Second, we explore three codebook selection criteria to be applied at runtime. Finally, we evaluate our scheme on real-world datasets and in the context of two important application use cases, HDF5 and MPI, using an NVIDIA A100 GPU. Our evaluation shows that our method can reduce the Huffman coding penalty by a factor of 78-92×, translating to a total speedup of up to 5× over baseline cuSZ. Smaller HDF5 chunk sizes enjoy over an 8× speedup in compression, and MPI messages on the scale of tens of MB have a 1.4-30.5× speedup in communication time.
AB - Lossy compression is often deployed in scientific applications to reduce data footprint and improve data transfer and I/O performance. Especially for applications requiring on-the-fly compression, it is essential to minimize compression's runtime. In this paper, we design a scheme to improve the performance of cuSZ, a GPU-based lossy compressor. We observe that Huffman coding, used by cuSZ to compress metadata generated during compression, incurs a performance overhead that can be significant, especially for smaller datasets. Our work seeks to reduce the Huffman coding runtime with minimal-to-no impact on cuSZ's compression efficiency. Our contributions are as follows. First, we examine a variety of probability distributions to determine which distributions closely model the input to cuSZ's Huffman coding stage. From these distributions, we create a dictionary of pre-computed codebooks such that during compression, a codebook is selected from the dictionary instead of computing a custom codebook. Second, we explore three codebook selection criteria to be applied at runtime. Finally, we evaluate our scheme on real-world datasets and in the context of two important application use cases, HDF5 and MPI, using an NVIDIA A100 GPU. Our evaluation shows that our method can reduce the Huffman coding penalty by a factor of 78-92×, translating to a total speedup of up to 5× over baseline cuSZ. Smaller HDF5 chunk sizes enjoy over an 8× speedup in compression, and MPI messages on the scale of tens of MB have a 1.4-30.5× speedup in communication time.
KW - GPU
KW - Huffman coding
KW - compression
UR - http://www.scopus.com/inward/record.url?scp=85168423349&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85168423349&partnerID=8YFLogxK
U2 - 10.1145/3577193.3593736
DO - 10.1145/3577193.3593736
M3 - Conference contribution
AN - SCOPUS:85168423349
T3 - Proceedings of the International Conference on Supercomputing
SP - 99
EP - 110
BT - ACM ICS 2023 - Proceedings of the International Conference on Supercomputing
T2 - 37th ACM International Conference on Supercomputing, ICS 2023
Y2 - 21 June 2023 through 23 June 2023
ER -