Data compression for the exascale computing era - survey

Seung Woo Son, Zhengzhang Chen, William Hendrix, Ankit Agrawal, Wei Keng Liao, Alok Choudhary

Research output: Contribution to journal › Article › peer-review

73 Scopus citations

Abstract

While periodic checkpointing has been an important mechanism for tolerating faults in high-performance computing (HPC) systems, it becomes cost-prohibitive as HPC systems approach exascale. Applying compression is one common way to mitigate this burden by reducing the data size, but general-purpose compressors are often less effective for scientific datasets. Traditional lossless compression techniques that look for repeated patterns are ineffective for scientific data, which consists of high-precision values in which common patterns are rare. In this paper, we present a comparison of several lossless and lossy data compression algorithms and discuss their methodologies in the context of exascale computing. As data volumes increase, we observe a growing trend of domain-driven algorithms that exploit characteristics inherent in many scientific datasets, such as relatively small changes in data values from one simulation iteration to the next or among neighboring data points. In particular, significant data reduction has been observed with lossy compression. This paper also discusses how the errors introduced by lossy compression are controlled and the tradeoffs with compression ratio.
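The following is a minimal, self-contained Python sketch (not taken from the paper) of the two ideas summarized in the abstract: delta-encoding successive checkpoint snapshots so that the small iteration-to-iteration changes are what get compressed, and quantizing values against a user-chosen absolute error bound before lossless coding, a simple stand-in for error-bounded lossy compression. The array names, error bound, and use of zlib are illustrative only; real error-bounded compressors rely on far more sophisticated predictors and entropy coders than this toy.

    # Illustrative sketch only; not the paper's method.
    import zlib
    import numpy as np

    def lossless_delta_compress(prev_step: np.ndarray, curr_step: np.ndarray) -> bytes:
        """Compress the difference between two checkpoint snapshots losslessly."""
        delta = curr_step - prev_step  # values are small when the field evolves slowly
        return zlib.compress(delta.tobytes())

    def lossy_compress(curr_step: np.ndarray, abs_error: float = 1e-3) -> bytes:
        """Quantize to a fixed absolute error bound, then compress the integer codes."""
        # Reconstruction is codes * abs_error, so |x - reconstruction| <= abs_error / 2.
        codes = np.round(curr_step / abs_error).astype(np.int64)
        return zlib.compress(codes.tobytes())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        step0 = rng.normal(size=1_000_000)
        step1 = step0 + 1e-4 * rng.normal(size=step0.size)  # slowly varying field
        raw = step1.nbytes
        print("delta + lossless ratio:", raw / len(lossless_delta_compress(step0, step1)))
        print("error-bounded lossy ratio:", raw / len(lossy_compress(step1)))

Tightening the error bound (smaller abs_error) produces more distinct quantization codes and therefore a lower compression ratio, which is the error-versus-ratio tradeoff the paper discusses.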

Original language: English
Pages (from-to): 76-88
Number of pages: 13
Journal: Supercomputing Frontiers and Innovations
Volume: 1
Issue number: 2
DOIs
State: Published - 2014

Keywords

  • Checkpoint/restart
  • Data clustering
  • Error bound
  • Fault tolerance
  • Lossless/lossy compression
