Compression Error Sensitivity Analysis for Different Experts in MoE Model Inference

  • Songkai Ma
  • Zhaorui Zhang
  • Sheng Di
  • Benben Liu
  • Xiaodong Yu
  • Xiaoyi Lu
  • Dan Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

With the widespread adoption of Mixture of Experts (MoE) models for large language model (LLM) inference, efficiently serving MoE models under limited GPU memory has emerged as a significant challenge. Offloading non-activated experts to main memory is an effective way to address this problem, but it introduces the overhead of transferring experts between GPU memory and main memory. This motivates an efficient approach to compressing experts, together with an analysis of how the compression error affects inference performance. To bridge this gap, we propose employing error-bounded lossy compression algorithms (such as SZ3 and cuSZp) to compress non-activated experts, thereby reducing data transfer overhead during MoE inference. We conduct extensive experiments across various benchmarks and present a comprehensive analysis of how compression-induced errors in different experts affect overall inference accuracy. The results indicate that experts in the shallow layers exhibit minimal degradation in inference accuracy when subjected to bounded errors. In contrast, errors in the middle-layer experts, which are central to model reasoning, significantly impair inference accuracy. Interestingly, introducing bounded errors in the deep-layer experts, which are mainly responsible for instruction following and output integration, can sometimes improve inference accuracy.
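To make the error-injection setup concrete, the sketch below illustrates the core idea from the abstract: perturbing an expert's weights under a pointwise absolute error bound eps, as an error-bounded lossy compressor such as SZ3 or cuSZp would guarantee. This is a minimal stand-in, not the paper's implementation: a uniform quantizer replaces the actual SZ3/cuSZp APIs, and the matrix shape and the name `lossy_roundtrip` are illustrative assumptions only.

```python
# Minimal sketch: a uniform quantizer stands in for SZ3/cuSZp, since it
# provides the same pointwise guarantee |w - w_hat| <= eps that
# error-bounded lossy compressors enforce.
import numpy as np

def lossy_roundtrip(weights: np.ndarray, eps: float) -> np.ndarray:
    """Simulate an error-bounded lossy compress/decompress cycle.

    Snapping each weight to a uniform grid with spacing 2*eps bounds the
    reconstruction error of every element by eps.
    """
    step = 2.0 * eps
    return (np.round(weights / step) * step).astype(weights.dtype)

# Inject bounded error into one (hypothetical) expert weight matrix and
# check the bound empirically.
rng = np.random.default_rng(0)
expert = rng.standard_normal((1024, 4096)).astype(np.float32)
eps = 1e-3
recovered = lossy_roundtrip(expert, eps)
print(f"max abs error: {np.max(np.abs(expert - recovered)):.2e}  (eps = {eps:.0e})")
```

Applying such a round-trip to experts at different depths, and then running the usual benchmarks, reproduces the kind of layer-wise sensitivity comparison the abstract describes.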

Original language: English
Title of host publication: Proceedings of 2025 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, SC 2025 Workshops
Pages: 339-348
Number of pages: 10
ISBN (Electronic): 9798400718717
DOIs
State: Published - 15 Nov 2025
Event: 2025 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, SC 2025 Workshops - St. Louis, United States
Duration: 16 Nov 2025 → 21 Nov 2025

Publication series

Name: Proceedings of 2025 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, SC 2025 Workshops

Conference

Conference: 2025 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, SC 2025 Workshops
Country/Territory: United States
City: St. Louis
Period: 16/11/25 → 21/11/25

Keywords

  • Error Sensitivity
  • Inference
  • Mixture of Experts
  • Model Compression
