TY - GEN
T1 - ZEN
T2 - 19th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2025
AU - Wang, Zhuang
AU - Xu, Zhaozhuo
AU - Xi, Jingyi
AU - Wang, Yuke
AU - Shrivastava, Anshumali
AU - Ng, T. S. Eugene
N1 - Publisher Copyright:
© 2025 by The USENIX Association. All rights reserved.
PY - 2025
Y1 - 2025
AB - Distributed training is the de facto standard for scaling deep learning model training across multiple GPUs. Its performance bottleneck lies in the communication required for gradient synchronization. Although high tensor sparsity is widely observed, an optimal communication scheme that fully leverages this sparsity is still missing. This paper aims to bridge that gap. We first analyze the characteristics of sparse tensors in popular models to understand the fundamentals of sparsity. We then systematically explore the design space of communication schemes for sparse tensors and identify the optimal ones. These findings provide a new understanding of sparse communication and inspire us to develop ZEN, a holistic gradient synchronization system for sparse tensors. We demonstrate that ZEN achieves up to 5.09× speedup in communication time and up to 2.48× speedup in training throughput compared to state-of-the-art methods.
UR - https://www.scopus.com/pages/publications/105011596662
M3 - Conference contribution
AN - SCOPUS:105011596662
T3 - Proceedings of the 19th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2025
SP - 537
EP - 556
BT - Proceedings of the 19th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2025
Y2 - 7 July 2025 through 9 July 2025
ER -
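
The abstract's central claim, that exploiting tensor sparsity shrinks synchronization traffic, can be illustrated with a minimal, generic sketch appended after the record. This is not ZEN's actual communication scheme (the paper itself is the authority there); it only shows why shipping (index, value) pairs for the nonzeros beats dense transfer at high sparsity. The 99% sparsity figure and all variable names are illustrative assumptions, not values from the paper.

import numpy as np

# Illustrative only: a 1M-element gradient at an assumed 99% sparsity,
# standing in for the "high tensor sparsity" the abstract describes.
rng = np.random.default_rng(0)
grad = np.zeros(1_000_000, dtype=np.float32)
nonzero_positions = rng.choice(grad.size, size=10_000, replace=False)
grad[nonzero_positions] = rng.standard_normal(10_000).astype(np.float32)

# Dense synchronization (e.g., a plain allreduce) ships every element.
dense_bytes = grad.nbytes

# A sparse scheme ships only (index, value) pairs for the nonzeros.
idx = np.flatnonzero(grad).astype(np.int32)
val = grad[idx]
sparse_bytes = idx.nbytes + val.nbytes

print(f"dense:  {dense_bytes / 1e6:.2f} MB")
print(f"sparse: {sparse_bytes / 1e6:.2f} MB "
      f"({dense_bytes / sparse_bytes:.0f}x less traffic)")

At this assumed sparsity the payload drops by roughly 50x (4 bytes of index plus 4 bytes of value per nonzero, versus 4 bytes for every element), which is the kind of headroom a sparsity-aware synchronization system such as ZEN targets.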