TSP-UDANet: two-stage progressive unsupervised domain adaptation network for automated cross-modality cardiac segmentation

Yonghui Wang, Yifan Zhang, Lisheng Xu, Shouliang Qi, Yudong Yao, Wei Qian, Stephen E. Greenwald, Lin Qi

Research output: Contribution to journal › Article › peer-review


Abstract

Accurate segmentation of cardiac anatomy is a prerequisite for the diagnosis of cardiovascular disease. However, differences between imaging modalities and imaging devices, known as domain shift, make the segmentation performance of deep learning models unreliable. In this paper, we propose a two-stage progressive unsupervised domain adaptation network (TSP-UDANet) to reduce domain shift when segmenting cardiac images from various sources. We alleviate the shift between the feature distributions of the source and target domains by introducing an intermediate domain as a bridge. The TSP-UDANet consists of three sub-networks: a style transfer sub-network, a segmentation sub-network, and a self-training sub-network. We cooperatively align the domains at the image, feature, and output levels. Specifically, we transform the appearance of images across domains and enhance domain invariance through adversarial learning at multiple levels to achieve unsupervised segmentation of the target modality. We validate the TSP-UDANet on the MMWHS (unpaired MRI and CT images), MS-CMRSeg (cross-modality MRI images), and M&Ms (cross-vendor MRI images) datasets. The experimental results demonstrate excellent segmentation performance and generalizability on unlabeled target-modality images.
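The abstract describes adversarial alignment at the image, feature, and output levels. As an illustration of one of these, the sketch below shows output-level adversarial alignment in the style commonly used for unsupervised domain adaptation in segmentation: a discriminator tries to tell source-domain prediction maps from target-domain ones, while the segmenter is trained to fool it. This is not the authors' implementation; `SegNet`, `Discriminator`, the loss weight `lambda_adv`, and the toy tensor shapes are all illustrative assumptions.

```python
# Minimal sketch of output-level adversarial alignment (assumed, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNet(nn.Module):
    """Toy encoder-decoder segmenter standing in for the segmentation sub-network."""
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(16, n_classes, 1)
    def forward(self, x):
        return self.dec(self.enc(x))  # per-pixel class logits

class Discriminator(nn.Module):
    """Patch discriminator over softmax segmentation maps (output-level alignment)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, p):
        return self.net(p)

seg, disc = SegNet(), Discriminator()
opt_seg = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x_src, y_src, x_tgt, lambda_adv=0.01):
    # 1) Supervised segmentation loss on the labeled source domain.
    logits_src = seg(x_src)
    loss_sup = F.cross_entropy(logits_src, y_src)
    # 2) Adversarial loss: make target prediction maps look source-like.
    p_tgt = F.softmax(seg(x_tgt), dim=1)
    d_tgt = disc(p_tgt)
    loss_adv = bce(d_tgt, torch.ones_like(d_tgt))  # segmenter fools the discriminator
    opt_seg.zero_grad()
    (loss_sup + lambda_adv * loss_adv).backward()
    opt_seg.step()
    # 3) Discriminator update: distinguish source vs. target prediction maps.
    p_src = F.softmax(seg(x_src), dim=1).detach()
    d_src, d_tgt = disc(p_src), disc(p_tgt.detach())
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()
    return loss_sup.item(), loss_adv.item(), loss_d.item()

# Smoke test with random tensors in place of real MRI/CT slices.
x_s = torch.randn(2, 1, 64, 64); y_s = torch.randint(0, 4, (2, 64, 64))
x_t = torch.randn(2, 1, 64, 64)
print(train_step(x_s, y_s, x_t))
```

In the paper's two-stage setting, such alignment would be applied progressively (source to intermediate domain, then intermediate to target), with the self-training sub-network refining pseudo-labels on the target modality; the single-stage loop above only sketches the adversarial component.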

Original language: English
Pages (from-to): 22189-22207
Number of pages: 19
Journal: Neural Computing and Applications
Volume: 35
Issue number: 30
DOI:
State: Published - Oct 2023

Keywords

  • Cardiac segmentation
  • Cross-modality learning
  • Intermediate domain
  • Unsupervised domain adaptation

