ISGAN: Unsupervised Domain Adaptation with Improved Symmetric GAN for Cross-Modality Multi-organ Segmentation

Jiapeng Li, Yifan Zhang, Lisheng Xu, Yudong Yao, Lin Qi

Research output: Contribution to journal › Article › peer-review

Abstract

Differences between cross-modality medical images are significant, motivating work on unsupervised domain adaptation (UDA) segmentation, which adapts a segmentation model trained on a labeled source domain to an unlabeled target domain. The conventional UDA segmentation strategy integrates image generation and segmentation. However, conventional image generation modules consider information from only a single domain (source or target), resulting in visual inconsistencies. The image generation module may also lack anatomical constraints, leading to incorrect pseudo-label generation. To address these issues, we propose an improved symmetric generative adversarial network (ISGAN). Unlike conventional approaches that perform domain adaptation in only the source or target domain, ISGAN adopts a symmetric architecture with two-path domain adaptation to reduce visual differences. In addition, ISGAN adopts a bidirectional training strategy to jointly optimize the image generation and segmentation modules; this strategy introduces anatomical constraints into the image generation module, thereby reducing the generation of incorrect pseudo labels. Finally, we validate ISGAN on two cross-modality datasets (the MMWHS cardiac dataset and the Abdomen dataset). ISGAN delivers promising segmentation and generalization performance compared with state-of-the-art UDA methods.
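The paper's network details are not given in this abstract, but the kind of objective it describes — two symmetric translation paths, each with adversarial and cycle-consistency terms, plus a segmentation (Dice) term that injects anatomical constraints into image generation — can be sketched with a toy, CycleGAN-style loss composition. All function names and weights below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def adversarial_loss(d_fake):
    """Least-squares generator loss: push D(fake) toward 1 ("real")."""
    return float(np.mean((d_fake - 1.0) ** 2))

def cycle_loss(x, x_rec):
    """L1 cycle-consistency between an image and its round-trip reconstruction."""
    return float(np.mean(np.abs(x - x_rec)))

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss, standing in for the anatomical (shape) constraint."""
    inter = np.sum(pred * target)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))

def symmetric_objective(src, src_rec, tgt, tgt_rec,
                        d_fake_tgt, d_fake_src,
                        seg_pred, seg_label,
                        lam_cyc=10.0, lam_seg=1.0):
    """Two-path objective: source->target and target->source each contribute
    an adversarial plus a cycle term; a Dice term couples the generators to
    the segmentation module (the 'bidirectional' anatomical constraint)."""
    path_s2t = adversarial_loss(d_fake_tgt) + lam_cyc * cycle_loss(src, src_rec)
    path_t2s = adversarial_loss(d_fake_src) + lam_cyc * cycle_loss(tgt, tgt_rec)
    return path_s2t + path_t2s + lam_seg * dice_loss(seg_pred, seg_label)
```

With perfect reconstructions, a fully fooled discriminator, and segmentation predictions matching the labels, the objective collapses to zero, which is the sanity check one would expect of such a composition.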

Original language: English
Journal: IEEE Journal of Biomedical and Health Informatics
DOIs
State: Accepted/In press - 2024

Keywords

  • bidirectional training
  • cross-modality segmentation
  • generative adversarial networks
  • symmetric architecture
  • unsupervised domain adaptation
