TY - JOUR
T1 - Quasi-supervised MR-CT image conversion based on unpaired data
AU - Zhu, Ruiming
AU - Ruan, Yuhui
AU - Li, Mingrui
AU - Qian, Wei
AU - Yao, Yudong
AU - Teng, Yueyang
N1 - Publisher Copyright:
© 2025 Institute of Physics and Engineering in Medicine. All rights, including for text and data mining, AI training, and similar technologies, are reserved.
PY - 2025/6/22
Y1 - 2025/6/22
N2 - Objective. In radiotherapy planning, acquiring both magnetic resonance (MR) and computed tomography (CT) images is crucial for comprehensive evaluation and treatment. However, simultaneous acquisition of MR and CT images is time-consuming, economically expensive, and involves ionizing radiation, which poses health risks to patients. The objective of this study is to generate CT images from radiation-free MR images using a novel quasi-supervised learning framework. Approach. In this work, we propose a quasi-supervised framework to explore the underlying relationship between unpaired MR and CT images. Normalized mutual information (NMI) is employed as a similarity metric to evaluate the correspondence between MR and CT scans. To establish optimal pairings, we compute an NMI matrix across the training set and apply the Hungarian algorithm for global matching. The resulting MR-CT pairs, along with their NMI scores, are treated as prior knowledge and integrated into the training process to guide the MR-to-CT image translation model. Main results. Experimental results indicate that the proposed method significantly outperforms existing unsupervised image synthesis methods in terms of both image quality and consistency of image features during the MR to CT image conversion process. The generated CT images show a higher degree of accuracy and fidelity to the original MR images, ensuring better preservation of anatomical details and structural integrity. Significance. This study proposes a quasi-supervised framework that converts unpaired MR and CT images into structurally consistent pseudo-pairs, providing informative priors to enhance cross-modality image synthesis. This strategy not only improves the accuracy and reliability of MR-CT conversion, but also reduces reliance on costly and scarce paired datasets. The proposed framework offers a practical and scalable solution for real-world medical imaging applications, where paired annotations are often unavailable.
KW - deep learning
KW - MR-CT image conversion
KW - quasi-supervised learning
KW - unpaired data
UR - https://www.scopus.com/pages/publications/105009123555
U2 - 10.1088/1361-6560/ade220
DO - 10.1088/1361-6560/ade220
M3 - Article
C2 - 40480258
AN - SCOPUS:105009123555
SN - 0031-9155
VL - 70
JO - Physics in Medicine and Biology
JF - Physics in Medicine and Biology
IS - 12
M1 - 125010
ER -