TY - GEN
T1 - Localized Motion Artifact Reduction on Brain MRI Using Deep Learning with Effective Data Augmentation Techniques
AU - Zhao, Yijun
AU - Ossowski, Jacek
AU - Wang, Xuming
AU - Li, Shangjin
AU - Devinsky, Orrin
AU - Martin, Samantha P.
AU - Pardoe, Heath R.
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/18
Y1 - 2021/7/18
AB - In-scanner motion degrades the quality of magnetic resonance imaging (MRI), thereby reducing its utility in the detection of clinically relevant abnormalities. We collaborate with doctors from NYU Langone's Comprehensive Epilepsy Center and apply a deep learning-based MRI artifact reduction model (DMAR) to correct head motion artifacts in brain MRI scans. Specifically, DMAR employs a two-stage approach: in the first, degraded regions are detected using the Single Shot MultiBox Detector (SSD), and in the second, the artifacts within the detected regions are reduced using a convolutional autoencoder (CAE). We further introduce a set of novel data augmentation techniques to address the high dimensionality of MRI images and the scarcity of available data. As a result, our model was trained on a large synthetic dataset of 225,000 images generated from 375 whole-brain T1-weighted MRI scans in the OASIS-1 dataset. DMAR visibly reduces image artifacts when validated on real-world artifact-affected scans from the multi-center ABIDE study and proprietary data collected at NYU. Quantitatively, depending on the level of degradation, our model achieves a 27.8%-48.1% reduction in RMSE and a 2.88-5.79 dB gain in PSNR on a 5000-sample set of synthetic images. For real-world data without ground truth, our model reduced the variance of image voxel intensity within artifact-affected brain regions (p = 0.014).
KW - MRI
KW - deep learning
KW - k-space
KW - motion artifact reduction
KW - object detection
UR - http://www.scopus.com/inward/record.url?scp=85116439408&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85116439408&partnerID=8YFLogxK
U2 - 10.1109/IJCNN52387.2021.9534191
DO - 10.1109/IJCNN52387.2021.9534191
M3 - Conference contribution
AN - SCOPUS:85116439408
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
T2 - 2021 International Joint Conference on Neural Networks, IJCNN 2021
Y2 - 18 July 2021 through 22 July 2021
ER -