TY - JOUR
T1 - Fully Automated Postlumpectomy Breast Margin Assessment Utilizing Convolutional Neural Network Based Optical Coherence Tomography Image Classification Method
AU - Mojahed, Diana
AU - Ha, Richard S.
AU - Chang, Peter
AU - Gan, Yu
AU - Yao, Xinwen
AU - Angelini, Brigid
AU - Hibshoosh, Hanina
AU - Taback, Bret
AU - Hendon, Christine P.
N1 - Publisher Copyright:
© 2019 The Association of University Radiologists
PY - 2020/5
Y1 - 2020/5
N2 - Background: The purpose of this study was to develop a deep learning classification approach to distinguish cancerous from noncancerous regions within optical coherence tomography (OCT) images of breast tissue for potential use in an intraoperative setting for margin assessment. Methods: A custom ultrahigh-resolution OCT (UHR-OCT) system with an axial resolution of 2.7 μm and a lateral resolution of 5.5 μm was used in this study. The algorithm used an A-scan-based classification scheme, and the convolutional neural network (CNN) was implemented using an 11-layer architecture consisting of serial 3 × 3 convolution kernels. Four tissue types were classified: adipose, stroma, ductal carcinoma in situ, and invasive ductal carcinoma. Results: The binary classification of cancer versus noncancer with the proposed CNN achieved 94% accuracy, 96% sensitivity, and 92% specificity. The mean five-fold validation F1 score was highest for invasive ductal carcinoma (mean ± standard deviation, 0.89 ± 0.09) and adipose (0.79 ± 0.17), followed by stroma (0.74 ± 0.18) and ductal carcinoma in situ (0.65 ± 0.15). Conclusion: It is feasible to use a CNN-based algorithm to accurately distinguish cancerous regions in OCT images. This fully automated method can overcome limitations of manual interpretation, including interobserver variability and speed of interpretation, and may enable real-time intraoperative margin assessment.
AB - Background: The purpose of this study was to develop a deep learning classification approach to distinguish cancerous from noncancerous regions within optical coherence tomography (OCT) images of breast tissue for potential use in an intraoperative setting for margin assessment. Methods: A custom ultrahigh-resolution OCT (UHR-OCT) system with an axial resolution of 2.7 μm and a lateral resolution of 5.5 μm was used in this study. The algorithm used an A-scan-based classification scheme, and the convolutional neural network (CNN) was implemented using an 11-layer architecture consisting of serial 3 × 3 convolution kernels. Four tissue types were classified: adipose, stroma, ductal carcinoma in situ, and invasive ductal carcinoma. Results: The binary classification of cancer versus noncancer with the proposed CNN achieved 94% accuracy, 96% sensitivity, and 92% specificity. The mean five-fold validation F1 score was highest for invasive ductal carcinoma (mean ± standard deviation, 0.89 ± 0.09) and adipose (0.79 ± 0.17), followed by stroma (0.74 ± 0.18) and ductal carcinoma in situ (0.65 ± 0.15). Conclusion: It is feasible to use a CNN-based algorithm to accurately distinguish cancerous regions in OCT images. This fully automated method can overcome limitations of manual interpretation, including interobserver variability and speed of interpretation, and may enable real-time intraoperative margin assessment.
KW - CNN
KW - Lumpectomy
KW - OCT
UR - http://www.scopus.com/inward/record.url?scp=85068935955&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85068935955&partnerID=8YFLogxK
U2 - 10.1016/j.acra.2019.06.018
DO - 10.1016/j.acra.2019.06.018
M3 - Article
C2 - 31324579
AN - SCOPUS:85068935955
SN - 1076-6332
VL - 27
SP - e81-e86
JO - Academic Radiology
JF - Academic Radiology
IS - 5
ER -