TY - JOUR
T1 - Is the aspect ratio of cells important in deep learning? A robust comparison of deep learning methods for multi-scale cytopathology cell image classification
T2 - From convolutional neural networks to visual transformers
AU - Liu, Wanli
AU - Li, Chen
AU - Rahaman, Md Mamunur
AU - Jiang, Tao
AU - Sun, Hongzan
AU - Wu, Xiangchen
AU - Hu, Weiming
AU - Chen, Haoyuan
AU - Sun, Changhao
AU - Yao, Yudong
AU - Grzegorzek, Marcin
N1 - Publisher Copyright:
© 2021
PY - 2022/2
Y1 - 2022/2
N2 - Cervical cancer is a very common and fatal type of cancer in women. Cytopathology images are often used to screen for this cancer. Because many errors can occur during manual screening, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require input images of a fixed dimension, but the dimensions of clinical medical images are inconsistent, and resizing the images directly distorts their aspect ratios. Clinically, the aspect ratios of cells in cytopathological images provide important information for diagnosing cancer, so direct resizing is problematic. Nevertheless, many existing studies have resized images directly and still obtained highly robust classification results. To find a reasonable explanation, we conduct a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are pre-processed to obtain standard and scaled datasets. Then, the datasets are resized to 224 × 224 pixels. Finally, 22 deep learning models are used to classify the standard and scaled datasets. The results indicate that deep learning models are robust to changes in the aspect ratio of cells in cervical cytopathological images. This conclusion is also validated on the Herlev dataset.
AB - Cervical cancer is a very common and fatal type of cancer in women. Cytopathology images are often used to screen for this cancer. Because many errors can occur during manual screening, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require input images of a fixed dimension, but the dimensions of clinical medical images are inconsistent, and resizing the images directly distorts their aspect ratios. Clinically, the aspect ratios of cells in cytopathological images provide important information for diagnosing cancer, so direct resizing is problematic. Nevertheless, many existing studies have resized images directly and still obtained highly robust classification results. To find a reasonable explanation, we conduct a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are pre-processed to obtain standard and scaled datasets. Then, the datasets are resized to 224 × 224 pixels. Finally, 22 deep learning models are used to classify the standard and scaled datasets. The results indicate that deep learning models are robust to changes in the aspect ratio of cells in cervical cytopathological images. This conclusion is also validated on the Herlev dataset.
KW - Aspect ratio of cells
KW - Cervical cancer
KW - Deep learning
KW - Pap smear
KW - Robustness comparison
KW - Visual transformer
UR - http://www.scopus.com/inward/record.url?scp=85119963488&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85119963488&partnerID=8YFLogxK
U2 - 10.1016/j.compbiomed.2021.105026
DO - 10.1016/j.compbiomed.2021.105026
M3 - Article
C2 - 34801245
AN - SCOPUS:85119963488
SN - 0010-4825
VL - 141
JO - Computers in Biology and Medicine
JF - Computers in Biology and Medicine
M1 - 105026
ER -