TY - GEN
T1 - An Architecture to Support Graduated Levels of Trust for Cancer Diagnosis with AI
AU - Rezaeian, Olya
AU - Bayrak, Alparslan Emrah
AU - Asan, Onur
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
AB - Our research addresses the critical challenge of building trust in Artificial Intelligence (AI) for Clinical Decision Support Systems (CDSS), focusing on breast cancer diagnosis. Clinicians find it difficult to trust AI-generated recommendations that lack explanations, especially when diagnosing life-threatening diseases such as breast cancer. To tackle this, we propose a dual-stage AI model that combines a U-Net architecture for image segmentation with a Convolutional Neural Network (CNN) for cancer prediction. The model operates on breast cancer tissue images and introduces four levels of explainability: basic classification, probability distribution, tumor localization, and advanced tumor localization with varying confidence levels. These levels provide increasing detail about diagnostic suggestions and allow us to study how different explanation types affect clinicians’ trust in the AI system. Our methodology encompasses the development of these explanation mechanisms and their application in experimental settings to evaluate their impact on clinician trust in AI. This work seeks to bridge the gap between AI capabilities and clinician acceptance by improving the transparency and usefulness of AI in healthcare. Ultimately, it aims to contribute to better patient outcomes and greater efficiency in healthcare delivery by facilitating the integration of explainable AI into clinical practice.
KW - Artificial Intelligence (AI)
KW - Breast Cancer
KW - Clinical Decision Support Systems (CDSS)
KW - Explainability
KW - Trust
UR - http://www.scopus.com/inward/record.url?scp=85195535476&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85195535476&partnerID=8YFLogxK
DO - 10.1007/978-3-031-61966-3_37
M3 - Conference contribution
AN - SCOPUS:85195535476
SN - 9783031619656
T3 - Communications in Computer and Information Science
SP - 344
EP - 351
BT - HCI International 2024 Posters - 26th International Conference on Human-Computer Interaction, HCII 2024, Proceedings
A2 - Stephanidis, Constantine
A2 - Antona, Margherita
A2 - Ntoa, Stavroula
A2 - Salvendy, Gavriel
T2 - 26th International Conference on Human-Computer Interaction, HCII 2024
Y2 - 29 June 2024 through 4 July 2024
ER -