TY - JOUR
T1 - Explainability and AI Confidence in Clinical Decision Support Systems
T2 - Effects on Trust, Diagnostic Performance, and Cognitive Load in Breast Cancer Care
AU - Rezaeian, Olya
AU - Bayrak, Alparslan Emrah
AU - Asan, Onur
N1 - Publisher Copyright:
© 2025 Taylor & Francis Group, LLC.
PY - 2025
Y1 - 2025
N2 - Artificial Intelligence (AI) has demonstrated potential in healthcare, particularly in enhancing diagnostic accuracy and decision-making through Clinical Decision Support Systems (CDSSs). However, the successful implementation of these systems relies on user trust and reliance, which can be influenced by explainable AI. This study explores the impact of varying explainability levels on clinicians’ trust, cognitive load, and diagnostic performance in breast cancer detection. Utilizing an interrupted time series design, we conducted a web-based experiment involving 28 healthcare professionals. The results revealed that high confidence scores substantially increased trust but also led to overreliance, reducing diagnostic accuracy. In contrast, low confidence scores decreased trust and agreement while increasing diagnosis duration, reflecting more cautious behavior. Some explainability features influenced cognitive load by increasing stress levels. Additionally, demographic factors such as age, gender, and professional role shaped participants’ perceptions and interactions with the system. This study provides valuable insights into how explainability impacts clinicians’ behavior and decision-making. The findings highlight the importance of designing AI-driven CDSSs that balance transparency, usability, and cognitive demands to foster trust and improve integration into clinical workflows.
AB - Artificial Intelligence (AI) has demonstrated potential in healthcare, particularly in enhancing diagnostic accuracy and decision-making through Clinical Decision Support Systems (CDSSs). However, the successful implementation of these systems relies on user trust and reliance, which can be influenced by explainable AI. This study explores the impact of varying explainability levels on clinicians’ trust, cognitive load, and diagnostic performance in breast cancer detection. Utilizing an interrupted time series design, we conducted a web-based experiment involving 28 healthcare professionals. The results revealed that high confidence scores substantially increased trust but also led to overreliance, reducing diagnostic accuracy. In contrast, low confidence scores decreased trust and agreement while increasing diagnosis duration, reflecting more cautious behavior. Some explainability features influenced cognitive load by increasing stress levels. Additionally, demographic factors such as age, gender, and professional role shaped participants’ perceptions and interactions with the system. This study provides valuable insights into how explainability impacts clinicians’ behavior and decision-making. The findings highlight the importance of designing AI-driven CDSSs that balance transparency, usability, and cognitive demands to foster trust and improve integration into clinical workflows.
KW - AI confidence score
KW - AI-assisted decision making
KW - Clinical decision support systems
KW - cognitive load
KW - explainability
UR - https://www.scopus.com/pages/publications/105012626502
UR - https://www.scopus.com/pages/publications/105012626502#tab=citedBy
U2 - 10.1080/10447318.2025.2539458
DO - 10.1080/10447318.2025.2539458
M3 - Article
AN - SCOPUS:105012626502
SN - 1044-7318
JO - International Journal of Human-Computer Interaction
JF - International Journal of Human-Computer Interaction
ER -