TY - GEN
T1 - AXAI-CDSS
T2 - 2025 International Conference on Activity and Behavior Computing, ABC 2025
AU - Zhang, Tongze
AU - Chung, Tammy
AU - Dey, Anind
AU - Bae, Sang Won
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - As cannabis use has increased in recent years, researchers have come to rely on sophisticated machine learning models to predict cannabis use behavior and its impact on health. However, many artificial intelligence (AI) models lack transparency and interpretability due to their opaque nature, limiting trust in and adoption of such models in real-world medical applications, such as clinical decision support systems (CDSS). To address this issue, this paper enhances the explainability of the algorithms underlying CDSS by integrating multiple Explainable Artificial Intelligence (XAI) methods and applying causal inference techniques to clarify the models’ predictive decisions under various scenarios. By using Large Language Models (LLMs) to interpret the XAI outputs more deeply, we offer users more personalized and accessible insights that overcome the challenges posed by AI’s “black box” nature. Our system dynamically adjusts feedback based on user queries and emotional states, combining text-based sentiment analysis with real-time facial emotion recognition to ensure responses are empathetic, context-adaptive, and user-centered. This approach bridges the gap between the technical demands of interpretability and the need for intuitive understanding, enabling non-technical users such as clinicians and clinical researchers to interact effectively with AI models. Ultimately, this approach improves usability, enhances perceived trustworthiness, and increases the impact of CDSS in healthcare applications.
AB - As cannabis use has increased in recent years, researchers have come to rely on sophisticated machine learning models to predict cannabis use behavior and its impact on health. However, many artificial intelligence (AI) models lack transparency and interpretability due to their opaque nature, limiting trust in and adoption of such models in real-world medical applications, such as clinical decision support systems (CDSS). To address this issue, this paper enhances the explainability of the algorithms underlying CDSS by integrating multiple Explainable Artificial Intelligence (XAI) methods and applying causal inference techniques to clarify the models’ predictive decisions under various scenarios. By using Large Language Models (LLMs) to interpret the XAI outputs more deeply, we offer users more personalized and accessible insights that overcome the challenges posed by AI’s “black box” nature. Our system dynamically adjusts feedback based on user queries and emotional states, combining text-based sentiment analysis with real-time facial emotion recognition to ensure responses are empathetic, context-adaptive, and user-centered. This approach bridges the gap between the technical demands of interpretability and the need for intuitive understanding, enabling non-technical users such as clinicians and clinical researchers to interact effectively with AI models. Ultimately, this approach improves usability, enhances perceived trustworthiness, and increases the impact of CDSS in healthcare applications.
KW - Affective Computing
KW - Algorithmic Decisions
KW - Cannabis Intoxication
KW - Cannabis Use Disorder
KW - Cannabis-Intoxicated Behaviors
KW - Causal Inference
KW - Clinical Decision Support Systems (CDSS)
KW - Explainable Artificial Intelligence (XAI)
KW - Facial Emotion Recognition
KW - Healthcare AI
KW - Large Language Models (LLMs)
KW - Passive Sensing
KW - Personalized Intervention
KW - Transparency
KW - Trustworthy AI
UR - https://www.scopus.com/pages/publications/105015566331
U2 - 10.1109/ABC64332.2025.11118599
DO - 10.1109/ABC64332.2025.11118599
M3 - Conference contribution
AN - SCOPUS:105015566331
T3 - 2025 International Conference on Activity and Behavior Computing, ABC 2025
BT - 2025 International Conference on Activity and Behavior Computing, ABC 2025
Y2 - 21 April 2025 through 25 April 2025
ER -