TY - GEN
T1 - Towards Explainability in mHealth Application for Mitigation of Forward Head Posture in Smartphone Users
AU - Oyeleke, Richard O.
AU - Sorinolu, Babafemi G.
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Machine learning (ML) algorithms have recorded tremendous successes in many areas, notably healthcare. With the increasing computing power of mobile devices, mobile health (mHealth) applications are embedded with ML models to learn users' behavior and influence positive lifestyle changes. Although ML algorithms have shown impressive predictive power over the years, it is necessary that their inferences and recommendations are also explainable. Explainability can promote users' trust, particularly when ML algorithms are deployed in high-stakes domains such as healthcare. In this study, we first present our proposed situation-aware mobile application, the Smarttens coach app, which we developed to assist smartphone users in mitigating forward head posture. It embeds an efficientNet CNN model that predicts forward head posture in smartphone users by analyzing head-posture images of the users. Our Smarttens coach app achieved a state-of-the-art accuracy score of 0.99. However, an accuracy score alone does not tell users how the Smarttens coach app draws its inference for the predicted binary posture class. This lack of a justification for the predicted posture class label could negatively impact users' trust in the efficacy of the app. Therefore, we further validated the posture-prediction efficacy of our Smarttens coach app by leveraging an explainable AI (XAI) framework called LIME to generate visual explanations for users' predicted head posture class label.
AB - Machine learning (ML) algorithms have recorded tremendous successes in many areas, notably healthcare. With the increasing computing power of mobile devices, mobile health (mHealth) applications are embedded with ML models to learn users' behavior and influence positive lifestyle changes. Although ML algorithms have shown impressive predictive power over the years, it is necessary that their inferences and recommendations are also explainable. Explainability can promote users' trust, particularly when ML algorithms are deployed in high-stakes domains such as healthcare. In this study, we first present our proposed situation-aware mobile application, the Smarttens coach app, which we developed to assist smartphone users in mitigating forward head posture. It embeds an efficientNet CNN model that predicts forward head posture in smartphone users by analyzing head-posture images of the users. Our Smarttens coach app achieved a state-of-the-art accuracy score of 0.99. However, an accuracy score alone does not tell users how the Smarttens coach app draws its inference for the predicted binary posture class. This lack of a justification for the predicted posture class label could negatively impact users' trust in the efficacy of the app. Therefore, we further validated the posture-prediction efficacy of our Smarttens coach app by leveraging an explainable AI (XAI) framework called LIME to generate visual explanations for users' predicted head posture class label.
KW - Explainable AI
KW - efficientNet CNN
KW - forward head posture
KW - mHealth
KW - physiatry
KW - smartphone
UR - http://www.scopus.com/inward/record.url?scp=85146282739&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85146282739&partnerID=8YFLogxK
U2 - 10.1109/HealthCom54947.2022.9982740
DO - 10.1109/HealthCom54947.2022.9982740
M3 - Conference contribution
AN - SCOPUS:85146282739
T3 - 2022 IEEE International Conference on E-Health Networking, Application and Services, HealthCom 2022
SP - 49
EP - 55
BT - 2022 IEEE International Conference on E-Health Networking, Application and Services, HealthCom 2022
T2 - 2022 IEEE International Conference on E-Health Networking, Application and Services, HealthCom 2022
Y2 - 17 October 2022 through 19 October 2022
ER -