TY - GEN
T1 - Integrity Verifiable Privacy-preserving Federated Learning for Healthcare-IoT
AU - Li, Jiarui
AU - Yu, Shucheng
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In the Healthcare Internet of Things, federated learning has emerged as a promising distributed machine learning paradigm, enabling multiple clients to collaboratively train models on large volumes of medical data while preserving the privacy of sensitive information. Despite its advantages, federated learning faces significant challenges in maintaining the integrity of the global model due to the potential for data and model poisoning attacks. These attacks are exacerbated by the lack of direct oversight of the local training processes, allowing malicious participants to manipulate model updates. This paper introduces Integrity Verifiable Federated Learning (IV-FED), a novel framework that leverages trusted execution environments (TEEs) to ensure the integrity of the training process without compromising privacy. IV-FED employs an accumulator-based integrity verification protocol that allows the central server to verify the correctness of local training without reproducing the entire training process. Additionally, the framework incorporates an adversarial perturbation-based detection mechanism to prevent the injection of poisoned data by malicious participants.
AB - In the Healthcare Internet of Things, federated learning has emerged as a promising distributed machine learning paradigm, enabling multiple clients to collaboratively train models on large volumes of medical data while preserving the privacy of sensitive information. Despite its advantages, federated learning faces significant challenges in maintaining the integrity of the global model due to the potential for data and model poisoning attacks. These attacks are exacerbated by the lack of direct oversight of the local training processes, allowing malicious participants to manipulate model updates. This paper introduces Integrity Verifiable Federated Learning (IV-FED), a novel framework that leverages trusted execution environments (TEEs) to ensure the integrity of the training process without compromising privacy. IV-FED employs an accumulator-based integrity verification protocol that allows the central server to verify the correctness of local training without reproducing the entire training process. Additionally, the framework incorporates an adversarial perturbation-based detection mechanism to prevent the injection of poisoned data by malicious participants.
KW - federated learning
KW - poisoning attack
KW - trusted execution environments
UR - https://www.scopus.com/pages/publications/85219609029
UR - https://www.scopus.com/inward/citedby.url?scp=85219609029&partnerID=8YFLogxK
U2 - 10.1109/HEALTHCOM60970.2024.10880846
DO - 10.1109/HEALTHCOM60970.2024.10880846
M3 - Conference contribution
AN - SCOPUS:85219609029
T3 - 2024 IEEE International Conference on E-Health Networking, Application and Services, HealthCom 2024
BT - 2024 IEEE International Conference on E-Health Networking, Application and Services, HealthCom 2024
T2 - 2024 IEEE International Conference on E-Health Networking, Application and Services, HealthCom 2024
Y2 - 18 November 2024 through 20 November 2024
ER -