TY - JOUR
T1 - Confederated Learning
T2 - Federated Learning With Decentralized Edge Servers
AU - Wang, Bin
AU - Fang, Jun
AU - Li, Hongbin
AU - Yuan, Xiaojun
AU - Ling, Qing
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2023
Y1 - 2023
N2 - Federated learning (FL) is an emerging machine learning paradigm that accomplishes model training without aggregating data at a central server. Most studies on FL consider a centralized framework, in which a single server is endowed with central authority to coordinate a number of devices in performing model training iteratively. Due to stringent communication and bandwidth constraints, such a centralized framework has limited scalability as the number of devices grows. To address this issue, in this paper we propose a ConFederated Learning (CFL) framework. The proposed CFL consists of multiple servers, each connected with an individual set of devices as in the conventional FL framework, and leverages decentralized collaboration among servers to make full use of the data dispersed throughout the network. We develop a stochastic alternating direction method of multipliers (ADMM) algorithm for CFL. The proposed algorithm employs a random scheduling policy that randomly selects a subset of devices to access their respective servers at each iteration, thus alleviating the need to upload a huge amount of information from devices to servers. Theoretical analysis is presented to justify the proposed method. Numerical results show that the proposed method converges to a decent solution significantly faster than gradient-based FL algorithms, offering a substantial advantage in communication efficiency.
AB - Federated learning (FL) is an emerging machine learning paradigm that accomplishes model training without aggregating data at a central server. Most studies on FL consider a centralized framework, in which a single server is endowed with central authority to coordinate a number of devices in performing model training iteratively. Due to stringent communication and bandwidth constraints, such a centralized framework has limited scalability as the number of devices grows. To address this issue, in this paper we propose a ConFederated Learning (CFL) framework. The proposed CFL consists of multiple servers, each connected with an individual set of devices as in the conventional FL framework, and leverages decentralized collaboration among servers to make full use of the data dispersed throughout the network. We develop a stochastic alternating direction method of multipliers (ADMM) algorithm for CFL. The proposed algorithm employs a random scheduling policy that randomly selects a subset of devices to access their respective servers at each iteration, thus alleviating the need to upload a huge amount of information from devices to servers. Theoretical analysis is presented to justify the proposed method. Numerical results show that the proposed method converges to a decent solution significantly faster than gradient-based FL algorithms, offering a substantial advantage in communication efficiency.
KW - ADMM
KW - Confederated learning
KW - random scheduling
UR - http://www.scopus.com/inward/record.url?scp=85148434284&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85148434284&partnerID=8YFLogxK
U2 - 10.1109/TSP.2023.3241768
DO - 10.1109/TSP.2023.3241768
M3 - Article
AN - SCOPUS:85148434284
SN - 1053-587X
VL - 71
SP - 248
EP - 263
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
ER -