TY - GEN
T1 - Private, Yet Practical, Multiparty Deep Learning
AU - Zhang, Xinyang
AU - Ji, Shouling
AU - Wang, Hui
AU - Wang, Ting
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/13
Y1 - 2017/7/13
N2 - In this paper, we consider the problem of multiparty deep learning (MDL), wherein autonomous data owners jointly train accurate deep neural network models without sharing their private data. We design, implement, and evaluate ∝MDL, a new MDL paradigm built upon three primitives: asynchronous optimization, lightweight homomorphic encryption, and threshold secret sharing. Compared with prior work, ∝MDL departs in significant ways: a) besides providing an explicit privacy guarantee, it retains desirable model utility, which is paramount for accuracy-critical domains; b) it provides an intuitive handle for the operator to gracefully balance model utility and training efficiency; c) it supports fine-grained control over communication and computational costs by offering two variants, operating under loose and tight coordination respectively, and is thus optimizable for given system settings (e.g., limited versus sufficient network bandwidth). Through extensive empirical evaluation using benchmark datasets and deep learning architectures, we demonstrate the efficacy of ∝MDL.
AB - In this paper, we consider the problem of multiparty deep learning (MDL), wherein autonomous data owners jointly train accurate deep neural network models without sharing their private data. We design, implement, and evaluate ∝MDL, a new MDL paradigm built upon three primitives: asynchronous optimization, lightweight homomorphic encryption, and threshold secret sharing. Compared with prior work, ∝MDL departs in significant ways: a) besides providing an explicit privacy guarantee, it retains desirable model utility, which is paramount for accuracy-critical domains; b) it provides an intuitive handle for the operator to gracefully balance model utility and training efficiency; c) it supports fine-grained control over communication and computational costs by offering two variants, operating under loose and tight coordination respectively, and is thus optimizable for given system settings (e.g., limited versus sufficient network bandwidth). Through extensive empirical evaluation using benchmark datasets and deep learning architectures, we demonstrate the efficacy of ∝MDL.
KW - Deep neural network
KW - Federated learning
KW - Privacy preservation
UR - http://www.scopus.com/inward/record.url?scp=85027261196&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85027261196&partnerID=8YFLogxK
U2 - 10.1109/ICDCS.2017.215
DO - 10.1109/ICDCS.2017.215
M3 - Conference contribution
AN - SCOPUS:85027261196
T3 - Proceedings - International Conference on Distributed Computing Systems
SP - 1442
EP - 1452
BT - Proceedings - IEEE 37th International Conference on Distributed Computing Systems, ICDCS 2017
A2 - Lee, Kisung
A2 - Liu, Ling
T2 - 37th IEEE International Conference on Distributed Computing Systems, ICDCS 2017
Y2 - 5 June 2017 through 8 June 2017
ER -