Deep Models under the GAN: Information leakage from collaborative deep learning

Briland Hitaj, Giuseppe Ateniese, Fernando Perez-Cruz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1050 Scopus citations

Abstract

Deep Learning has recently become hugely popular in machine learning for its ability to realize end-to-end learning systems, in which features and classifiers are learned jointly, providing significant improvements in classification accuracy on large, highly structured datasets. Its success is due to a combination of recent algorithmic breakthroughs, increasingly powerful computers, and access to significant amounts of data. Researchers have also considered the privacy implications of deep learning. Models are typically trained in a centralized manner, with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed in which parties train their deep learning structures locally and share only a subset of the parameters in an attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process, which allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level differential privacy applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
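To make the mechanism concrete, the following is a minimal sketch of the attack's training loop in PyTorch, not the authors' implementation. The toy dimensions, module and variable names, and the synthetic stand-in for the victim's private data are all illustrative assumptions. The key idea it exercises is the one the abstract describes: the adversary reuses each freshly shared model as the GAN discriminator, trains a local generator against it, and injects generated samples under an extra "fake" label so the victim's subsequent updates reveal finer detail about the targeted class.

```python
# Minimal sketch of the GAN attack on collaborative deep learning
# (in the spirit of Hitaj et al., CCS 2017). All names, shapes, and the
# synthetic "private" data below are illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

D_IN, N_CLASSES, Z_DIM = 64, 3, 16  # assumed toy dimensions

# Shared classifier: in the attack it doubles as the GAN discriminator,
# with one extra "fake" class (index N_CLASSES) appended by the adversary.
shared = nn.Sequential(nn.Linear(D_IN, 128), nn.ReLU(),
                       nn.Linear(128, N_CLASSES + 1))

# Adversary's local generator: maps noise to candidate samples of the
# victim's target class.
gen = nn.Sequential(nn.Linear(Z_DIM, 128), nn.ReLU(),
                    nn.Linear(128, D_IN))

xent = nn.CrossEntropyLoss()
opt_shared = torch.optim.SGD(shared.parameters(), lr=0.05)
opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-3)

TARGET = 0         # class held only by the victim
FAKE = N_CLASSES   # extra label the adversary injects

# Synthetic stand-in for the victim's private training data.
victim_x = torch.randn(256, D_IN) + 2.0
victim_y = torch.full((256,), TARGET)

for round_ in range(200):
    # Victim: honest local update on private data, parameters then "shared".
    idx = torch.randint(0, 256, (32,))
    opt_shared.zero_grad()
    xent(shared(victim_x[idx]), victim_y[idx]).backward()
    opt_shared.step()

    # Adversary step 1: treat the freshly shared model as the discriminator
    # and train the generator so its samples are classified as TARGET.
    z = torch.randn(32, Z_DIM)
    opt_gen.zero_grad()
    xent(shared(gen(z)), torch.full((32,), TARGET)).backward()
    opt_gen.step()

    # Adversary step 2: inject generated samples labeled FAKE into the
    # shared training, pushing the victim to leak more about TARGET.
    opt_shared.zero_grad()
    xent(shared(gen(torch.randn(32, Z_DIM)).detach()),
         torch.full((32,), FAKE)).backward()
    opt_shared.step()

# After enough rounds, gen(noise) yields prototypical samples of the
# victim's private class, even though the adversary never saw that data.
```

Note that record-level DP noise on the shared parameters does not, by design, hide the class-level distributional information this loop extracts, which is why the abstract calls it ineffective against the attack.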

Original language: English
Title of host publication: CCS 2017 - Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security
Pages: 603-618
Number of pages: 16
ISBN (Electronic): 9781450349468
DOIs
State: Published - 30 Oct 2017
Event: 24th ACM SIGSAC Conference on Computer and Communications Security, CCS 2017 - Dallas, United States
Duration: 30 Oct 2017 to 3 Nov 2017

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 24th ACM SIGSAC Conference on Computer and Communications Security, CCS 2017
Country/Territory: United States
City: Dallas
Period: 30/10/17 to 3/11/17

Keywords

  • Collaborative Learning
  • Deep Learning
  • Privacy
  • Security
