Understanding Disparate Effects of Membership Inference Attacks and their Countermeasures

Da Zhong, Haipei Sun, Jun Xu, Neil Gong, Wendy Hui Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Machine learning algorithms, when applied to sensitive data, can pose severe threats to privacy. A growing body of prior work has demonstrated that a membership inference attack (MIA) can disclose to an attacker whether specific private data samples are present in the training data. However, most existing studies on MIA focus on aggregate privacy leakage for an entire population, leaving privacy leakage across different demographic subgroups (e.g., females and males) in the population largely unexplored. This raises two important issues: (1) privacy unfairness (i.e., whether some subgroups are more vulnerable to MIAs than others); and (2) defense unfairness (i.e., whether defense mechanisms provide more protection to some subgroups than others). In this paper, we investigate both privacy unfairness and defense unfairness. We formalize a new notion of privacy-leakage disparity (PLD), which quantifies the disparate privacy leakage of machine learning models to MIA across different subgroups. In terms of privacy unfairness, our empirical analysis of PLD on real-world datasets shows that privacy unfairness exists: the minority subgroups (i.e., the less represented subgroups) tend to suffer higher privacy leakage. We analyze how subgroup size and subgroup data distribution impact PLD through the lens of model memorization. In terms of defense unfairness, our empirical evaluation shows that three state-of-the-art defenses against MIA, namely differential privacy, the L2-regularizer, and Dropout, are also unfair. However, defense unfairness mitigates privacy unfairness, as the minority subgroups receive stronger protection than the others. We analyze how the three defense mechanisms affect subgroup data distributions disparately and thus lead to defense unfairness.
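The abstract references a formal privacy-leakage disparity (PLD) metric that this page does not define. As a rough, hypothetical illustration only, the Python sketch below proxies PLD by the gap in per-subgroup MIA attack accuracy; every name and the toy data here are invented for this example and do not reproduce the paper's actual formulation.

```python
# Hypothetical sketch: PLD proxied as the spread of per-subgroup MIA accuracy.
# All names and data are illustrative, not the paper's definition.
import numpy as np

def mia_accuracy(is_member: np.ndarray, mia_pred: np.ndarray) -> float:
    # Fraction of samples whose membership status the attacker guesses right.
    return float(np.mean(is_member == mia_pred))

def privacy_leakage_disparity(is_member, mia_pred, subgroup):
    # Gap between the most- and least-attacked subgroups.
    accs = [mia_accuracy(is_member[subgroup == g], mia_pred[subgroup == g])
            for g in np.unique(subgroup)]
    return max(accs) - min(accs)

rng = np.random.default_rng(0)
n = 10_000
is_member = rng.integers(0, 2, size=n)          # ground-truth membership bits
subgroup = (rng.random(n) < 0.2).astype(int)    # ~20% minority (group 1)
# Simulated attacker: 60% accurate on the majority, 80% on the minority,
# mimicking the finding that minority subgroups leak more.
correct = np.where(subgroup == 1, rng.random(n) < 0.8, rng.random(n) < 0.6)
mia_pred = np.where(correct, is_member, 1 - is_member)

print(privacy_leakage_disparity(is_member, mia_pred, subgroup))  # ~0.20
```

On this simulated attacker, the minority subgroup is attacked more successfully, so the reported disparity comes out near 0.2, consistent with the abstract's claim that less represented subgroups tend to leak more.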

Original language: English
Title of host publication: ASIA CCS 2022 - Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security
Pages: 959-974
Number of pages: 16
ISBN (Electronic): 9781450391405
DOIs
State: Published - 30 May 2022
Event: 17th ACM ASIA Conference on Computer and Communications Security 2022, ASIA CCS 2022 - Virtual, Online, Japan
Duration: 30 May 2022 – 3 Jun 2022

Publication series

Name: ASIA CCS 2022 - Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security

Conference

Conference: 17th ACM ASIA Conference on Computer and Communications Security 2022, ASIA CCS 2022
Country/Territory: Japan
City: Virtual, Online
Period: 30/05/22 – 3/06/22

Keywords

  • disparity
  • fairness
  • membership inference attack
  • privacy leakage
