TY - GEN
T1 - Understanding Disparate Effects of Membership Inference Attacks and their Countermeasures
AU - Zhong, Da
AU - Sun, Haipei
AU - Xu, Jun
AU - Gong, Neil
AU - Wang, Wendy Hui
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/5/30
Y1 - 2022/5/30
N2 - Machine learning algorithms, when applied to sensitive data, can pose severe threats to privacy. A growing body of prior work has demonstrated that membership inference attacks (MIAs) can disclose to an attacker whether specific private data samples are present in the training data. However, most existing studies on MIA focus on aggregated privacy leakage for an entire population, leaving privacy leakage across different demographic subgroups (e.g., females and males) in the population largely unexplored. This raises two important issues: (1) privacy unfairness (i.e., whether some subgroups are more vulnerable to MIAs than others); and (2) defense unfairness (i.e., whether defense mechanisms provide more protection to some subgroups than to others). In this paper, we investigate both privacy unfairness and defense unfairness. We formalize a new notion of privacy-leakage disparity (PLD), which quantifies the disparate privacy leakage of machine learning models under MIA across different subgroups. In terms of privacy unfairness, our empirical analysis of PLD on real-world datasets shows that privacy unfairness exists: the minority subgroups (i.e., the less-represented subgroups) tend to have higher privacy leakage. We analyze how subgroup size and subgroup data distribution impact PLD through the lens of model memorization. In terms of defense unfairness, our empirical evaluation shows that unfairness exists in three state-of-the-art defenses against MIA, namely differential privacy, the L2-regularizer, and Dropout. However, defense unfairness mitigates privacy unfairness, as the minority subgroups receive stronger protection than the others. We analyze how the three defense mechanisms disparately affect subgroup data distributions and thus lead to defense unfairness.
AB - Machine learning algorithms, when applied to sensitive data, can pose severe threats to privacy. A growing body of prior work has demonstrated that membership inference attacks (MIAs) can disclose to an attacker whether specific private data samples are present in the training data. However, most existing studies on MIA focus on aggregated privacy leakage for an entire population, leaving privacy leakage across different demographic subgroups (e.g., females and males) in the population largely unexplored. This raises two important issues: (1) privacy unfairness (i.e., whether some subgroups are more vulnerable to MIAs than others); and (2) defense unfairness (i.e., whether defense mechanisms provide more protection to some subgroups than to others). In this paper, we investigate both privacy unfairness and defense unfairness. We formalize a new notion of privacy-leakage disparity (PLD), which quantifies the disparate privacy leakage of machine learning models under MIA across different subgroups. In terms of privacy unfairness, our empirical analysis of PLD on real-world datasets shows that privacy unfairness exists: the minority subgroups (i.e., the less-represented subgroups) tend to have higher privacy leakage. We analyze how subgroup size and subgroup data distribution impact PLD through the lens of model memorization. In terms of defense unfairness, our empirical evaluation shows that unfairness exists in three state-of-the-art defenses against MIA, namely differential privacy, the L2-regularizer, and Dropout. However, defense unfairness mitigates privacy unfairness, as the minority subgroups receive stronger protection than the others. We analyze how the three defense mechanisms disparately affect subgroup data distributions and thus lead to defense unfairness.
KW - disparity
KW - fairness
KW - membership inference attack
KW - privacy leakage
UR - http://www.scopus.com/inward/record.url?scp=85133175205&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85133175205&partnerID=8YFLogxK
U2 - 10.1145/3488932.3501279
DO - 10.1145/3488932.3501279
M3 - Conference contribution
AN - SCOPUS:85133175205
T3 - ASIA CCS 2022 - Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security
SP - 959
EP - 974
BT - ASIA CCS 2022 - Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security
T2 - 17th ACM ASIA Conference on Computer and Communications Security 2022, ASIA CCS 2022
Y2 - 30 May 2022 through 3 June 2022
ER -