TY - CONF
T1 - Towards Fair and Robust Classification
AU - Sun, Haipei
AU - Wu, Kun
AU - Wang, Ting
AU - Wang, Wendy Hui
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Robustness and fairness are two equally important properties of machine learning systems. Despite active research on both topics, existing efforts address either fairness or robustness, but not both. To bridge this gap, we design Fair and Robust Classification (FRoC) models that equip classification models with both fairness and robustness. Meeting fairness and robustness constraints simultaneously is non-trivial because of the tension between them, and the trade-off among fairness, robustness, and model accuracy introduces an additional challenge. To address these challenges, we design two FRoC methods: FRoC-PRE, which modifies the input data before model training, and FRoC-IN, which modifies the model with an adversarial objective function that addresses both fairness and robustness during training. FRoC-IN suits settings where the users (e.g., ML service providers) have access to the model but not to the original data, while FRoC-PRE works in settings where the users (e.g., data owners) have access to both the data and a surrogate model whose architecture may be similar to that of the target model. Our extensive experiments on real-world datasets demonstrate that both FRoC-IN and FRoC-PRE achieve fairness and robustness with insignificant loss of target-model accuracy.
KW - algorithmic fairness
KW - adversarial robustness
KW - classification
KW - trustworthy machine learning
UR - http://www.scopus.com/inward/record.url?scp=85134017587&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85134017587&partnerID=8YFLogxK
DO - 10.1109/EuroSP53844.2022.00030
M3 - Conference contribution
AN - SCOPUS:85134017587
T3 - Proceedings - 7th IEEE European Symposium on Security and Privacy, EuroS&P 2022
SP - 356
EP - 376
BT - Proceedings - 7th IEEE European Symposium on Security and Privacy, EuroS&P 2022
T2 - 7th IEEE European Symposium on Security and Privacy, EuroS&P 2022
Y2 - 6 June 2022 through 10 June 2022
ER -