TY - GEN
T1 - Providing Item-side Individual Fairness for Deep Recommender Systems
AU - Wang, Xiuling
AU - Wang, Wendy Hui
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/6/21
Y1 - 2022/6/21
N2 - The recent advent of deep learning techniques has spurred the development of new recommender systems. Although these systems have been shown to be efficient and effective, the issue of item popularity bias in these recommender systems has raised serious concerns. While most existing work focuses on group fairness on the item side, individual fairness on the item side remains largely unexplored. To address this issue, we first define a new notion of individual fairness from the perspective of items, namely (α, β)-fairness, to deal with item popularity bias in recommendations. In particular, (α, β)-fairness requires that similar items receive similar coverage in the recommendations, where α and β control item similarity and coverage similarity, respectively, and both similarity metrics are defined in a task-specific manner for deep recommender systems. Next, we design two bias mitigation methods for deep recommender systems, namely embedding-based re-ranking (ER) and greedy substitution (GS). ER is an in-processing mitigation method that incorporates (α, β)-fairness as a constraint into the objective function of the recommendation algorithm, whereas GS is a post-processing approach that takes the biased recommendations as input and substitutes high-coverage items with low-coverage ones to satisfy (α, β)-fairness. We evaluate both mitigation algorithms on two real-world datasets and a set of state-of-the-art deep recommender systems. Our results demonstrate that both ER and GS outperform the existing minimum-coverage (MC) mitigation solutions [Koutsopoulos and Halkidi 2018; Patro et al. 2020] in terms of both fairness and recommendation accuracy. Furthermore, ER delivers the best trade-off between fairness and recommendation accuracy among a set of alternative mitigation methods, including GS, the hybrid of ER and GS, and the existing MC solutions.
AB - The recent advent of deep learning techniques has spurred the development of new recommender systems. Although these systems have been shown to be efficient and effective, the issue of item popularity bias in these recommender systems has raised serious concerns. While most existing work focuses on group fairness on the item side, individual fairness on the item side remains largely unexplored. To address this issue, we first define a new notion of individual fairness from the perspective of items, namely (α, β)-fairness, to deal with item popularity bias in recommendations. In particular, (α, β)-fairness requires that similar items receive similar coverage in the recommendations, where α and β control item similarity and coverage similarity, respectively, and both similarity metrics are defined in a task-specific manner for deep recommender systems. Next, we design two bias mitigation methods for deep recommender systems, namely embedding-based re-ranking (ER) and greedy substitution (GS). ER is an in-processing mitigation method that incorporates (α, β)-fairness as a constraint into the objective function of the recommendation algorithm, whereas GS is a post-processing approach that takes the biased recommendations as input and substitutes high-coverage items with low-coverage ones to satisfy (α, β)-fairness. We evaluate both mitigation algorithms on two real-world datasets and a set of state-of-the-art deep recommender systems. Our results demonstrate that both ER and GS outperform the existing minimum-coverage (MC) mitigation solutions [Koutsopoulos and Halkidi 2018; Patro et al. 2020] in terms of both fairness and recommendation accuracy. Furthermore, ER delivers the best trade-off between fairness and recommendation accuracy among a set of alternative mitigation methods, including GS, the hybrid of ER and GS, and the existing MC solutions.
KW - Individual fairness
KW - algorithmic fairness in machine learning
KW - deep recommender systems
UR - http://www.scopus.com/inward/record.url?scp=85133013238&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85133013238&partnerID=8YFLogxK
U2 - 10.1145/3531146.3533079
DO - 10.1145/3531146.3533079
M3 - Conference contribution
AN - SCOPUS:85133013238
T3 - ACM International Conference Proceeding Series
SP - 117
EP - 127
BT - Proceedings of the 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
T2 - 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Y2 - 21 June 2022 through 24 June 2022
ER -