TY - GEN
T1 - Attacking Graph-Based Classification without Changing Existing Connections
AU - Xu, Xuening
AU - Du, Xiaojiang
AU - Zeng, Qiang
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/12/7
Y1 - 2020/12/7
AB - In recent years, with the rapid development of machine learning in various domains, more and more studies have shown that machine learning models are vulnerable to adversarial attacks. However, most existing research on adversarial machine learning studies non-graph data, such as images and text. Though some previous works on graph data have shown that adversaries can make graph-based classification methods unreliable by perturbing the features or adjacency matrices of existing nodes, these kinds of attacks sometimes have limitations in real-world applications. For example, in a real social network, the attacker cannot force two benign users to change (e.g., remove) the connection between them, which makes such attacks infeasible in practice. In this paper, we propose a novel attack on collective classification methods that adds fake nodes to existing graphs. Our attack is more realistic and practical than the attacks mentioned above. For instance, in a real social network, an attacker only needs to create some fake accounts and connect them to existing users, without modifying the connections among existing users. We formulate the new attack as an optimization problem and use a gradient-based method to generate the edges of the newly added fake nodes. Our extensive experiments show that the attack can not only make the new fake nodes evade detection but also cause the detector to misclassify most of the target nodes. The proposed attack is highly effective and can achieve up to 100% False Negative Rates (FNRs) for both the new node set and the target node set.
KW - AI security
KW - Adversarial machine learning
KW - Adversarial graph-based classification
UR - http://www.scopus.com/inward/record.url?scp=85098093793&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098093793&partnerID=8YFLogxK
U2 - 10.1145/3427228.3427245
DO - 10.1145/3427228.3427245
M3 - Conference contribution
AN - SCOPUS:85098093793
T3 - ACM International Conference Proceeding Series
SP - 951
EP - 962
BT - Proceedings - 36th Annual Computer Security Applications Conference, ACSAC 2020
T2 - 36th Annual Computer Security Applications Conference, ACSAC 2020
Y2 - 7 December 2020 through 11 December 2020
ER -