Attacking Graph-Based Classification without Changing Existing Connections

Xuening Xu, Xiaojiang Du, Qiang Zeng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

In recent years, with the rapid development of machine learning across many domains, a growing body of work has shown that machine learning models are vulnerable to adversarial attacks. However, most existing research on adversarial machine learning studies non-graph data, such as images and text. Although some prior work on graph data has shown that adversaries can make graph-based classification methods unreliable by perturbing the features or adjacency matrices of existing nodes, such attacks are often impractical in real-world applications. For example, to launch such an attack in a real social network, the attacker would have to force two benign users to change (e.g., remove) the connection between them, which is beyond the attacker's control, so the attack cannot actually be launched. In this paper, we propose a novel attack on collective classification methods that adds fake nodes to an existing graph. Our attack is more realistic and practical than the attacks mentioned above: in a real social network, the attacker only needs to create some fake accounts and connect them to existing users, without modifying the connections among existing users. We formulate the new attack as an optimization problem and use a gradient-based method to generate the edges of the newly added fake nodes. Our extensive experiments show that the attack not only lets the new fake nodes evade detection, but also makes the detector misclassify most of the target nodes. The proposed attack is highly effective, achieving up to 100% False Negative Rates (FNRs) for both the new node set and the target node set.
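The abstract describes relaxing the fake nodes' edges to an optimization problem and solving it with gradients. The sketch below is a rough illustration of that general idea, not the paper's actual formulation: the function name select_fake_edges, the random linear classifier stand-in, and the two-round neighbor-averaging surrogate are all assumptions made for the example.

import torch

def select_fake_edges(adj, feats, labels, targets,
                      n_fake=5, budget=10, steps=50, lr=0.1):
    """Pick edges for n_fake injected nodes via a continuous relaxation.

    adj:     (n, n) dense adjacency of the existing graph (never modified)
    feats:   (n, d) node features
    labels:  (n,)   labels the defender's classifier currently assigns
    targets: tensor of node indices the attacker wants misclassified
    """
    n, d = feats.shape
    n_classes = int(labels.max()) + 1
    # Stand-in linear classifier; a real attack would use a trained surrogate.
    W = torch.randn(d, n_classes)

    # Continuous relaxation of fake-to-existing connections (learnable).
    edge_logits = torch.zeros(n_fake, n, requires_grad=True)
    # Fake nodes imitate the average features of the target nodes.
    fake_feats = feats[targets].mean(0, keepdim=True).repeat(n_fake, 1)

    opt = torch.optim.Adam([edge_logits], lr=lr)
    for _ in range(steps):
        w = torch.sigmoid(edge_logits)               # soft edge weights in [0, 1]
        top = torch.cat([adj, w.t()], dim=1)         # existing rows gain new columns
        bottom = torch.cat([w, torch.zeros(n_fake, n_fake)], dim=1)
        adj_aug = torch.cat([top, bottom], dim=0)    # (n + n_fake, n + n_fake)
        x = torch.cat([feats, fake_feats], dim=0)

        # Two rounds of degree-normalized neighbor averaging as a crude
        # differentiable surrogate for a collective classifier.
        deg = adj_aug.sum(1, keepdim=True).clamp(min=1.0)
        h = (adj_aug @ ((adj_aug @ x) / deg)) / deg
        scores = h @ W

        # Push the targets' scores for their current labels down.
        loss = scores[targets, labels[targets]].sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Discretize: keep the `budget` strongest candidate edges per fake node.
    chosen = torch.topk(torch.sigmoid(edge_logits), budget, dim=1).indices
    return chosen  # (n_fake, budget) indices of existing nodes to connect to

Note that only the rows and columns belonging to the injected nodes are learned; the block of adj covering existing users is passed through untouched, mirroring the constraint that existing connections cannot be changed.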

Original language: English
Title of host publication: Proceedings - 36th Annual Computer Security Applications Conference, ACSAC 2020
Pages: 951-962
Number of pages: 12
ISBN (Electronic): 9781450388580
DOIs
State: Published - 7 Dec 2020
Event: 36th Annual Computer Security Applications Conference, ACSAC 2020 - Virtual, Online, United States
Duration: 7 Dec 2020 - 11 Dec 2020

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 36th Annual Computer Security Applications Conference, ACSAC 2020
Country/Territory: United States
City: Virtual, Online
Period: 7/12/20 - 11/12/20

Keywords

  • AI security
  • Adversarial machine learning
  • Adversarial graph-based classification
