The Security of Internet of Vehicles Network: Adversarial Examples for Trajectory Mode Detection

Jing Qiu, Yuanyuan Chen, Zhihong Tian, Nadra Guizani, Xiaojiang Du

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, the number of vehicles in cities has been increasing, and this growth has caused a series of traffic problems, such as road congestion, traffic accidents, and environmental pollution. To address these problems, the Internet of Vehicles (IoV) is emerging. Deep learning has achieved significant success in many fields and is also applied in IoV. However, studies have shown that deep learning is vulnerable to crafted samples formed by adding small perturbations to original samples, and this vulnerability may pose a serious security threat to IoV. To assess the security of deep learning in this setting, we conduct experiments to investigate whether adversarial examples exist in the IoV field, generating them from GPS data. The models we attack are trajectory mode detection models. Since both GPS trajectory data and image data are continuous, we adopt algorithms from computer vision to generate adversarial examples: the white-box Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM), and the black-box One Pixel Attack. We adopt Dynamic Time Warping (DTW) to measure the similarity between adversarial examples and the original trajectory data. Experimental results show that a small perturbation can fool deep neural networks with high confidence.
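The abstract names FGSM and BIM but does not give the paper's exact setup, so the following is a minimal sketch only: it assumes a PyTorch classifier `model` mapping a batch of GPS trajectories of shape `(batch, seq_len, features)` to transport-mode logits. The model, epsilon values, step counts, and tensor shapes are illustrative assumptions, not the authors' configuration.

```python
# Sketch of FGSM and BIM against a trajectory-mode classifier.
# Assumption: `model` is a PyTorch module producing class logits.
import torch
import torch.nn.functional as F

def fgsm_attack(model, trajectory, label, epsilon=0.01):
    """One gradient-sign step of size epsilon (FGSM)."""
    trajectory = trajectory.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(trajectory), label)
    loss.backward()
    return (trajectory + epsilon * trajectory.grad.sign()).detach()

def bim_attack(model, trajectory, label, epsilon=0.01, alpha=0.002, steps=10):
    """BIM: iterate small FGSM steps, clipping back into the epsilon-ball."""
    adv = trajectory.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            # Keep the total perturbation within [-epsilon, epsilon].
            adv = trajectory + (adv - trajectory).clamp(-epsilon, epsilon)
        adv = adv.detach()
    return adv
```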
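The One Pixel Attack is black-box: in its original image formulation, differential evolution searches for a single-pixel change that flips the prediction. A hedged trajectory analogue, perturbing a single GPS point, might look like the sketch below; `predict_proba` is a hypothetical black-box scoring function, and the perturbation bounds and optimizer settings are assumptions, not the paper's.

```python
# Sketch of a one-point black-box attack in the spirit of the One Pixel
# Attack: differential evolution searches for (index, delta_lat, delta_lon)
# that minimizes the model's confidence in the true class.
# Assumption: `predict_proba(traj)` returns class probabilities.
import numpy as np
from scipy.optimize import differential_evolution

def one_point_attack(predict_proba, trajectory, true_class, max_shift=1e-4):
    n_points = len(trajectory)

    def apply(x):
        idx = int(np.clip(x[0], 0, n_points - 1))
        adv = trajectory.copy()
        adv[idx] += x[1:3]  # shift a single GPS point
        return adv

    def objective(x):
        # Lower true-class probability means a stronger attack.
        return predict_proba(apply(x))[true_class]

    bounds = [(0, n_points - 1),
              (-max_shift, max_shift),
              (-max_shift, max_shift)]
    result = differential_evolution(objective, bounds, maxiter=30, popsize=15)
    return apply(result.x)
```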
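DTW measures similarity between two sequences by warping the time axis so that similar shapes align even when their lengths differ. A minimal dynamic-programming sketch follows, assuming each trajectory is a NumPy array of (lat, lon) points and using Euclidean pointwise distance; the paper's exact distance function is not specified here.

```python
# Minimal DTW between two trajectories; lower cost means more similar.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # pointwise distance
            # Extend the cheapest of the three admissible warping moves.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```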

Original language: English
Pages (from-to): 279-283
Number of pages: 5
Journal: IEEE Network
Volume: 35
Issue number: 5
DOIs
State: Published - 1 Sep 2021
