Artificial Intelligence Security in 5G Networks: Adversarial Examples for Estimating a Travel Time Task

Jing Qiu, Lei Du, Yuanyuan Chen, Zhihong Tian, Xiaojiang Du, Mohsen Guizani

Research output: Contribution to journal › Article › peer-review

26 Scopus citations

Abstract

With the rapid development of the Internet, the next-generation network (5G) has emerged. 5G can support a variety of new applications, such as the Internet of Things (IoT), virtual reality (VR), and the Internet of Vehicles. Most of these new applications depend on deep learning algorithms, which have made great advances in many areas of artificial intelligence (AI). However, researchers have found that AI algorithms based on deep learning pose numerous security problems. For example, deep learning is susceptible to a carefully crafted input formed by adding small perturbations to an original sample. Such an input, whose perturbations are imperceptible to humans, is called an adversarial example. An adversarial example closely resembles the genuine example, yet it can render the deep learning model's output invalid. In this article, we generate adversarial examples for spatiotemporal data. Based on the travel time estimation (TTE) task, we use two methods, white-box and black-box attacks, to invalidate deep learning models. Experimental results show that the adversarial examples successfully attack the deep learning model, and thus that AI security is a major challenge for 5G.
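For illustration only: the abstract does not specify the authors' attack, but a common white-box approach of the kind it describes is the fast gradient sign method (FGSM), which perturbs the input in the direction that increases the model's loss. The sketch below assumes a PyTorch regression model for travel time estimation; the names fgsm_attack, tte_model, trajectory_batch, and true_times are hypothetical, and the epsilon value is arbitrary.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y_true, epsilon=0.01):
    """Generate an adversarial example for a regression model
    (e.g., travel time estimation) via an FGSM-style perturbation.
    This is a generic sketch, not the paper's implementation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y_true)
    loss.backward()
    with torch.no_grad():
        # Step in the sign of the input gradient: a small, hard-to-notice
        # perturbation that maximally increases the estimation error.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()

# Hypothetical usage: tte_model maps a trajectory tensor to a travel time.
# x_adv = fgsm_attack(tte_model, trajectory_batch, true_times, epsilon=0.05)
```

A black-box attack, by contrast, would have to estimate this gradient direction (or transfer adversarial examples from a substitute model) without access to the model's parameters.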

Original language: English
Article number: 9137684
Pages (from-to): 95-100
Number of pages: 6
Journal: IEEE Vehicular Technology Magazine
Volume: 15
Issue number: 3
DOIs
State: Published - Sep 2020
