TY - JOUR
T1 - Artificial Intelligence Security in 5G Networks
T2 - Adversarial Examples for Estimating a Travel Time Task
AU - Qiu, Jing
AU - Du, Lei
AU - Chen, Yuanyuan
AU - Tian, Zhihong
AU - Du, Xiaojiang
AU - Guizani, Mohsen
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/9
Y1 - 2020/9
N2 - With the rapid development of the Internet, the next-generation network (5G) has emerged. 5G can support a variety of new applications, such as the Internet of Things (IoT), virtual reality (VR), and the Internet of Vehicles. Most of these new applications depend on deep learning algorithms, which have made great advances in many areas of artificial intelligence (AI). However, researchers have found that AI algorithms based on deep learning pose numerous security problems. For example, deep learning is susceptible to a well-designed input sample formed by adding small perturbations to the original sample. This well-designed input with small perturbations, which are imperceptible to humans, is called an adversarial example. An adversarial example is similar to the true example, but it can render the deep learning model invalid. In this article, we generate adversarial examples for spatiotemporal data. Based on the travel time estimation (TTE) task, we use two methods, white-box and black-box attacks, to invalidate deep learning models. Experimental results show that the adversarial examples successfully attack the deep learning model and thus that AI security is a major challenge for 5G.
AB - With the rapid development of the Internet, the next-generation network (5G) has emerged. 5G can support a variety of new applications, such as the Internet of Things (IoT), virtual reality (VR), and the Internet of Vehicles. Most of these new applications depend on deep learning algorithms, which have made great advances in many areas of artificial intelligence (AI). However, researchers have found that AI algorithms based on deep learning pose numerous security problems. For example, deep learning is susceptible to a well-designed input sample formed by adding small perturbations to the original sample. This well-designed input with small perturbations, which are imperceptible to humans, is called an adversarial example. An adversarial example is similar to the true example, but it can render the deep learning model invalid. In this article, we generate adversarial examples for spatiotemporal data. Based on the travel time estimation (TTE) task, we use two methods, white-box and black-box attacks, to invalidate deep learning models. Experimental results show that the adversarial examples successfully attack the deep learning model and thus that AI security is a major challenge for 5G.
UR - http://www.scopus.com/inward/record.url?scp=85090204345&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090204345&partnerID=8YFLogxK
U2 - 10.1109/MVT.2020.3002487
DO - 10.1109/MVT.2020.3002487
M3 - Article
AN - SCOPUS:85090204345
SN - 1556-6072
VL - 15
SP - 95
EP - 100
JO - IEEE Vehicular Technology Magazine
JF - IEEE Vehicular Technology Magazine
IS - 3
M1 - 9137684
ER -