TY - JOUR
T1 - Learning When to Use Adaptive Adversarial Image Perturbations Against Autonomous Vehicles
AU - Yoon, Hyung Jin
AU - Jafarnejadsani, Hamidreza
AU - Voulgaris, Petros
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2023/7/1
Y1 - 2023/7/1
N2 - Deep neural network (DNN) models are widely used in autonomous vehicles for object detection using camera images. However, these models are vulnerable to adversarial image perturbations. Existing methods for generating these perturbations use the image frame as the decision variable, resulting in a computationally expensive optimization process that starts over for each new image. Few approaches have been developed for attacking online image streams while considering the physical dynamics of autonomous vehicles, their mission, and the environment. To address these challenges, we propose a multi-level stochastic optimization framework that monitors the attacker's capability to generate adversarial perturbations. Our framework introduces a binary decision (attack or not attack) based on the attacker's capability level to enhance its effectiveness. We evaluate the proposed framework using simulations of vision-guided autonomous vehicles and real-world tests with a small indoor drone in an office environment. Our results demonstrate that our method is capable of generating real-time image attacks while monitoring the attacker's proficiency given state estimates.
AB - Deep neural network (DNN) models are widely used in autonomous vehicles for object detection using camera images. However, these models are vulnerable to adversarial image perturbations. Existing methods for generating these perturbations use the image frame as the decision variable, resulting in a computationally expensive optimization process that starts over for each new image. Few approaches have been developed for attacking online image streams while considering the physical dynamics of autonomous vehicles, their mission, and the environment. To address these challenges, we propose a multi-level stochastic optimization framework that monitors the attacker's capability to generate adversarial perturbations. Our framework introduces a binary decision (attack or not attack) based on the attacker's capability level to enhance its effectiveness. We evaluate the proposed framework using simulations of vision-guided autonomous vehicles and real-world tests with a small indoor drone in an office environment. Our results demonstrate that our method is capable of generating real-time image attacks while monitoring the attacker's proficiency given state estimates.
KW - Adversarial machine learning
KW - autonomous vehicle
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85161009954&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85161009954&partnerID=8YFLogxK
U2 - 10.1109/LRA.2023.3280813
DO - 10.1109/LRA.2023.3280813
M3 - Article
AN - SCOPUS:85161009954
VL - 8
SP - 4179
EP - 4186
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 7
ER -