TY - JOUR
T1 - Resource Management for Edge Intelligence (EI)-Assisted IoV Using Quantum-Inspired Reinforcement Learning
AU - Wang, Dan
AU - Song, Bin
AU - Lin, Peng
AU - Yu, F. Richard
AU - Du, Xiaojiang
AU - Guizani, Mohsen
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2022/7/15
Y1 - 2022/7/15
N2 - Recent developments in the Internet of Vehicles (IoV) enable interconnected vehicles to support ubiquitous services. Various emerging service applications promise to increase the Quality of Experience (QoE) of users. On-board computation tasks generated by these applications have heavily overloaded resource-constrained vehicles, forcing them to offload on-board tasks to edge intelligence (EI)-assisted servers. However, excessive task offloading can lead to severe competition for communication and computation resources among vehicles, thereby increasing processing latency, energy consumption, and system cost. To address these problems, we investigate the transmission-aware and computation-aware uplink resource management problem and formulate it as a time-varying Markov decision process. Considering the total delay, energy consumption, and cost, we propose quantum-inspired reinforcement learning (QRL) to develop an intelligence-oriented edge offloading strategy. Specifically, each vehicle can flexibly choose its network access mode and offloading strategy through two different radio interfaces, offloading tasks to multiaccess edge computing (MEC) servers over WiFi or to cloud servers over 5G. The objective of this joint optimization is to maintain a self-adaptive balance between these two offloading modes. Simulation results show that the proposed algorithm can significantly reduce the transmission latency and computation delay.
AB - Recent developments in the Internet of Vehicles (IoV) enable interconnected vehicles to support ubiquitous services. Various emerging service applications promise to increase the Quality of Experience (QoE) of users. On-board computation tasks generated by these applications have heavily overloaded resource-constrained vehicles, forcing them to offload on-board tasks to edge intelligence (EI)-assisted servers. However, excessive task offloading can lead to severe competition for communication and computation resources among vehicles, thereby increasing processing latency, energy consumption, and system cost. To address these problems, we investigate the transmission-aware and computation-aware uplink resource management problem and formulate it as a time-varying Markov decision process. Considering the total delay, energy consumption, and cost, we propose quantum-inspired reinforcement learning (QRL) to develop an intelligence-oriented edge offloading strategy. Specifically, each vehicle can flexibly choose its network access mode and offloading strategy through two different radio interfaces, offloading tasks to multiaccess edge computing (MEC) servers over WiFi or to cloud servers over 5G. The objective of this joint optimization is to maintain a self-adaptive balance between these two offloading modes. Simulation results show that the proposed algorithm can significantly reduce the transmission latency and computation delay.
KW - Cloud computing
KW - Internet of Vehicles (IoV)
KW - edge intelligence (EI)
KW - multiaccess edge computing (MEC)
KW - quantum-inspired reinforcement learning (QRL)
UR - http://www.scopus.com/inward/record.url?scp=85122077843&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85122077843&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2021.3137984
DO - 10.1109/JIOT.2021.3137984
M3 - Article
AN - SCOPUS:85122077843
VL - 9
SP - 12588
EP - 12600
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 14
ER -