TY - JOUR
T1 - Joint resource allocation and power control for D2D communication with deep reinforcement learning in MCC
AU - Wang, Dan
AU - Qin, Hao
AU - Song, Bin
AU - Xu, Ke
AU - Du, Xiaojiang
AU - Guizani, Mohsen
N1 - Publisher Copyright:
© 2020 The Authors
PY - 2021/4
Y1 - 2021/4
N2 - Mission-critical communication (MCC) is one of the main goals of 5G, which can leverage multiple device-to-device (D2D) connections to enhance reliability. In MCC, D2D users can reuse the non-orthogonal wireless resources of cellular users without involving a base station (BS). However, the D2D users generate co-channel interference to cellular users and hence affect their quality of service (QoS). To comprehensively improve the user experience, we propose a novel approach that combines resource allocation and power control with deep reinforcement learning (DRL). In this paper, multiple procedures are carefully designed to develop our proposal. First, a scenario with multiple D2D pairs and cellular users in a cell is modeled; next, the issues of resource allocation and power control are analyzed and our optimization goal is formulated; finally, a DRL method based on a spectrum allocation strategy is created, which ensures that D2D users obtain sufficient resources to improve their QoS. Using the resource data that D2D users capture by interacting with their surroundings, the DRL method helps D2D users autonomously select an available channel and transmit power to maximize system capacity and spectrum efficiency while minimizing interference to cellular users. Experimental results show that our learning method significantly improves resource allocation and power control.
AB - Mission-critical communication (MCC) is one of the main goals of 5G, which can leverage multiple device-to-device (D2D) connections to enhance reliability. In MCC, D2D users can reuse the non-orthogonal wireless resources of cellular users without involving a base station (BS). However, the D2D users generate co-channel interference to cellular users and hence affect their quality of service (QoS). To comprehensively improve the user experience, we propose a novel approach that combines resource allocation and power control with deep reinforcement learning (DRL). In this paper, multiple procedures are carefully designed to develop our proposal. First, a scenario with multiple D2D pairs and cellular users in a cell is modeled; next, the issues of resource allocation and power control are analyzed and our optimization goal is formulated; finally, a DRL method based on a spectrum allocation strategy is created, which ensures that D2D users obtain sufficient resources to improve their QoS. Using the resource data that D2D users capture by interacting with their surroundings, the DRL method helps D2D users autonomously select an available channel and transmit power to maximize system capacity and spectrum efficiency while minimizing interference to cellular users. Experimental results show that our learning method significantly improves resource allocation and power control.
KW - D2D
KW - DRL
KW - Power control
KW - Resource allocation
UR - http://www.scopus.com/inward/record.url?scp=85098740731&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098740731&partnerID=8YFLogxK
U2 - 10.1016/j.phycom.2020.101262
DO - 10.1016/j.phycom.2020.101262
M3 - Article
AN - SCOPUS:85098740731
SN - 1874-4907
VL - 45
JO - Physical Communication
JF - Physical Communication
M1 - 101262
ER -