TY - JOUR
T1 - Reinforcement Learning Power Control Algorithm Based on Graph Signal Processing for Ultra-Dense Mobile Networks
AU - Li, Yujie
AU - Tang, Zhoujin
AU - Lin, Zhijian
AU - Gong, Yanfei
AU - Du, Xiaojiang
AU - Guizani, Mohsen
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2021/7/1
Y1 - 2021/7/1
N2 - Ultra-dense mobile networks (UDMNs) represent a promising technology for improving network performance and providing ubiquitous network accessibility in beyond-5G (B5G) mobile networks. Heterogeneous densely deployed networks can dynamically offer high spectrum efficiency and enhance frequency reuse, which ultimately improves quality of service (QoS) and the user experience. However, massive inter- or intra-cell interference generated by the overlap between small cells greatly limits network performance, especially when there is mobility between user equipment (UEs) and access points (APs). Moreover, as network density increases, the complexity of conventional allocation methods also increases. In this paper, we investigate power control of the downlink (DL) connection in UDMNs with different types of APs. We propose a reinforcement learning (RL) power allocation algorithm based on graph signal processing (GSP) for ultra-dense mobile networks. First, we construct a realistic system model for ultra-dense mobile networking, which includes the system channel model and the instantaneous rate. Then, we employ a GSP tool to analyze network interference; the interference analysis results for the entire network are used to determine the optimal RL power allocation. Finally, simulation results indicate that the proposed RL power control algorithm outperforms baseline algorithms when applied to ultra-dense mobile networks.
AB - Ultra-dense mobile networks (UDMNs) represent a promising technology for improving network performance and providing ubiquitous network accessibility in beyond-5G (B5G) mobile networks. Heterogeneous densely deployed networks can dynamically offer high spectrum efficiency and enhance frequency reuse, which ultimately improves quality of service (QoS) and the user experience. However, massive inter- or intra-cell interference generated by the overlap between small cells greatly limits network performance, especially when there is mobility between user equipment (UEs) and access points (APs). Moreover, as network density increases, the complexity of conventional allocation methods also increases. In this paper, we investigate power control of the downlink (DL) connection in UDMNs with different types of APs. We propose a reinforcement learning (RL) power allocation algorithm based on graph signal processing (GSP) for ultra-dense mobile networks. First, we construct a realistic system model for ultra-dense mobile networking, which includes the system channel model and the instantaneous rate. Then, we employ a GSP tool to analyze network interference; the interference analysis results for the entire network are used to determine the optimal RL power allocation. Finally, simulation results indicate that the proposed RL power control algorithm outperforms baseline algorithms when applied to ultra-dense mobile networks.
KW - B5G
KW - graph signal processing
KW - power control
KW - reinforcement learning
KW - ultra-dense mobile networks
UR - http://www.scopus.com/inward/record.url?scp=85099725171&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099725171&partnerID=8YFLogxK
U2 - 10.1109/TNSE.2021.3051660
DO - 10.1109/TNSE.2021.3051660
M3 - Article
AN - SCOPUS:85099725171
VL - 8
SP - 2694
EP - 2705
JO - IEEE Transactions on Network Science and Engineering
JF - IEEE Transactions on Network Science and Engineering
IS - 3
ER -