TY - JOUR
T1 - Low-Latency Communications for Community Resilience Microgrids
T2 - A Reinforcement Learning Approach
AU - Elsayed, Medhat
AU - Erol-Kantarci, Melike
AU - Kantarci, Burak
AU - Wu, Lei
AU - Li, Jie
N1 - Publisher Copyright:
© 2010-2012 IEEE.
PY - 2020/3
Y1 - 2020/3
N2 - Machine learning and artificial intelligence (AI) techniques can play a key role in resource allocation and scheduler design in wireless networks that target applications with stringent QoS requirements, such as near real-time control of community resilience microgrids (CRMs). Specifically, for integrated control and communication of multiple CRMs, a large number of microgrid devices need to coexist with traditional mobile user equipments (UEs), which are usually served by self-organized, densified wireless networks with many small cell base stations (SBSs). In such cases, rapid propagation of messages becomes challenging, which calls for efficient resource allocation and user scheduling designed for delay minimization. In this paper, we introduce a resource allocation algorithm, namely, the delay minimization Q-learning (DMQ) scheme, which learns efficient resource allocation for both the macro cell base station (eNB) and the SBSs using reinforcement learning at each time-to-transmit interval (TTI). Comparison with the traditional proportional fairness (PF) algorithm and an optimization-based algorithm, namely, distributed iterative resource allocation (DIRA), reveals that our scheme achieves 66% and 33% less latency, respectively. Moreover, DMQ outperforms DIRA and PF in terms of throughput while achieving the highest fairness.
KW - Community resilience microgrid
KW - low-latency communications
KW - reinforcement learning
KW - resource allocation
KW - small cells
KW - smart grid
UR - http://www.scopus.com/inward/record.url?scp=85081156412&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081156412&partnerID=8YFLogxK
U2 - 10.1109/TSG.2019.2931753
DO - 10.1109/TSG.2019.2931753
M3 - Article
AN - SCOPUS:85081156412
SN - 1949-3053
VL - 11
SP - 1091
EP - 1099
JO - IEEE Transactions on Smart Grid
JF - IEEE Transactions on Smart Grid
IS - 2
M1 - 8781859
ER -