TY - GEN
T1 - Precision Beamforming for SAGIN Networks
T2 - 2024 IEEE Microwaves, Antennas, and Propagation Conference, MAPCON 2024
AU - Ananthakrishnan, Arushi
AU - Rajesh, Akshaya
AU - Arya, Sudhanshu
AU - Sandhana Mahalingam, M.
AU - Wang, Ying
AU - Pandeeswari, R.
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In this study, we explore a novel approach to beamforming in a Space-Air-Ground Integrated Network (SAGIN) using a single unmanned aerial vehicle (UAV) equipped with a mounted antenna array. The UAV dynamically adjusts its beamforming weights in real time to address the challenges posed by UAV hovering, which can degrade beam performance. We propose a model-free reinforcement learning (RL) framework, integrated with a neural network, to predict the optimal beamforming weights and enhance the received signal-to-noise ratio (SNR). Traditional RL methods struggle with continuous action spaces, as they require explicit representation and updating of Q-values for each possible action, which is infeasible in complex scenarios such as UAV beamforming. Our approach leverages deep reinforcement learning (DRL) to learn high-level decision-making strategies, demonstrating significant improvements in beamforming efficiency and energy utilization. The DRL framework mitigates beam distortion and improves SNR, advancing the state of the art in UAV-assisted mmWave communications within SAGINs.
AB - In this study, we explore a novel approach to beamforming in a Space-Air-Ground Integrated Network (SAGIN) using a single unmanned aerial vehicle (UAV) equipped with a mounted antenna array. The UAV dynamically adjusts its beamforming weights in real time to address the challenges posed by UAV hovering, which can degrade beam performance. We propose a model-free reinforcement learning (RL) framework, integrated with a neural network, to predict the optimal beamforming weights and enhance the received signal-to-noise ratio (SNR). Traditional RL methods struggle with continuous action spaces, as they require explicit representation and updating of Q-values for each possible action, which is infeasible in complex scenarios such as UAV beamforming. Our approach leverages deep reinforcement learning (DRL) to learn high-level decision-making strategies, demonstrating significant improvements in beamforming efficiency and energy utilization. The DRL framework mitigates beam distortion and improves SNR, advancing the state of the art in UAV-assisted mmWave communications within SAGINs.
KW - Beamforming
KW - reinforcement learning
KW - SAGIN
KW - UAV
UR - http://www.scopus.com/inward/record.url?scp=105001866167&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105001866167&partnerID=8YFLogxK
U2 - 10.1109/MAPCON61407.2024.10923365
DO - 10.1109/MAPCON61407.2024.10923365
M3 - Conference contribution
AN - SCOPUS:105001866167
T3 - 2024 IEEE Microwaves, Antennas, and Propagation Conference, MAPCON 2024
BT - 2024 IEEE Microwaves, Antennas, and Propagation Conference, MAPCON 2024
Y2 - 9 December 2024 through 13 December 2024
ER -
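
The abstract above describes a DRL actor that maps channel observations to continuous beamforming weights and is rewarded with the received SNR. The record does not specify the algorithm or array model, so the following is only a minimal, illustrative sketch of that setup: a small policy network producing per-antenna phase shifts for an assumed 8-element UAV-mounted array, with an SNR reward computed from an assumed plane-wave channel perturbed by hovering jitter. None of these names, dimensions, or modeling choices come from the cited paper.

```python
# Illustrative sketch only; not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn

N_ANT = 8                 # assumed number of array elements
STATE_DIM = 2 * N_ANT     # real/imag parts of the observed channel

class BeamformingActor(nn.Module):
    """Maps the observed channel state to per-antenna phase shifts in [-pi, pi]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ANT), nn.Tanh(),   # continuous action in [-1, 1]
        )

    def forward(self, state):
        return np.pi * self.net(state)         # scale to a phase shift per element

def received_snr(channel, phases, noise_power=1e-3):
    """Array-gain SNR for unit-modulus beamforming weights w = exp(j*phase)/sqrt(N)."""
    weights = np.exp(1j * phases) / np.sqrt(N_ANT)
    gain = np.abs(np.vdot(weights, channel)) ** 2
    return gain / noise_power

# One interaction step: observe a hover-perturbed channel, act, receive the SNR reward.
rng = np.random.default_rng(0)
true_channel = np.exp(1j * np.pi * np.sin(np.deg2rad(20)) * np.arange(N_ANT))
hover_jitter = 0.05 * (rng.standard_normal(N_ANT) + 1j * rng.standard_normal(N_ANT))
channel = true_channel + hover_jitter          # hovering perturbs the effective channel

actor = BeamformingActor()
state = torch.tensor(np.concatenate([channel.real, channel.imag]), dtype=torch.float32)
phases = actor(state).detach().numpy()
print("reward (received SNR):", received_snr(channel, phases))
```

In a full model-free DRL loop, this SNR reward would drive the policy updates (e.g., via an actor-critic method suited to continuous actions); the specific algorithm, network sizes, and channel model used in the paper are not given in this record.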