TY - GEN
T1 - NMformer
T2 - 33rd Wireless and Optical Communications Conference, WOCC 2024
AU - Faysal, Atik
AU - Rostami, Mohammad
AU - Roshan, Reihaneh Gh
AU - Wang, Huaxia
AU - Muralidhar, Nikhil
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Modulation classification is a challenging task because signals are intertwined with various ambient noises. Methods are required that can classify them without extra steps such as denoising, which introduce computational complexity. In this study, we propose a vision transformer (ViT) based model named NMformer to classify channel modulation images with different noise levels in wireless communication. Since ViTs are most effective on RGB images, we generated constellation diagrams from the modulated signals. These diagrams represent the signal information in a 2-D form. We trained NMformer on 106,800 modulation images to build the base classifier and used only 3,000 images to fine-tune it for specific tasks. Our proposed model has two different kinds of prediction setups: in-distribution and out-of-distribution. Our model achieves 4.67% higher accuracy than the base classifier when fine-tuned and tested on high signal-to-noise ratio (SNR) in-distribution classes. Moreover, the classifier fine-tuned on the low-SNR task also achieves higher accuracy than the base classifier. The fine-tuned classifier is much more effective than the base classifier, achieving higher accuracy even on unseen data from out-of-distribution classes. Extensive experiments show the effectiveness of NMformer over a wide range of SNRs.
AB - Modulation classification is a challenging task because signals are intertwined with various ambient noises. Methods are required that can classify them without extra steps such as denoising, which introduce computational complexity. In this study, we propose a vision transformer (ViT) based model named NMformer to classify channel modulation images with different noise levels in wireless communication. Since ViTs are most effective on RGB images, we generated constellation diagrams from the modulated signals. These diagrams represent the signal information in a 2-D form. We trained NMformer on 106,800 modulation images to build the base classifier and used only 3,000 images to fine-tune it for specific tasks. Our proposed model has two different kinds of prediction setups: in-distribution and out-of-distribution. Our model achieves 4.67% higher accuracy than the base classifier when fine-tuned and tested on high signal-to-noise ratio (SNR) in-distribution classes. Moreover, the classifier fine-tuned on the low-SNR task also achieves higher accuracy than the base classifier. The fine-tuned classifier is much more effective than the base classifier, achieving higher accuracy even on unseen data from out-of-distribution classes. Extensive experiments show the effectiveness of NMformer over a wide range of SNRs.
KW - classification
KW - constellation diagrams
KW - modulation classification
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85215709604&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85215709604&partnerID=8YFLogxK
U2 - 10.1109/WOCC61718.2024.10786062
DO - 10.1109/WOCC61718.2024.10786062
M3 - Conference contribution
AN - SCOPUS:85215709604
T3 - 2024 33rd Wireless and Optical Communications Conference, WOCC 2024
SP - 103
EP - 108
BT - 2024 33rd Wireless and Optical Communications Conference, WOCC 2024
Y2 - 25 October 2024 through 26 October 2024
ER -