ViT-MAE Based Foundation Model for Automatic Modulation Classification

Jikui Zhao, Qi Cheng, Huaxia Wang, Yu Dong Yao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Foundation models, represented by ChatGPT, have sparked a remarkable revolution across various domains. With a foundation model pre-trained in a specific field, numerous downstream tasks achieve state-of-the-art performance. This paper extends this paradigm to automatic modulation classification (AMC), employing a masked autoencoder vision transformer (ViT-MAE) as a foundation model to advance AMC. The experimental results show that our signal constellation diagram-based foundation model outperforms traditional deep learning methodologies, underscoring the vast potential of foundation models in AMC and wireless communication systems.
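The abstract describes a two-stage pipeline: self-supervised MAE pre-training on constellation-diagram images, followed by fine-tuning the encoder for modulation classification. Below is a minimal sketch of that pipeline, not the authors' code, using the Hugging Face `transformers` ViT-MAE classes (`ViTMAEConfig`, `ViTMAEForPreTraining`, `ViTMAEModel`); the random input batch, hyperparameters, and classification-head design are all assumptions for illustration.

```python
import torch
import torch.nn as nn
from transformers import ViTMAEConfig, ViTMAEForPreTraining, ViTMAEModel

# --- Stage 1: masked-autoencoder pre-training on constellation diagrams ---
config = ViTMAEConfig(image_size=224, patch_size=16, mask_ratio=0.75)
mae = ViTMAEForPreTraining(config)

# Hypothetical batch standing in for constellation-diagram images rendered
# from I/Q samples (shape: [batch, channels, height, width]).
constellation_batch = torch.randn(8, 3, 224, 224)

# ViTMAEForPreTraining computes the pixel-reconstruction loss over the
# masked patches without any labels (self-supervised objective).
loss = mae(pixel_values=constellation_batch).loss
loss.backward()

# --- Stage 2: fine-tune the pre-trained encoder for AMC ---
# Disable masking for fine-tuning and transfer the pre-trained weights.
ft_config = ViTMAEConfig(image_size=224, patch_size=16, mask_ratio=0.0)
encoder = ViTMAEModel(ft_config)
encoder.load_state_dict(mae.vit.state_dict())

class AMCClassifier(nn.Module):
    """ViT-MAE encoder + linear head over the [CLS] token (assumed design)."""
    def __init__(self, encoder: ViTMAEModel, num_classes: int):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(encoder.config.hidden_size, num_classes)

    def forward(self, pixel_values):
        outputs = self.encoder(pixel_values=pixel_values)
        cls_token = outputs.last_hidden_state[:, 0]  # [CLS] summary token
        return self.head(cls_token)

# num_classes is the number of modulation schemes in the dataset (assumption).
classifier = AMCClassifier(encoder, num_classes=10)
logits = classifier(constellation_batch)
```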

Original language: English
Title of host publication: 2024 33rd Wireless and Optical Communications Conference, WOCC 2024
Pages: 50-54
Number of pages: 5
ISBN (Electronic): 9798331539658
DOIs
State: Published - 2024
Event: 33rd Wireless and Optical Communications Conference, WOCC 2024 - Hsinchu, Taiwan, Province of China
Duration: 25 Oct 2024 - 26 Oct 2024

Publication series

Name: 2024 33rd Wireless and Optical Communications Conference, WOCC 2024

Conference

Conference: 33rd Wireless and Optical Communications Conference, WOCC 2024
Country/Territory: Taiwan, Province of China
City: Hsinchu
Period: 25/10/24 - 26/10/24

Keywords

  • Automatic Modulation Recognition
  • Foundation Model
  • signal constellation
  • Vision Transformer
