TY - JOUR
T1 - TSARDC-Net
T2 - A Temporal–Spatial Anisotropic Network for Human Activity Recognition Using 3-D Radar Data Cube
AU - Bao, Nan
AU - Li, Zhikun
AU - Hou, Jinfei
AU - Guo, Yanyan
AU - Qian, Wei
AU - Yao, Yudong
AU - Greenwald, Stephen E.
AU - Xu, Lisheng
N1 - Publisher Copyright:
© 1963-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - The application of the 3-D radar data cube (RDC), which integrates time, distance, and Doppler frequency information for accurate human activity recognition (HAR), has attracted much recent research interest in the field of smart healthcare. However, existing methods often fail to fully exploit the temporal–spatial characteristics and the anisotropic nature of the RDC, limiting their HAR performance. To address these limitations, we propose a new temporal–spatial anisotropic RDC network (TSARDC-Net) for HAR. The network uses a convolutional neural network–long short-term memory (CNN–LSTM) architecture to extract spatial and temporal features from radar signals simultaneously, jointly modeling the temporal–spatial characteristics of human motion. We adopted a unique anisotropic multiscale convolution (AMSC) module to handle the anisotropic spatial distribution of the RDC and enhance feature extraction capability. We also introduced squeeze-and-excitation normalization (SENM) to recalibrate the learned features, thereby improving the model’s ability to recognize action features. Furthermore, considering practical deployment requirements, we explored a lightweight strategy based on separable convolutions. We used a public dataset of 1754 samples covering six different human activities. In addition, we recruited a group of volunteers and, using an off-the-shelf Wi-Fi radar device, collected a dataset of 2148 samples covering five different activities. TSARDC-Net was trained separately on each dataset. Experimental results show that, on the public dataset, the proposed method achieves a classification accuracy of 98.58%, outperforming existing methods. Additionally, it achieves an accuracy of 95.57% on our dataset, demonstrating good generalization capability.
KW - Anisotropic multiscale convolution (AMSC)
KW - deep learning (DL)
KW - human activity recognition (HAR)
KW - radar data cube (RDC)
KW - temporal–spatial feature extraction
UR - https://www.scopus.com/pages/publications/105018836413
U2 - 10.1109/TIM.2025.3612626
DO - 10.1109/TIM.2025.3612626
M3 - Article
AN - SCOPUS:105018836413
SN - 0018-9456
VL - 74
JO - IEEE Transactions on Instrumentation and Measurement
JF - IEEE Transactions on Instrumentation and Measurement
M1 - 2548016
ER -