Full Text


Abstract

In recent years, human action recognition has received increasing attention as a significant function of human–machine interaction. The human skeleton is one of the most effective representations of human actions because it is highly compact and informative. Many recent skeleton-based action recognition methods are based on graph convolutional networks (GCNs), as they preserve the topology of the human skeleton while extracting features. Although many of these methods give impressive results, they have limitations in robustness, interoperability, and scalability. Furthermore, most of these methods ignore the underlying information of view direction and rely on the model to learn to adjust for viewpoint from the training data. In this work, we propose VW-SC3D, a spatial–temporal model with view weighting for skeleton-based action recognition. In brief, our model uses a sparse 3D CNN to extract spatial features for each frame and a transformer encoder to capture temporal information across frames. Compared to GCN-based methods, our method extracts spatial–temporal features more effectively and adapts better to different types of 3D skeleton data. The sparse 3D CNN also makes the model more computationally efficient and more flexible. In addition, a learnable view weighting module enhances the robustness of the proposed model against viewpoint changes. Experiments on two different types of datasets show results competitive with state-of-the-art (SOTA) methods, and performance is even better in view-changing situations.
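To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the three stages the abstract names: a learnable view weighting module, per-frame 3D convolutional feature extraction on voxelized skeletons, and a transformer encoder over frames. Everything here (module names, the rotation set, grid size, and hyperparameters) is an illustrative assumption rather than the authors' implementation, and a dense nn.Conv3d stands in for the sparse 3D CNN, which in practice would use a sparse-convolution library such as spconv or MinkowskiEngine to exploit the mostly empty voxel grid.

```python
import math

import torch
import torch.nn as nn


class ViewWeighting(nn.Module):
    """Softly combine K rotated copies of the skeleton with learnable weights.

    An assumed design for the paper's "learnable view weighting" module,
    not the authors' code.
    """

    def __init__(self, num_views: int = 4):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_views))  # one score per view
        # Fixed candidate rotations about the vertical (y) axis.
        angles = torch.linspace(0.0, math.pi / 2, num_views).tolist()
        rots = torch.stack([torch.tensor(
            [[math.cos(a), 0.0, math.sin(a)],
             [0.0, 1.0, 0.0],
             [-math.sin(a), 0.0, math.cos(a)]]) for a in angles])
        self.register_buffer("rots", rots)  # (K, 3, 3)

    def forward(self, joints: torch.Tensor) -> torch.Tensor:
        # joints: (B, T, J, 3) -> weighted sum of rotated copies, same shape.
        w = torch.softmax(self.logits, dim=0)                        # (K,)
        rotated = torch.einsum("kij,btvj->kbtvi", self.rots, joints)
        return torch.einsum("k,kbtvi->btvi", w, rotated)


def voxelize(joints: torch.Tensor, grid: int = 16) -> torch.Tensor:
    """Scatter each frame's joints into an occupancy grid:
    (B, T, J, 3) -> (B*T, 1, G, G, G)."""
    B, T, J, _ = joints.shape
    mn = joints.amin(dim=(1, 2), keepdim=True)  # per-sequence bounds
    mx = joints.amax(dim=(1, 2), keepdim=True)
    idx = ((joints - mn) / (mx - mn + 1e-6) * (grid - 1)).long().clamp(0, grid - 1)
    vox = torch.zeros(B * T, 1, grid, grid, grid, device=joints.device)
    flat = idx.view(B * T, J, 3)
    for f in range(B * T):  # plain loop for clarity; a real version would scatter
        vox[f, 0, flat[f, :, 0], flat[f, :, 1], flat[f, :, 2]] = 1.0
    return vox


class VWSC3DSketch(nn.Module):
    def __init__(self, num_classes: int = 60, dim: int = 128, grid: int = 16):
        super().__init__()
        self.grid = grid
        self.view = ViewWeighting()
        self.cnn = nn.Sequential(  # dense stand-in for the paper's sparse 3D CNN
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())  # one feature vector per frame
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, joints: torch.Tensor) -> torch.Tensor:
        B, T = joints.shape[:2]
        vox = voxelize(self.view(joints), self.grid)         # (B*T, 1, G, G, G)
        feats = self.cnn(vox).view(B, T, -1)                 # (B, T, dim)
        return self.head(self.temporal(feats).mean(dim=1))  # (B, num_classes)


if __name__ == "__main__":
    model = VWSC3DSketch()
    clip = torch.randn(2, 20, 25, 3)  # 2 clips, 20 frames, 25 joints (e.g. Kinect v2)
    print(model(clip).shape)          # torch.Size([2, 60])
```

Feeding frame-level CNN features to a transformer encoder, rather than convolving over time, lets the model attend across the whole clip at once; the per-frame spatial extractor and the temporal encoder can then be swapped independently for different skeleton formats.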

Details

Title
VW-SC3D: A Sparse 3D CNN-Based Spatial–Temporal Network with View Weighting for Skeleton-Based Action Recognition
Author
Lin, Xiaotian 1; Xu, Leiyang 1; Zhuang, Songlin 2; Wang, Qiang 1

1 Harbin Institute of Technology, No. 92 Xidazhi Street, Nangang District, Harbin 150001, China
2 Yongjiang Laboratory, Ningbo 315202, China
First page
117
Publication year
2023
Publication date
2023
Publisher
MDPI AG
e-ISSN
2079-9292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2761122587
Copyright
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.