Abstract

Graph convolutional networks are widely used in skeleton-based action recognition because they fit non-Euclidean skeleton data well. Conventional multi-scale temporal convolution applies several fixed-size convolution kernels or dilation rates at every layer of the network, but we argue that different layers and datasets require different receptive fields. We therefore optimize traditional multi-scale temporal convolution with multi-scale adaptive convolution kernels and dilation rates, driven by a simple and effective self-attention mechanism, so that each network layer adaptively selects its kernel sizes and dilation rates rather than keeping them fixed. Moreover, the effective receptive field of a plain residual connection is small, and deep residual networks contain considerable redundancy, which causes context to be lost when aggregating spatio-temporal information. This article introduces a feature fusion mechanism that replaces the residual connection between the initial features and the outputs of the temporal module, effectively addressing both context aggregation and initial feature fusion. We propose a multi-modality adaptive feature fusion framework (MMAFF) that enlarges the receptive field in the spatial and temporal dimensions simultaneously. Concretely, the features extracted by the spatial module are fed into the adaptive temporal fusion module, which extracts multi-scale skeleton features in both the spatial and temporal parts. In addition, building on the current multi-stream approach, we use a limb stream to uniformly process correlated data from multiple modalities. Extensive experiments show that our model achieves results competitive with state-of-the-art methods on the NTU-RGB+D 60 and NTU-RGB+D 120 datasets.
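As a rough illustration of the adaptive kernel and dilation selection described in the abstract, the following minimal PyTorch sketch weights several parallel temporal-convolution branches with a soft attention computed from the input. All names here (e.g., AdaptiveMultiScaleTemporalConv, gate) and the exact gating design are our own assumptions for illustration, not the authors' released code.

# Minimal sketch (PyTorch assumed) of attention-weighted multi-scale
# temporal convolution over skeleton features of shape (N, C, T, V):
# N = batch, C = channels, T = frames, V = joints. Names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveMultiScaleTemporalConv(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5), dilations=(1, 2)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in kernel_sizes:
            for d in dilations:
                pad = (k - 1) * d // 2  # keeps the temporal length T unchanged
                self.branches.append(nn.Sequential(
                    nn.Conv2d(channels, channels, kernel_size=(k, 1),
                              padding=(pad, 0), dilation=(d, 1)),
                    nn.BatchNorm2d(channels),
                ))
        # Tiny gating network: globally pooled features -> one logit per
        # branch, so each layer learns which receptive field to emphasize.
        self.gate = nn.Linear(channels, len(self.branches))

    def forward(self, x):  # x: (N, C, T, V)
        attn = F.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)    # (N, B)
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (N, B, C, T, V)
        out = (attn[:, :, None, None, None] * outs).sum(dim=1)    # (N, C, T, V)
        return F.relu(out + x)  # plain residual here; the paper instead
                                # replaces it with its feature fusion mechanism

# Example: 2 clips, 64 channels, 32 frames, 25 joints
x = torch.randn(2, 64, 32, 25)
print(AdaptiveMultiScaleTemporalConv(64)(x).shape)  # -> (2, 64, 32, 25)

In this sketch the attention is a single softmax over branches computed from globally pooled features; the paper's actual mechanism may differ in detail, and its fusion module also replaces the residual connection shown on the last line of forward.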

Details

Title
Multi-Modality Adaptive Feature Fusion Graph Convolutional Network for Skeleton-Based Action Recognition
Author
Zhang, Haiping 1; Zhang, Xinhao 2; Yu, Dongjin 3; Guan, Liming 4; Wang, Dongjing 3; Zhou, Fuxing 2; Zhang, Wanjun 4

1 School of Computer Science, Hangzhou Dianzi University, Hangzhou 310005, China; [email protected] (H.Z.); [email protected] (D.Y.); [email protected] (D.W.); School of Information Engineering, Hangzhou Dianzi University, Hangzhou 310005, China; [email protected]
2 School of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310005, China; [email protected] (X.Z.); [email protected] (F.Z.)
3 School of Computer Science, Hangzhou Dianzi University, Hangzhou 310005, China; [email protected] (H.Z.); [email protected] (D.Y.); [email protected] (D.W.)
4 School of Information Engineering, Hangzhou Dianzi University, Hangzhou 310005, China; [email protected]
First page
5414
Publication year
2023
Publication date
2023
Publisher
MDPI AG
e-ISSN
1424-8220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2829881338
Copyright
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.