Abstract

Action recognition is of great significance in the field of machine vision. In recent years, great progress has been made in skeleton-based action recognition models, but little research has addressed the extraction of weak skeleton features, leading to insufficient generalization of the trained models. This work proposes to use the Transformer structure and its attention mechanism: skeleton features are first extracted via a GCN and then fed to the Transformer as input so that its attention can capture behavior. Furthermore, the original ST-GCN model is optimized by introducing an adaptive graph convolutional layer to increase its flexibility, and by adding attention mechanisms in separate spatial, temporal, and channel modules to further enhance the adaptive graph convolutional layer. Experiments on the NTU-RGBD dataset show that the model achieves some improvement in action recognition accuracy.
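The adaptive graph convolutional layer described above can be illustrated with a minimal sketch. Following the common adaptive-GCN formulation, the effective adjacency is the sum of the fixed skeleton graph A, a freely learned matrix B, and a data-dependent matrix computed from joint-embedding similarity. All function and parameter names here (`adaptive_graph_conv`, `W_theta`, `W_phi`, `W_out`) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_graph_conv(x, A, B, W_theta, W_phi, W_out):
    """One adaptive graph convolution over skeleton joints (illustrative).

    x:        (C_in, T, V) joint features over T frames and V joints
    A:        (V, V) fixed skeleton adjacency
    B:        (V, V) freely learned adjacency (trained end to end)
    W_theta,
    W_phi:    (C_e, C_in) embedding projections for the data-dependent graph
    W_out:    (C_out, C_in) output feature projection
    """
    C_in, T, V = x.shape
    # Data-dependent adjacency: similarity of joint embeddings,
    # pooled over time and channels, normalized row-wise with softmax.
    theta = np.einsum('ec,ctv->etv', W_theta, x)          # (C_e, T, V)
    phi = np.einsum('ec,ctv->etv', W_phi, x)              # (C_e, T, V)
    sim = np.einsum('etv,etw->vw', theta, phi) / (theta.shape[0] * T)
    C_mat = softmax(sim, axis=-1)                         # (V, V)
    adj = A + B + C_mat                                   # adaptive adjacency
    y = np.einsum('ctv,vw->ctw', x, adj)                  # aggregate neighbors
    return np.einsum('oc,ctw->otw', W_out, y)             # (C_out, T, V)
```

Because B and the embedding projections are learned, the layer is not restricted to the physical bone connections, which is the source of the added flexibility the abstract refers to.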

Details

Title
Skeleton Action Recognition Based on Transformer Adaptive Graph Convolution
Author
Meng, Yue¹; Shi, Mengqi¹; Yang, Wenlu¹

¹ Shanghai Maritime University, Shanghai 201306
First page
012007
Publication year
2022
Publication date
Feb 2022
Publisher
IOP Publishing
ISSN
1742-6588
e-ISSN
1742-6596
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2635722664
Copyright
Published under licence by IOP Publishing Ltd. This work is published under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.