Full Text

Copyright © 2022 Maochang Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/

Abstract

The three-dimensional convolutional network (3DCNN) is an essential tool in motion-recognition research. This work optimizes the traditional three-dimensional convolutional network, introduces a self-attention mechanism, and proposes a new network model for analyzing and processing complex human-motion videos. In this study, average frame-skipping sampling, scaling, and one-hot encoding are used for data preprocessing so that more features are retained from the limited data. This paper designs a lightweight three-dimensional convolutional network combined with an attention mechanism, reducing the number of model parameters by more than 90%, to only about 1.7 million. A comparison of different models across classification tasks shows that the proposed model performs well on complex human-motion video classification, with a recognition rate 1%–8% higher than that of the C3D model.
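To make the pipeline the abstract outlines more concrete, below is a minimal, hypothetical PyTorch sketch of a lightweight 3D convolutional backbone followed by a self-attention layer, together with an even-stride frame-skipping sampler. All layer widths, kernel sizes, the 16-frame clip length, and the 101-class output are illustrative assumptions; the paper does not publish this code, and the actual Lite-3DCNN configuration may differ.

```python
# Hypothetical sketch of a "Lite-3DCNN + self-attention" classifier.
# Layer widths, kernel sizes, clip length, and class count are illustrative
# assumptions, not the authors' published configuration.
import torch
import torch.nn as nn

def skip_sample(video: torch.Tensor, num_frames: int = 16) -> torch.Tensor:
    """Frame-skipping sampling: pick num_frames frames at an even stride."""
    t = video.shape[1]                       # video: (C, T, H, W)
    idx = torch.linspace(0, t - 1, num_frames).long()
    return video[:, idx]                     # (C, num_frames, H, W)

class Lite3DCNN(nn.Module):
    def __init__(self, num_classes: int = 101):
        super().__init__()
        self.features = nn.Sequential(       # small 3D convolutional stack
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((8, 4, 4)),  # fix the output grid size
        )
        # Self-attention over the flattened spatio-temporal feature tokens.
        self.attn = nn.MultiheadAttention(embed_dim=128, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)                    # (B, 128, 8, 4, 4)
        tokens = f.flatten(2).transpose(1, 2)   # (B, 8*4*4, 128) token sequence
        attended, _ = self.attn(tokens, tokens, tokens)  # self-attention
        return self.head(attended.mean(dim=1))  # pool tokens, then classify

if __name__ == "__main__":
    clip = torch.randn(2, 3, 16, 112, 112)      # batch of two 16-frame clips
    model = Lite3DCNN()
    print(sum(p.numel() for p in model.parameters()))  # rough parameter count
    print(model(clip).shape)                    # torch.Size([2, 101])
```

For training, the one-hot label encoding the abstract mentions could be produced with `torch.nn.functional.one_hot` before computing the loss; the parameter count printed above confirms that a stack this shallow stays well under the C3D model's tens of millions of parameters.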

Details

Title
Lite-3DCNN Combined with Attention Mechanism for Complex Human Movement Recognition
Author
Zhu, Maochang 1; Sheng, Bin 1; Sun, Gengxin 1

1 College of Computer Science & Technology, Qingdao University, Qingdao 266071, China
Editor
Ning Cao
Publication year
2022
Publication date
2022
Publisher
John Wiley & Sons, Inc.
ISSN
1687-5265
e-ISSN
1687-5273
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2715339387