Abstract

Existing trackers based on the Siamese network show poor robustness in complex situations such as target occlusion, rapid target motion, and scale changes. To address this problem, we propose a new tracking framework based on a Siamese network with multi-scale fusion attention. The proposed algorithm enlarges the receptive fields of the last two layers of the backbone network to strengthen its ability to capture target information, and fuses the features output by the last three backbone layers. We add an improved visual attention model, applying a multi-scale channel attention module in this setting for the first time, to strengthen long-range dependencies among feature information and the learning of channel attention, so that the network selects more salient target features. To avoid the complex hyper-parameters introduced by target candidate boxes and to speed up network training, we adopt an anchor-free classification and regression network. Experimental evaluation on the OTB100 and VOT2016 datasets shows that the proposed algorithm is robust to challenges such as target occlusion, rapid target motion, and scale changes, and effectively improves tracking accuracy.
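The abstract describes fusing the last three backbone stages and gating the result with multi-scale channel attention. The following is a minimal PyTorch sketch of that idea only; the block names (MSCAttention, FusedSiameseHead), the channel widths, the reduction ratio, and the simple summation-based fusion are all illustrative assumptions, not the authors' exact design.

```python
# Sketch of multi-scale channel attention over fused backbone features.
# All module names and hyper-parameters here are hypothetical choices.
import torch
import torch.nn as nn


class MSCAttention(nn.Module):
    """Multi-scale channel attention: a global (pooled) branch plus a
    local (per-position) branch, combined to gate the input features."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = max(channels // reduction, 8)
        # Global branch: squeeze spatial dims, excite channels.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
        )
        # Local branch: point-wise convs keep spatial resolution.
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.global_branch(x) + self.local_branch(x))
        return x * attn


class FusedSiameseHead(nn.Module):
    """Projects the last three backbone stages to a common width, sums them,
    and applies multi-scale channel attention to the fused feature map."""

    def __init__(self, in_channels=(512, 1024, 2048), out_channels: int = 256):
        super().__init__()
        self.projections = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        self.attention = MSCAttention(out_channels)

    def forward(self, feats):
        # feats: three feature maps with identical spatial size,
        # e.g. from a stride-adjusted ResNet backbone.
        fused = sum(proj(f) for proj, f in zip(self.projections, feats))
        return self.attention(fused)


if __name__ == "__main__":
    head = FusedSiameseHead()
    feats = [torch.randn(1, c, 15, 15) for c in (512, 1024, 2048)]
    print(head(feats).shape)  # torch.Size([1, 256, 15, 15])
```

In an anchor-free tracker of this kind, the fused template and search features would typically be cross-correlated and passed to separate classification and box-regression branches; those heads are not shown here.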

Details

Title
Siamese Network with multi-scale fusion attention for Visual Tracking
Author
Xue, Shangjie 1; Yao, Wenjin 1; Yang, Wenjun 1

1 School of Mechanical Engineering, Nanjing University of Science and Technology (NJUST), Nanjing, Jiangsu, 210094, China
First page
062031
Publication year
2023
Publication date
Jun 2023
Publisher
IOP Publishing
ISSN
1742-6588
e-ISSN
1742-6596
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2828907066
Copyright
Published under licence by IOP Publishing Ltd. This work is published under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.