Abstract

Existing person re-identification (re-ID) methods rely on large numbers of cross-camera identity labels for training, and the data annotation process is tedious and time-consuming, which makes real-world re-ID applications difficult to deploy. To overcome this problem, we focus on the single camera training (SCT) re-ID setting, where each identity is annotated in only a single camera. Since no cross-camera annotation is required, data acquisition takes much less time, enabling fast deployment in new environments. To address SCT re-ID, we propose a joint comparison learning framework that splits the training data into three parts: single-camera labeled data, pseudo-labeled data, and unlabeled instances. In this framework, we iteratively (1) train the network while dynamically updating a memory that stores the three types of data, and (2) assign pseudo-labels to the unlabeled images using a clustering algorithm. In the model training phase, we jointly train on the three types of data to update the CNN model; this joint training continuously takes advantage of labeled, pseudo-labeled, and unlabeled images alike. Extensive experiments on two widely adopted datasets, Market1501-SCT and MSMT17-SCT, show the superiority of our method in the SCT setting. Specifically, our method outperforms state-of-the-art SCT methods in mAP by 42.6% and 30.1%, respectively.
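Though the abstract gives no implementation details, the loop it outlines (train with a dynamically updated memory, then re-cluster to refresh pseudo-labels) can be sketched compactly. Below is a minimal, hypothetical PyTorch sketch: DBSCAN stands in for the unnamed clustering algorithm, a linear layer stands in for the CNN backbone, and all hyperparameters are placeholder assumptions rather than the authors' configuration.

```python
# Minimal, runnable sketch (PyTorch + scikit-learn) of the iterative
# procedure the abstract describes. The backbone, the choice of DBSCAN,
# and all hyperparameters (eps, temperature, momentum) are illustrative
# assumptions, not the paper's actual configuration.
import torch
import torch.nn.functional as F
from sklearn.cluster import DBSCAN

torch.manual_seed(0)
N, D, C = 64, 32, 8                        # instances, feature dim, labeled IDs
images = torch.randn(N, 128)               # stand-in for real image batches
labels = torch.full((N,), -1)              # -1 marks an unlabeled instance
labels[:24] = torch.randint(0, C, (24,))   # single-camera labeled subset

model = torch.nn.Linear(128, D)            # stand-in for the CNN backbone
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
memory = F.normalize(torch.randn(N, D), dim=1)  # one memory slot per instance

for epoch in range(5):
    # Step (2): cluster current features of the unlabeled images into pseudo-IDs.
    with torch.no_grad():
        feats = F.normalize(model(images), dim=1)
    mask = labels == -1
    dist = (1 - feats[mask] @ feats[mask].t()).clamp(min=0).numpy()
    clusters = DBSCAN(eps=0.5, min_samples=2, metric="precomputed").fit_predict(dist)

    # Merge the three data types into one target space: labeled IDs keep
    # 0..C-1, clusters become pseudo-IDs, leftover images form singletons.
    targets = labels.clone()
    pseudo = torch.as_tensor(clusters)
    targets[mask] = torch.where(pseudo >= 0, pseudo + C, torch.full_like(pseudo, -1))
    n_pseudo = int(pseudo.max().item()) + 1 if pseudo.max() >= 0 else 0
    singles = targets == -1
    targets[singles] = C + n_pseudo + torch.arange(int(singles.sum()))
    n_cls = C + n_pseudo + int(singles.sum())

    # Class prototypes: mean memory feature of each (pseudo-)identity.
    proto = F.normalize(torch.zeros(n_cls, D).index_add_(0, targets, memory), dim=1)

    # Step (1): joint training on all three data types with a memory-based
    # comparison (contrastive-style) loss against the prototypes.
    feats = F.normalize(model(images), dim=1)
    loss = F.cross_entropy(feats @ proto.t() / 0.05, targets)  # temperature 0.05
    optimizer.zero_grad(); loss.backward(); optimizer.step()

    # Momentum update keeps the per-instance memory in step with the model.
    with torch.no_grad():
        memory = F.normalize(0.2 * memory + 0.8 * feats, dim=1)
    print(f"epoch {epoch}: loss {loss.item():.3f}, {n_cls} classes")
```

Here, labeled identities, pseudo-identities, and leftover unlabeled instances share one target space, so a single comparison loss covers all three data types at once, mirroring the joint training the abstract describes.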

Details

Title
Single Camera Person Re-identification with Self-paced Joint Learning
Author
Zhang, Rumeng 1; Li, Mengyao 1; Lv, Xueshuai 1; Gao, Ling 1

School of Information Science and Engineering, Shandong Normal University, Jinan 250014, Shandong Province, China; Institute of Data Science and Technology, Shandong Normal University, Jinan 250014, Shandong Province, China
First page
012045
Publication year
2023
Publication date
May 2023
Publisher
IOP Publishing
ISSN
1742-6588
e-ISSN
1742-6596
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2821390265
Copyright
Published under licence by IOP Publishing Ltd. This work is published under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.