
Abstract

Person tracking in hazardous goods factories can significantly improve security and safety. This article proposes a face verification model that can be used to record the travel paths of staff and related persons in a factory. Because face images are captured from dynamic crowds at the entrance–exit gates of workshops, face verification is challenged by polymorphic faces, poor illumination, and changes in a person’s pose. To handle these conditions, a new face verification model is proposed, composed of two deep learning neural network models. First, MTCNN (Multi-Task Cascaded Convolutional Neural Network) is used to build a face detector. Second, based on the SphereFace-20 network, we reconstruct a convolutional network architecture with embedded Batch Normalization layers and optimized network parameters; the resulting model, called the MDCNN, is used to extract discriminative face features. A set of dedicated processing algorithms handles polymorphic face images, and the models are trained on multi-view faces and various other types of face images. The experimental results demonstrate that the proposed model outperforms most existing methods on benchmark datasets: the Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) datasets without multi-view (accuracy 99.38% and 94.30%, respectively) and the CNBC/FERET datasets with multi-view (accuracy 94.69%).
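The abstract describes a pipeline in which detected faces are mapped to feature vectors and a pair of faces is verified by comparing those features. The MDCNN itself is not reproduced here; the following is a minimal sketch of only the final verification step, assuming L2-normalizable feature vectors and a cosine-similarity decision threshold (the threshold value and the toy 4-D features are illustrative assumptions, standing in for real high-dimensional embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-feature vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

def verify(feat_probe: np.ndarray, feat_gallery: np.ndarray,
           threshold: float = 0.5) -> bool:
    """Declare the two faces the same person when their feature
    similarity exceeds the threshold (value is an assumption)."""
    return cosine_similarity(feat_probe, feat_gallery) >= threshold

# Toy 4-D "features" standing in for real embeddings.
f1 = np.array([0.9, 0.1, 0.2, 0.1])
f2 = np.array([0.85, 0.15, 0.25, 0.05])  # same person, slightly changed pose
f3 = np.array([-0.2, 0.9, -0.1, 0.3])    # different person

print(verify(f1, f2))  # True  (features nearly parallel)
print(verify(f1, f3))  # False (features nearly orthogonal)
```

In a full system, the features would come from a detector/embedder pair (e.g., MTCNN crops fed into the trained feature network), and the threshold would be tuned on a validation set to trade off false accepts against false rejects.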

Details

Title
Face Verification Based on Deep Learning for Person Tracking in Hazardous Goods Factories
Author
Huang, Xixian 1; Zeng, Xiongjun 1; Wu, Qingxiang 2; Lu, Yu 3; Huang, Xi 1; Zheng, Hua 1

1 Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou 350108, China; [email protected] (X.H.); [email protected] (X.Z.); [email protected] (X.H.); [email protected] (H.Z.)
2 Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou 350108, China; Concord University College, Fujian Normal University, Fuzhou 350117, China
3 Concord University College, Fujian Normal University, Fuzhou 350117, China; [email protected]
First page
380
Publication year
2022
Publication date
2022
Publisher
MDPI AG
e-ISSN
2227-9717
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2633052521
Copyright
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.