© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Video stitching technology provides an effective wide-viewing-angle monitoring mode for industrial applications. The observation angle of a single camera is limited, while a monitoring network composed of multiple cameras captures many overlapping images. Watching many separate surveillance feeds causes viewing fatigue for the personnel involved and results in a low video utilization rate. In addition, current video stitching technology has poor adaptability and real-time performance. We propose an effective hybrid image feature detection method for fast stitching of mine surveillance video, using the effective information of video captured by multiple cameras under actual industrial coal mine conditions. The method integrates the Moravec corner detector and the scale-invariant feature transform (SIFT) feature extractor. After feature extraction, the nearest-neighbor method and the random sample consensus (RANSAC) algorithm are used to register the video frames. The proposed method reduces image stitching time and solves the problem of feature re-extraction caused by changes of observation angle, thus optimizing the entire video stitching process. Experimental results on real-world underground mine videos show that the optimized method stitches video at 21 fps, effectively meeting the real-time requirement, while the stitching quality remains stable and applicable under real-world conditions.
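The Moravec corner detector named in the abstract can be illustrated with a minimal pure-NumPy sketch. This is not the authors' implementation, only the classic Moravec response: for each pixel, the corner score is the minimum sum of squared differences between a local window and the same window shifted one pixel in four directions, so flat regions and straight edges score zero while corners score high. In the full pipeline described above, such corners would be paired with SIFT descriptors, nearest-neighbor matching, and RANSAC homography estimation (available, for example, through OpenCV).

```python
import numpy as np

def moravec_response(img, window=3):
    """Moravec corner response: for each interior pixel, the minimum
    sum of squared differences (SSD) between the local window and the
    same window shifted one pixel in four directions. Flat regions and
    straight edges score 0; corners score high."""
    h, w = img.shape
    r = window // 2
    shifts = [(1, 0), (0, 1), (1, 1), (1, -1)]
    response = np.zeros((h, w), dtype=np.float64)
    f = img.astype(np.float64)
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            patch = f[y - r:y + r + 1, x - r:x + r + 1]
            # The response is the weakest shift direction: an edge
            # always has one low-SSD direction, a corner has none.
            response[y, x] = min(
                np.sum((patch - f[y - r + dy:y + r + 1 + dy,
                                  x - r + dx:x + r + 1 + dx]) ** 2)
                for dy, dx in shifts)
    return response

# Toy usage: a bright square on a dark background. The square's corners
# respond strongly; flat regions and straight edges respond with 0.
img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 255
resp = moravec_response(img)
# resp[5, 5] > 0 (square corner); resp[10, 10] == 0 (flat interior)
```

A practical detector would additionally apply non-maximum suppression and a threshold to the response map to pick discrete corner points.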

Details

Title
Real-Time Video Stitching for Mine Surveillance Using a Hybrid Image Registration Method
Author
Bai, Zongwen 1; Li, Ying 2; Chen, Xiaohuan 3; Yi, Tingting 3; Wei, Wei 4; Wozniak, Marcin 5; Damasevicius, Robertas 6

1 School of Computer Science, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, Shaanxi Provincial Key Laboratory of Speech & Image Information Processing, Northwestern Polytechnical University, Xi’an 710129, China; School of Physics and Electronic Information, Shaanxi Key Laboratory of Intelligent Processing for Big Energy Data, Yan’an University, Yan’an 716000, China; [email protected] (X.C.); [email protected] (T.Y.)
2 School of Computer Science, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, Shaanxi Provincial Key Laboratory of Speech & Image Information Processing, Northwestern Polytechnical University, Xi’an 710129, China
3 School of Physics and Electronic Information, Shaanxi Key Laboratory of Intelligent Processing for Big Energy Data, Yan’an University, Yan’an 716000, China; [email protected] (X.C.); [email protected] (T.Y.)
4 School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China; [email protected]
5 Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland; [email protected] (M.W.); [email protected] (R.D.)
6 Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland; [email protected] (M.W.); [email protected] (R.D.); Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
First page
1336
Publication year
2020
Publication date
2020
Publisher
MDPI AG
e-ISSN
2079-9292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2436697863
Copyright
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).