Abstract

To address the difficulties facing current fire detection algorithms, including inadequate feature extraction, excessive computational complexity, limited deployability on resource-constrained devices, missed and inaccurate detections, and low accuracy, we developed a highly accurate algorithm named YOLOFM. We used LabelImg software to manually annotate a dataset of 18,644 images, named FM-VOC Dataset18644. We also constructed a FocalNext network, which uses the FocalNextBlock module from the CFnet network; this improves the integration of multi-scale information and reduces model parameters. We further proposed QAHARep-FPN, an FPN network that integrates quantization-aware and hardware-aware structures, effectively reducing the model's redundant computation. A new compression decoupled head, named NADH, was also designed to strengthen the correlation between the decoupled-head structure and the calculation logic of the loss function. Finally, in place of the CIoU loss for bounding box regression, we proposed a Focal-SIoU loss, which accelerates network convergence and improves regression accuracy. Experimental results showed that YOLOFM improved the baseline network's precision, recall, F1, mAP50, and mAP50-95 by 3.1%, 3.9%, 3.0%, 2.2%, and 7.9%, respectively. YOLOFM therefore strikes a balance between performance and speed, offering a more dependable and accurate solution for detection tasks.
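As an illustration of the loss substitution described in the abstract, the sketch below shows one plausible way to combine the published SIoU regression loss (angle, distance, and shape costs; Gevorgyan, 2022) with a focal IoU^γ weighting in the style of Focal-EIoU, in PyTorch. The function name `focal_siou_loss` and the hyperparameters `gamma` and `theta` are assumptions for illustration; the paper's exact Focal-SIoU formulation may differ.

```python
# Minimal sketch of a Focal-SIoU bounding-box loss, assuming SIoU
# (angle + distance + shape costs) scaled by an IoU**gamma focal weight
# as in Focal-EIoU. Hyperparameters are illustrative, not the paper's.
import math
import torch


def focal_siou_loss(pred, target, gamma=0.5, theta=4.0, eps=1e-7):
    """pred, target: (N, 4) boxes in (x1, y1, x2, y2) format."""
    # Plain IoU.
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Smallest enclosing box, used to normalise the distance cost.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0]) + eps
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1]) + eps

    # Angle cost: maximal when the centre offset is at 45 degrees.
    s_cw = (target[:, 0] + target[:, 2] - pred[:, 0] - pred[:, 2]) * 0.5
    s_ch = (target[:, 1] + target[:, 3] - pred[:, 1] - pred[:, 3]) * 0.5
    sigma = torch.sqrt(s_cw ** 2 + s_ch ** 2) + eps
    sin_a = torch.clamp(torch.abs(s_ch) / sigma, max=1 - eps)
    sin_b = torch.clamp(torch.abs(s_cw) / sigma, max=1 - eps)
    sin_alpha = torch.where(sin_a > math.sqrt(2) / 2, sin_b, sin_a)
    angle_cost = torch.cos(2 * torch.arcsin(sin_alpha) - math.pi / 2)

    # Distance cost, modulated by the angle cost.
    rho_x, rho_y = (s_cw / cw) ** 2, (s_ch / ch) ** 2
    g = angle_cost - 2
    distance_cost = 2 - torch.exp(g * rho_x) - torch.exp(g * rho_y)

    # Shape cost: penalise width/height mismatch between the two boxes.
    omega_w = torch.abs(w1 - w2) / torch.max(w1, w2).clamp(min=eps)
    omega_h = torch.abs(h1 - h2) / torch.max(h1, h2).clamp(min=eps)
    shape_cost = (1 - torch.exp(-omega_w)) ** theta + (1 - torch.exp(-omega_h)) ** theta

    siou = 1 - iou + 0.5 * (distance_cost + shape_cost)
    # Focal weighting: high-IoU (easier) boxes keep more of their loss,
    # sharpening the gradient contribution of well-matched anchors.
    return (torch.pow(iou, gamma) * siou).mean()
```

In a YOLOv5-style trainer, such a function would replace the CIoU term inside the box-regression branch of the loss computation; swapping the IoU variant there leaves the objectness and classification terms untouched.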

Details

Title
YOLOFM: an improved fire and smoke object detection algorithm based on YOLOv5n
Author
Geng, Xin¹; Su, Yixuan¹; Cao, Xianghong¹; Li, Huaizhou¹; Liu, Linggong¹

¹ Zhengzhou University of Light Industry, College of Building Environment Engineering, Zhengzhou, China (GRID: grid.413080.e) (ISNI: 0000 0001 0476 2801)
Article number
4543
Publication year
2024
Publication date
2024
Publisher
Nature Publishing Group
e-ISSN
2045-2322
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2931205095
Copyright
© The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.