
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

The timely and accurate recognition of multiple types of structural surface damage (e.g., cracks, spalling, and corrosion) is vital for ensuring the structural safety and service performance of civil infrastructure and for realizing the intelligent maintenance of smart cities. Deep learning and computer vision have profoundly advanced automatic structural damage recognition based on nondestructive testing techniques, especially non-contact vision-based algorithms. However, in the conventional supervised learning pipeline, recognition accuracy depends heavily on the volume of training data and the completeness of the damage data, which significantly limits model performance in real-world application scenarios; moreover, maintaining model performance and stability across multiple structural damage categories remains challenging. To address these issues, this study proposes a dual-stage optimization-based few-shot learning segmentation method that uses only a few images with supervised information for multi-type structural damage recognition. A dual-stage optimization paradigm is established, encompassing an internal network optimization based on meta-tasks and an external meta-learner optimization based on meta-batches. The underlying image features of various structural damage types are learned as prior knowledge to expedite adaptation across diverse damage categories with only a few samples. Furthermore, a mathematical framework of optimization-based few-shot learning is formulated to express the perception mechanism intuitively. Comparative experiments on a small-scale multi-type structural damage image set verify the effectiveness and necessity of the proposed method. The results show that the proposed method achieves higher segmentation accuracy for various types of structural damage than directly training the original image segmentation network. In addition, its generalization ability to an unseen structural damage category is also validated.
The proposed method provides an effective solution for image-based structural damage recognition with high accuracy and robustness for bridges and buildings, supporting the unmanned intelligent inspection of civil infrastructure by drones and robots in smart cities.
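The dual-stage paradigm described in the abstract (an internal optimization per meta-task, wrapped in an external optimization over a meta-batch) follows the general shape of optimization-based meta-learning. The sketch below illustrates that two-loop structure on a deliberately tiny scalar-regression problem using a first-order approximation (outer gradient evaluated at the adapted weight). It is an illustrative toy, not the authors' implementation: the function names, learning rates, and the scalar model are all assumptions made for the example.

```python
def loss_and_grad(w, xs, ys):
    # Mean squared error of the linear model y = w * x, and its gradient w.r.t. w.
    n = len(xs)
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / n
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
    return loss, grad

def inner_adapt(w_meta, xs, ys, lr=0.1, steps=5):
    # Stage 1 (internal optimization): starting from the meta-initialization,
    # take a few gradient steps on one meta-task's support set.
    w = w_meta
    for _ in range(steps):
        _, g = loss_and_grad(w, xs, ys)
        w -= lr * g
    return w

def meta_train(tasks, w0=0.0, meta_lr=0.05, epochs=300):
    # Stage 2 (external optimization): update the meta-initialization using the
    # query-set gradients of all tasks in the meta-batch. First-order
    # approximation: the outer gradient is taken at the adapted weight.
    w_meta = w0
    for _ in range(epochs):
        meta_g = 0.0
        for xs_s, ys_s, xs_q, ys_q in tasks:  # (support, query) per meta-task
            w_task = inner_adapt(w_meta, xs_s, ys_s)
            _, g_q = loss_and_grad(w_task, xs_q, ys_q)
            meta_g += g_q
        w_meta -= meta_lr * meta_g / len(tasks)
    return w_meta
```

After meta-training on a family of related tasks, `inner_adapt` from the learned initialization reaches a low query loss with only a handful of support samples, which is the few-shot behavior the paper exploits for segmenting new damage categories (there, `w` would be the weights of a segmentation network rather than a scalar).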

Details

Title
Multi-Type Structural Damage Image Segmentation via Dual-Stage Optimization-Based Few-Shot Learning
Author
Zhong, Jiwei 1; Fan, Yunlei 2; Zhao, Xungang 3; Zhou, Qiang 3; Xu, Yang 4

1 National Key Laboratory of Bridge Intelligence and Green Construction, Wuhan 430034, China; School of Civil Engineering and Architecture, Wuhan University of Technology, Wuhan 430070, China
2 School of Civil Engineering, Harbin Institute of Technology, Harbin 150090, China
3 National Key Laboratory of Bridge Intelligence and Green Construction, Wuhan 430034, China
4 School of Civil Engineering, Harbin Institute of Technology, Harbin 150090, China; Key Lab of Smart Prevention and Mitigation of Civil Engineering Disasters of the Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin 150090, China; Key Lab of Structures Dynamics Behavior and Control of the Ministry of Education, Harbin Institute of Technology, Harbin 150090, China
First page
1888
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2624-6511
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3098191315