Abstract

Large-scale optical sensing and precise, rapid assessment of seismic building damage in urban communities are increasingly demanded in disaster prevention and reduction. The common approach trains a convolutional neural network (CNN) for pixel-level semantic segmentation and does not fully consider the characteristics of the assessment targets. This study developed a machine-learning-based two-stage method for post-earthquake building location and damage assessment that accounts for the characteristics of satellite remote sensing (SRS) optical images: densely distributed, small buildings with imbalanced class counts. It comprises a modified You Only Look Once (YOLOv4) object-detection module and a support vector machine (SVM) classification module. In the first stage, multiscale features were extracted and fused from SRS images of densely distributed buildings by optimizing the YOLOv4 model with respect to the network structure, training hyperparameters, and anchor boxes. The fusion of improved multi-channel features, together with the optimized network structure and hyperparameters, significantly enhanced the average location accuracy of post-earthquake buildings. In the second stage, three statistics of the gray-level co-occurrence matrix (GLCM), i.e., the angular second moment, dissimilarity, and inverse difference moment, were found to effectively characterize earthquake damage in the located buildings. They served as texture features for distinguishing building damage intensities with the SVM model. The investigated dataset comprised 386 pre- and post-earthquake SRS optical images of the 2017 Mexico City earthquake, each with a resolution of 1024 × 1024 pixels. Results show that the average location accuracy of post-earthquake buildings exceeds 95.7% and that the binary classification accuracy for damage assessment reaches 97.1%. The high precision of the proposed two-stage method on densely distributed small buildings indicates the promising potential of computer vision for large-scale disaster prevention and reduction using SRS datasets.
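
For stage one, the abstract states that the YOLOv4 anchor boxes were optimized for small, dense buildings but does not give the procedure. A common way to do this, shown below as a minimal sketch rather than the authors' actual method, is to re-estimate anchors by k-means clustering of the training boxes' widths and heights under a 1 − IoU distance; all function and variable names here are illustrative.

```python
import numpy as np

def iou_wh(box, anchors):
    """IoU between one (w, h) pair and k anchor (w, h) pairs,
    treating all boxes as if they share the same top-left corner."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (w, h) pairs with 1 - IoU as the distance, as popularized
    by the YOLO family of detectors."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign every box to its highest-IoU (lowest-distance) anchor.
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        # Move each anchor to the mean (w, h) of its cluster.
        anchors = np.array([boxes[assign == j].mean(axis=0)
                            if np.any(assign == j) else anchors[j]
                            for j in range(k)])
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area

# Usage: wh is an (N, 2) array of labeled building box sizes in pixels;
# nine anchors cover YOLOv4's three detection scales.
# anchors = kmeans_anchors(wh, k=9)
```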
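For stage two, the three GLCM statistics named in the abstract can be computed with standard libraries. The sketch below assumes grayscale building patches already cropped by the detector; it uses scikit-image (0.19+, where the functions are spelled graycomatrix/graycoprops, and the inverse difference moment is reported under the name 'homogeneity') and scikit-learn. The patch data, labels, and SVM kernel choice are illustrative, not from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.svm import SVC

def glcm_features(patch):
    """Three GLCM texture statistics from one uint8 grayscale patch:
    angular second moment (ASM), dissimilarity, and inverse difference
    moment (exposed by scikit-image as 'homogeneity')."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()  # average over the 4 angles
                     for p in ("ASM", "dissimilarity", "homogeneity")])

# Dummy stand-ins for detector output; replace with real cropped patches
# and damage labels (0 = intact, 1 = damaged).
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

X = np.stack([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf")  # kernel choice is an assumption, not from the paper
clf.fit(X, labels)
```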

Details

Title
A Two-Stage Seismic Damage Assessment Method for Small, Dense, and Imbalanced Buildings in Remote Sensing Images
Author
Wang, Yu 1; Cui, Liangyi 1; Zhang, Chenzong 1; Chen, Wenli 2; Xu, Yang 3; Zhang, Qiangqiang 1

1 Key Laboratory of Mechanics on Disaster and Environment in Western China, The Ministry of Education of China, Lanzhou 730000, China; School of Civil Engineering and Mechanics, Lanzhou University, Lanzhou 730000, China; [email protected] (Y.W.); [email protected] (L.C.); [email protected] (C.Z.)
2 Key Laboratory of Mechanics on Disaster and Environment in Western China, The Ministry of Education of China, Lanzhou 730000, China; School of Civil Engineering and Mechanics, Lanzhou University, Lanzhou 730000, China; Harbin Institute of Technology, School of Civil Engineering, Harbin 150090, China; [email protected] (W.C.)
3 Harbin Institute of Technology, School of Civil Engineering, Harbin 150090, China; [email protected]
First page
1012
Publication year
2022
Publication date
2022
Publisher
MDPI AG
e-ISSN
2072-4292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2633130098
Copyright
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).