Abstract
As disasters grow in frequency and intensity, the opportunities to apply Artificial Intelligence (AI) to disaster risk reduction are becoming increasingly prominent. This paper discusses various AI-based approaches including crowdsourcing, Internet of Things (IoT), aerial imagery analysis, videos from unmanned aerial vehicles (UAVs), as well as airborne and terrestrial Light Detection and Ranging (LiDAR). It also analyses the methodology of AI- and satellite imagery-based approaches to measuring the costs of disasters, using the case of the 2018 earthquake and tsunami in Sulawesi, Indonesia as an example.
JEL classification: O33, Q54, O53.
Keywords: Artificial Intelligence (AI), satellite imagery, disaster cost assessment, disaster response, Sulawesi earthquake, Big Data
Foreword
In the face of more frequent and damaging disasters, artificial intelligence (AI) and Big Data are poised to play important roles in enhancing disaster resilience. AI can be used in disaster risk reduction in numerous ways, such as contributing to early warning systems, as well as forecasting and prediction activities. It can also be used for damage assessment, risk mapping, communication support, and real-time monitoring and detection. The use of AI enables faster predictions, which would bolster disaster preparation and help with formulating quicker and more efficient responses and recovery strategies. Integrating AI into disaster management and mitigation efforts is therefore not only beneficial, but essential.
As will be discussed further in this paper, AI offers immense value for estimating the costs of disasters. Assessing disaster costs is imperative for decision making, resource distribution, and reconstruction following a disaster; however, various challenges remain, including limitations in data availability and capacity for data collection. The increasing volume and complexity of data are particularly challenging for traditional methods. AI can serve as a supplementary tool for disaster cost estimation, providing benefits such as faster response times, reduced pressure on human resources, and less potential for human error. These could in turn lead to lower costs, increased speed, and improved assessment credibility.
Considering these advantages, the use of AI in disaster risk reduction could provide a valuable contribution to disaster policy-making processes. The earlier OECD report Economic Outlook for Southeast Asia, China and India 2024: Developing Amid Disaster Risks discussed the development of disaster-related technology, highlighting AI's burgeoning role in Emerging Asian countries.
Acknowledgements
The authors would like to thank Hanran Chen, Prasiwi Ibrahim, Alexander Hume, En-kai Lin, and Cheng-Wei Lin for their excellent inputs and support. The authors would also like to thank Setsuko Saya from the OECD Development Centre; David English and Catherine Gamper from the OECD Environment Directorate; and Julia Carro and Alice Holt from the OECD Directorate for Science, Technology and Innovation for their useful comments on an earlier version of this paper.
The paper also benefited from discussion with participants at the Asian Regional Roundtable, jointly organised by the OECD, the ASEAN+3 Macroeconomic Research Office (AMRO), the Asian Development Bank (ADB), the ADB Institute (ADBI), and the Economic Research Institute for ASEAN and East Asia (ERIA) in December 2024. We gratefully acknowledge financial support received from the Government of Japan.
*Authors: Kensuke Molnar-Tanaka, OECD Development Centre (corresponding author: [email protected]) and Kuo-Shih Shao, Sinotech. The views expressed in this paper are those of the authors and do not necessarily reflect the views and opinions of their organisations. Any remaining errors are the responsibility of the authors.
1 Introduction
Recent years have seen more frequent disasters causing greater damage. Artificial intelligence (AI) can play a significant role in optimising disaster response and reducing resource waste, assessing damage, and contributing to the design of recovery plans (OECD, 2024).
Indeed, AI and Big Data have already been applied in disaster risk reduction. For instance, AI-based approaches have contributed to projections of the magnitude of sea level rise (Bahari et al., 2023), which is a pressing issue for many Southeast Asian countries. There are various models, such as machine-learning-based models, that offer short- and long-term projections. For instance, Haasnoot et al. (2021) use Big Data to project the magnitude of sea level rise by 2100 and 2150 under various adaptation scenarios, noting that needs differ by country. Another example is related to extreme heat events; AI-based approaches demonstrate better predictive ability than conventional techniques that rely on scarce observational data (Miloshevich et al., 2023). Rare event algorithms, which are already used for studies in biology, chemistry, and physics, can be designed to sample extreme heatwaves in models and reveal their characteristics (Ragone, Wouters and Bouchet, 2017).
Additionally, AI can significantly facilitate the estimation of disaster costs, providing policy makers with a multitude of benefits. Compared with traditional approaches, AI-based cost estimation offers faster responses, lower human-resource requirements (particularly for specialists, such as electricians or various types of engineers), and less human error. Not only does this reduce costs for policy makers and insurers, but it can also improve the credibility of their assessments.
The purpose of this paper is therefore to examine different uses for AI technologies in disaster response, with a focus on the methodological aspects of measuring disaster costs. The paper explores how AI and satellite imagery can be used to assess damage costs, providing critical information to policy makers for their response planning. In particular, the paper provides methodological discussions in a high level of technical detail on how AI technologies and satellite imagery may be used, using the example of the 2018 Sulawesi earthquake.
The structure of the paper is as follows. The paper begins with a general overview of the use of various AI tools in disaster risk mitigation. A technical overview of a methodology for measuring disaster costs using AI- and satellite imagery-based analysis is explored in Section 3. Section 4 provides a conclusion as well as challenges for the measurement of disaster damage costs using AI.
2 Measuring disaster damage costs with AI tools
Damage cost assessment is pivotal for emergency decision making, resource distribution, and reconstruction activities following disasters, making it one of the most critical tasks in post-disaster response. Measuring disaster costs is nonetheless far from straightforward, although several of the challenges involved can be addressed effectively using AI.
Firstly, there is a notable deficiency in pre-disaster data, as well as limited capabilities in post-disaster data collection and analysis, particularly in developing countries (Wouters et al., 2021), so an analytical method based on limited traditional datasets in combination with more easily accessible alternative datasets is needed. Secondly, as relying on expert-driven approaches for damage assessment can lead to inconsistencies and unreliability (Edrisi and Askari, 2019), results based on statistical and algorithmic analysis can provide useful supplementary information. Thirdly, the increasing volume and complexity of available data present challenges to traditional data collection and processing methods (Sun, Bocchini and Davison, 2020), hence a more powerful approach is required.
Big Data primarily refers to datasets that are too large or complex to be dealt with by traditional data-processing application software. The number of available datasets is growing rapidly, which increases the difficulty of data processing but provides valuable sources for more complex and accurate analysis (Yu, Yang and Li, 2018). Due to its effective application in other fields and previous trials in disaster management, the most recent literature on post-disaster cost assessment recommends the collection and analysis of diverse and extensive datasets, comprising pre-disaster information, spatial imagery, and mobile data (Jeggle and Boggero, 2018; World Bank/Global Facility for Disaster Reduction and Recovery, 2023). It is important to note that appropriate tools and methods for data collection differ by assessment target (Box 1).
In this context, traditional analytical methods are not necessarily sufficient for timely and accurate analysis of Big Data. Artificial intelligence, with its exceptional data processing and analytical capabilities, serves as a useful tool for post-disaster damage cost assessment requiring extensive datasets (Sun, Bocchini and Davison, 2020). The main categories of Big Data sources include social media, crowdsourcing, mobile GPS, geographic information systems (GIS), Internet of Things (IoT), satellite imagery, aerial imagery, and videos from unmanned aerial vehicles (UAVs), as well as airborne and terrestrial Light Detection and Ranging (LiDAR). Some types have been incorporated into analysis due to advancements in data processing techniques (e.g. social media), while others became widely used due to improvements in the quality of data collection (e.g. aerial imagery).
2.1. Social media
Social media platforms have strong interactive capabilities that facilitate the creation, sharing and aggregation of content, ideas, interests, and other forms of expression (Kietzmann et al., 2011). This information encompasses text, images, videos, and more, making social media the most widely used tool for online communication in daily life. The main features of social media information include, but are not limited to, real-time dissemination, broad coverage, and a richness in user-generated content.
When disaster strikes, social media is frequently used to access emergency information, send warnings, and communicate with affected individuals (Razavi and Rahbari, 2020). Following a disaster, affected individuals often utilise social media to connect with others, sharing details about the severity of the disaster, casualties, and property damage. Such information is crucial for evaluating the real-time situation and supporting responses. Therefore, the richness, accessibility, and timeliness of social media data make it a valuable supplement to other data sources, especially in scenarios where traditional ground-based data collection is challenging (Xing et al., 2021) or pre-disaster data for specific areas is unavailable (Khan et al., 2022).
The first step in utilising social media information is extracting key information related to the severity of disasters from a large amount of raw social media data. Consisting of images and textual content, social media data is usually processed using Convolutional Neural Networks (CNN), which specialise in image processing (Hao and Wang, 2020; Li et al., 2022; Xing et al., 2021). Other machine-learning techniques such as random forests are utilised as well (Hao and Wang, 2020; Khan et al., 2022). Images and textual content from social media are input into such machine-learning models, which employ keyword extraction (for text input) and image recognition to extract detailed information and generate classifications of the disaster's severity.
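As a simple illustration of the text-classification part of this workflow, the hedged sketch below trains a random-forest classifier on a handful of invented posts and severity labels; the posts, labels and model settings are placeholders, not those used in the cited studies.

```python
# Illustrative sketch (not the cited authors' code): classifying social media posts
# by disaster severity with a random forest, one of the techniques mentioned above.
# The example posts and severity labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

posts = [
    "our roof collapsed and the street is flooded",   # hypothetical training posts
    "felt a small tremor, no visible damage",
    "building next door is on fire, people trapped",
    "power is back, everything fine now",
]
severity = ["severe", "minor", "severe", "none"]       # hypothetical labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=200))
model.fit(posts, severity)

print(model.predict(["water rising fast, house badly damaged"]))
```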
Social media information is often collected and processed together with the geographical information of the signal. Geotagged social media data can be collected quickly by streaming it from the APIs provided by social media platforms. The geolocation-tagged severity information extracted from social media is then integrated with additional data, such as architectural vulnerability and socio-economic indices, collectively serving as inputs for further cost assessment in both traditional estimation models, such as the FEMA model, and AI-based models.
The analysis of social media enables the extraction of critical information on disaster impacts, essential for a comprehensive understanding of the damages incurred. Integrating social media analysis with other data, such as disaster-causing factors, disaster-bearing carriers (exposure and vulnerability) and other factors, through neural networks can further refine disaster cost assessment (Li et al., 2022).
Here are some examples from the literature that utilised social media data in assessing disaster damage cost:
* Earthquake, US: Social media data originating from the larger San Francisco Bay Area around the Napa (California, USA) earthquake were used with a text recognition model. The severity and impact of the earthquake were extracted from a particular social media platform using words such as "earthquake" and "damage". Then, the loss model for earthquakes provided by FEMA (Loss = Hazard · Vulnerability · Exposure; a minimal numerical sketch of this relation is given after this list) was used to generate a modelled damage map for the earthquake event. The result was validated by the US Geological Survey and the HAZUS loss model (Resch, Usländer and Havas, 2017).
* Typhoon, People's Republic of China (hereafter "China"): Microblog media data, combined with meteorological data (maximum wind, daily maximum local rainfall), city statistics (urban GDP, resident population, urban annual rainfall, and urban agricultural land area) and historical economic losses reported by the government for Typhoons Mangkhut and Lekima (Guangdong and Zhejiang, China), were used with Convolutional Neural Network (CNN) and Artificial Neural Network (ANN) models. Firstly, a text CNN model was trained to classify user posts collected from the microblog platform and to relate the various categories to disaster losses. Secondly, a back-propagation ANN model was applied to obtain a reliable rapid damage assessment. The results show that the model incorporating social media data performs better than the traditional model (Li et al., 2022).
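To make the FEMA-style loss relation cited in the first example concrete, the short sketch below multiplies hazard, vulnerability and exposure cell by cell over a small hypothetical grid; all values are illustrative placeholders rather than calibrated inputs.

```python
# Minimal numerical sketch of the FEMA-style relation Loss = Hazard x Vulnerability x Exposure,
# applied cell by cell to a small hypothetical grid; values are illustrative only.
import numpy as np

hazard = np.array([[0.8, 0.4], [0.2, 0.6]])             # shaking intensity index per cell (0-1)
vulnerability = np.array([[0.5, 0.7], [0.3, 0.4]])      # expected damage fraction at full hazard
exposure = np.array([[2.0e6, 1.5e6], [0.8e6, 3.0e6]])   # replacement value per cell (USD)

loss = hazard * vulnerability * exposure                # element-wise product, USD per cell
print(loss, loss.sum())
```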
However, social media analysis has limitations. Firstly, there may be discrepancies between the demographics of social media users and the affected populations, causing bias in certain areas or populations. Secondly, user habits can lead to inherent biases in the data collected (Reynard and Shirgaokar, 2019), resulting in noisy and unstructured social media datasets. Thirdly, there are always concerns related to data security and privacy. Lastly, social media data are mainly used to extract or classify the severity of a disaster; in many cases, a larger model combining them with other types of data is needed to assess damage costs in detail.
2.2. Crowdsourcing
Crowdsourcing refers to obtaining information, insights, and opinions from a large group of people, typically through the internet and other digital platforms. It usually involves contributions from a wide range of individuals and is used in cases such as product or service reviews, market research, and disaster responses.
Crowdsourcing is not a new data collection technique employed in disaster response, as conducting surveys among disaster-affected populations has long been included in governmental guidance. Within the FEMA framework for post-disaster cost assessment, information referral services used by people and individual client assistance information can be used to help illustrate the scale and magnitude of a disaster. Information reported by individuals in affected areas, including descriptions, photos, and videos of damaged homes and infrastructure can provide a real-time snapshot of the situation. This can aid decision makers in understanding the condition of disaster-affected populations, the severity of the disaster, and the associated cost, thereby facilitating informed decision making.
Since social media data are mostly passively contributed, they may be plagued by issues such as poor quality and scant usefulness, necessitating rigorous data verification and precise data processing. In comparison, data gathered through crowdsourcing surveys are more structured and more effectively meet the specific needs of the issuers (Yu, Yang and Li, 2018). In recent years, the widespread use of mobile phones and the internet has significantly facilitated the online dissemination of surveys, thus accelerating the collection and management of crowdsourced information.
Although crowdsourced information has helped save lives and speed recovery efforts in many disasters worldwide (Riccardi, 2016), the increase in data volume has also posed data processing challenges. During the 2013 Colorado wildfires, authorities were inundated with extensive reports on fire locations and structural damage, the verification and processing of which proved to be laborious and time-intensive (Riccardi, 2016).
The development of artificial intelligence frameworks provides potent tools for processing and analysing such data. This is exemplified by the PetaBencana system in Indonesia. The 2020 torrential rains in Jakarta served as the inaugural application of the platform. When users input flood-related keywords into the platform, the language model prompts them for their geographic location and the specific flood conditions there. It then uses the responses to create real-time maps depicting the disaster situation. Thousands of users turned to the PetaBencana chatbot to report rising water levels and other disruptions. The resulting map was consulted more than 259 000 times during the peak phase of the flooding that ravaged the capital.
Other research shows that, given the opaque nature of AI models, integrating them with crowdsourced data could potentially enhance the models' accuracy and interpretability (Zhang et al., 2019). The effectiveness of surveys can be compromised by the complexity and subjectivity of the damage classification task and potential biases in participants' comprehension, highlighting the importance of survey design in obtaining valid data (Khajwal and Noshadravan, 2021). Privacy, data protection and security issues are critical when conducting analysis of crowdsourced data, especially when the disaster affects multiple regions with different legal and political systems (Poblet, Garcia-Cuesta and Casanovas, 2017).
2.3. Mobile GPS
Mobile GPS (Global Positioning System) refers to the use of GPS in mobile devices to determine the precise location of the devices. It is a technology that utilises satellites to provide spatial and temporal information anywhere on Earth where there is an unobstructed line of sight to four or more GPS satellites regardless of weather conditions.
In the context of disaster cost assessment, mobile GPS plays a crucial role by enabling accurate and efficient data collection and mapping. It can be used to detect human mobility and behaviour during large-scale natural disasters, providing essential information for understanding the extent of damage and planning recovery efforts (Yu, Yang and Li, 2018).
A primary application of mobile GPS information is to enhance social media and crowdsourced data with a geographical tag. On the one hand, disaster-related data with geographic tags can be disseminated to affected communities using real-time mapping, thereby aiding both public evacuation efforts and government-led rescue operations. On the other hand, data detailing the geographic extent of disaster impacts can be combined with pre-existing map data, including geographic, architectural, demographic, and economic information to estimate disaster cost using either AI or traditional models. In the previously discussed examples of social media and crowdsourcing, such data typically features mobile GPS geographical tagging.
Furthermore, based on the three basic components of GPS, which are absolute location, relative movement, and time transfer, analysing shifts in GPS data before and after a disaster can reveal changes in mobility patterns, thus indirectly gauging the impact on lives and serving as a method for indirectly assessing economic losses (Yu, Yang and Li, 2018). Research shows that human behaviour and mobility following large-scale disasters sometimes correlate with normal mobility patterns, and are also highly affected by factors including, but not limited to, social relationships, disaster intensity, damage level, the quality and availability of government-appointed shelters, news reporting, and large population flows (Song et al., 2014). Analysis of variations in movement trajectories before and after a disaster can provide insight into its severity (Song et al., 2014) and assist in assessing the recovery of economic activities thereafter (Yabe, Zhang and Ukkusuri, 2020). For example, business visitation data from mobile GPS were used to quantify the economic impact of disasters on businesses after Hurricane Maria in Puerto Rico (Yabe, Zhang and Ukkusuri, 2020). The availability of large-scale human mobility data enables researchers to observe daily visit counts for businesses at an unprecedented spatio-temporal granularity, and a Bayesian structural time series model was used to predict the counterfactual (that is, the scenario in which the disaster had not occurred) performance of affected businesses.
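As a much simplified illustration of this counterfactual logic, the sketch below compares invented post-disaster visit counts against a plain pre-disaster average; the cited study uses a Bayesian structural time series model rather than this naive baseline, and all numbers are placeholders.

```python
# Simplified sketch of the counterfactual idea described above (the cited study uses a Bayesian
# structural time series model; here a plain pre-disaster baseline stands in for it). The daily
# visit counts are invented placeholders.
import numpy as np

pre_disaster_visits = np.array([120, 115, 130, 125, 118, 122, 127])   # daily visits before the event
post_disaster_visits = np.array([40, 35, 55, 60, 70, 80, 90])         # daily visits after the event

baseline = pre_disaster_visits.mean()                 # stand-in for the counterfactual forecast
relative_loss = 1 - post_disaster_visits / baseline   # fraction of "missing" visits per day
print(relative_loss.round(2), f"average shortfall: {relative_loss.mean():.0%}")
```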
In the case of pandemics, GPS technology plays a crucial role in disease prevention and management. Utilising GPS coordinates on smartphones for digital contact tracing enhances the accuracy of location data capture and helps limit the spread of viruses (Ozdenerol, 2023). Additionally, it is utilised to monitor quarantine adherence and assess social group behaviours for potential contagion risks (Poolian et al., 2022). During the COVID-19 pandemic, GPS data from mobile devices were used to identify and alert individuals who had come into close contact with infected persons, significantly aiding rapid response measures, although this faced challenges related to public adoption and data privacy concerns. In addition, mobile GPS data were integrated into pandemic models to capture COVID-19 transmission dynamics and the effectiveness of non-pharmaceutical interventions. TraceTogether in Singapore, Corona 100m in South Korea, COVIDSafe in Australia, and Aarogya Setu in India are such examples.
2.4. Geographic Information System (GIS)
Geographic Information System (GIS) is a framework for gathering, managing, analysing, and visualising spatial and geographic data. Because it is efficient and allows for convenient data integration and communication, GIS is widely used in fields such as urban planning, environmental management, transportation, and disaster management.
GIS plays a crucial role in disaster management by providing tools for analysing spatial data, creating maps, and supporting information integration. On the one hand, GIS can serve as an integrated platform for data and for disaster prediction and assessment models, which allows decision makers to view, manipulate, and extract information from geospatial data, thereby enhancing understanding and facilitating application. On the other hand, GIS acts as an extensive spatial database, where historical data, alongside pre- and post-disaster information, are utilised as datasets for assessing disaster costs (Choi and Song, 2022; Jin et al., 2022).
A notable example is the use of GIS as an integrated analytic platform to present economic loss data from four earthquakes in the Yunnan Province of China (Zhang, Yang and Cao, 2019). The isoseismal intensity model used to calculate the earthquake disaster region, the damaged buildings model, the economic loss assessment model, and the fatalities assessment model were incorporated into layers of the GIS framework to create a disaster assessment map, using geographic and socio-economic data as the inputs for these models. Disaster management requires the integration of information from a variety of sources, making GIS a key technology to collect, store, analyse, and display a large amount of spatially distributed information layers. It organises layers of information into visualisations using maps and 3D scenes and is critically important for comprehensive disaster management.
Another example is the use of GIS to develop a fragility method for building cost assessment (Choi and Song, 2022; Jin et al., 2022). The geographic distribution of properties, fragility functions and curves, and the severity of disasters are all considered necessary information for inferring the loss to buildings. By overlaying the map of hazard level onto the property distribution map, the damage estimation can be carried out either in an external programme or within the GIS application, depending on the complexity of the fragility functions and the damage estimation model.
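The sketch below illustrates this fragility-based idea under stated assumptions: a lognormal fragility curve converts the hazard intensity at each property into a damage probability, which is multiplied by the building's value; the curve parameters and building records are invented and do not come from the cited studies.

```python
# Hedged sketch of the fragility-based approach described above: a lognormal fragility
# curve converts hazard intensity at each property into a damage probability, which is
# multiplied by the building value. Parameter values and building records are invented.
from math import log
from statistics import NormalDist

def fragility(intensity, median=0.4, beta=0.6):
    """P(damage state reached | hazard intensity), lognormal CDF form."""
    return NormalDist().cdf(log(intensity / median) / beta)

buildings = [  # hypothetical GIS records: (hazard intensity at site, replacement value)
    (0.25, 120_000),
    (0.55, 300_000),
    (0.90, 80_000),
]
expected_loss = sum(fragility(pga) * value for pga, value in buildings)
print(round(expected_loss))
```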
Due to its comprehensive capabilities in many fields, numerous GIS databases are provided by governmental bodies, scientific research institutions, and commercial entities. These databases can be regarded as powerful tools in disaster management as well. However, since the effectiveness of GIS depends heavily on the accuracy and timeliness of data, as well as on technical expertise, its high cost and limited availability also present challenges for its application in developing regions.
2.5. Internet of Things (IoT)
The Internet of Things (IoT) is the inter-networking of physical devices and objects whose state can be altered via the internet, with or without the active involvement of individuals (OECD, 2023). IoT refers to the network of physical objects equipped with sensors, software, and other technologies to connect and exchange data with other devices and systems over the internet. The key components of IoT include devices, sensors, connective networks, data processing tools, automation, user interfaces, and data storage via cloud technology or other means. IoT can enable the automation of routine tasks, improve real-time data-based decision making, and enhance responsive services; it has therefore been widely used in smart homes, healthcare, manufacturing, agriculture, and other automation fields.
In disaster management, rapidly acquiring timely and effective information is essential for decision makers, and IoT sensors can be widely deployed across key infrastructure and strategic locations to monitor various types of sought-after data. Encompassing technologies such as remote sensing, smart cameras, water level sensors, smart homes, smart wearables, weather sensors, and smart vehicle data, IoT can play a pivotal role throughout various phases of disaster management, especially helping reduce physical and human damage through early warning systems, as well as generating data critical for post-disaster analysis (Van Ackere et al., 2019).
In early warning systems, IoT-enabled sensors can monitor key environmental conditions, which can be used to detect early signs of disasters, triggering early warnings to authorities and communities. In addition, IoT sensors can detect stress, cracks, and other parameters of key infrastructure, which can be used for infrastructure monitoring and to assess the risk of building damage during disasters. During the post-disaster assessment phase, real-time data collected by earthquake, water level, or other available sensors can help decision makers effectively obtain information under adverse road or weather conditions.
IoT-enabled sensors are used in typical disasters, including earthquakes, wildfires, and floods, among others (Damasevicius, Bacanin and Misra, 2023). In earthquake monitoring, sensors are used to monitor ground motion and seismic activity, which helps issue alerts and warnings to the public, thereby providing faster and more accurate assessments when traditional methods are hindered by blocked roads. In flood monitoring, flood sensors are used to detect water levels and changes in water pressure. Combined with drones and satellite imagery, IoT contributed significantly to the response to Hurricane Harvey in 2017. In wildfire monitoring, IoT sensors are used to monitor air quality, measuring pollutants and smoke levels to assess environmental impacts and health risks, helping guide public responses, evacuation plans, and severity assessment in the California wildfires of 2018.
Machine-learning algorithms, including support vector machines, artificial neural networks, clustering, and other regression models are used to process and analyse IoT data (Arshad et al., 2019; Deowan et al., 2022). Based on these algorithms, researchers have developed numerous IoT-based neural network models that enhance disaster management strategies. For instance, sensors gathering data on water levels and water pressure are already utilised to capture the real-time dynamics of flood events. Alongside advancements in machine-learning models like Convolutional Neural Networks (CNNs), this sensory data, when integrated with photographic information, facilitates predictions about the future dynamics of floods (Arshad et al., 2019).
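Before such learned models are applied, IoT streams are often screened with simple rules; the hedged sketch below shows a threshold-based check on simulated water-level readings of the kind an early-warning pipeline might run, with all sensor values and thresholds invented for illustration.

```python
# Hedged sketch of an IoT-style early-warning rule of the kind described above: a rolling check
# on simulated water-level readings that raises an alert when the level or its rate of rise
# exceeds thresholds. Sensor values and thresholds are invented placeholders.
import numpy as np

readings = np.array([1.2, 1.3, 1.3, 1.6, 2.1, 2.9, 3.8])   # water level in metres, hourly
LEVEL_THRESHOLD = 3.0                                        # absolute danger level (m)
RISE_THRESHOLD = 0.5                                         # alarming rise per hour (m)

for t in range(1, len(readings)):
    level, rise = readings[t], readings[t] - readings[t - 1]
    if level > LEVEL_THRESHOLD or rise > RISE_THRESHOLD:
        print(f"hour {t}: ALERT (level={level:.1f} m, rise={rise:.1f} m/h)")
```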
The advantages of IoT for disaster management include real-time monitoring, early detection, improved communication, and automated response. However, the reliability of IoT sensors may be challenged in extreme conditions, and sensor systems need to be installed and maintained with expertise (Damasevicius, Bacanin and Misra, 2023). In addition, data security and the risk of cyber-attacks pose significant challenges to the utilisation of IoT data. IoT devices may be susceptible to various attacks and threats, leading to information manipulation and fake detection issues (Van Ackere et al., 2019).
2.6. Satellite imagery
Satellite imagery refers to images of Earth taken by satellites orbiting the planet. These images are captured using various types of sensors, including optical, radar, and infrared, with each providing different kinds of data about the Earth's surface and atmosphere. These are applied in many fields including, but not limited to, environmental monitoring, agriculture, urban planning, and disaster management.
The recent development of high-resolution satellite imagery technologies has established a solid foundation for analysing satellite images pre- and post-disaster (Yu, Yang and Li, 2018). Satellite imagery, compared to other data collection tools, provides extensive spatial and temporal coverage (Kim et al., 2022) and is less technically demanding, with higher cost-efficiency than other data networks, making it an essential tool for post-disaster analysis (Wouters et al., 2021).
Comparative analysis of pre- and post-disaster satellite imagery is a typical approach to assessing disaster costs, often utilised together with aerial imagery and other types of data (Xie et al., 2020). By comparing time series images, trained models can identify and classify areas of destruction, partial damage, or undamaged buildings, thereby helping to prioritise response efforts and conduct further cost assessment. Such approaches are extensively applied to the evaluation of structural building damage (Miura et al., 2022), and several research institutions have established databases of satellite images depicting building losses, which can serve as training sets for future disaster assessments.
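As a rough illustration of the pre/post comparison idea (and not a substitute for the trained models discussed below), the sketch under stated assumptions flags pixels whose spectral difference between co-registered pre- and post-event arrays is unusually large; the arrays and the percentile threshold are placeholders.

```python
# Simplified sketch of pre/post satellite image comparison: pixels whose spectral change
# magnitude exceeds a percentile threshold are flagged as a crude change map.
# Arrays and threshold are placeholders, not the trained-model approach described in the text.
import numpy as np

pre = np.random.rand(4, 1024, 1024)    # placeholder pre-event image, 4 bands
post = np.random.rand(4, 1024, 1024)   # placeholder post-event image, co-registered

diff = np.linalg.norm(post - pre, axis=0)          # per-pixel spectral change magnitude
change_mask = diff > np.percentile(diff, 95)       # top 5% of change flagged
print(change_mask.mean())
```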
Machine-learning techniques are particularly effective in analysing satellite imagery from before and after disasters, providing key insights into the extent of damages incurred (Kim et al., 2022; Wouters et al., 2021; Xie et al., 2020). Some examples of how satellite imagery and machine learning have been used to assess disaster damages are as follows:
* Hurricane, US: A deep-learning-based model that uses pairs of pre- and post-disaster satellite images to identify water-related disaster-affected regions was employed (Kim et al., 2022). Hurricanes Matthew, Harvey, Florence and Michael, the Joplin, Moore and Tuscaloosa-Birmingham tornadoes, and other water-related disaster data were used as training data, and a case study of Hurricane Iota was tested. The model successfully detected the damage severity in the disaster-affected area with 97.5% accuracy.
* Typhoon, Japan: Together with insurance records, analysis of satellite imagery can provide estimates of economic losses without visits to the affected areas or field investigations (Miura et al., 2022). Insurance records for damaged buildings in Typhoons Jebi (2018) and Faxai (2019), together with high-resolution remote sensing images with a spatial resolution of 1 m or less, such as satellite data and aerial photographs observed after the typhoon disasters, were used as inputs for the model. The relationship between area-based loss rates and building damage ratios was acquired by regression.
* Wildfire, US: Machine-learning methods including LR, RF, GBM, FFN, CNN, U-Nets, and CNN-DW were tested in a building damage assessment model after the wildfires in California (Xie et al., 2020). A dataset of building damage assessments and building properties compiled by the California Department of Forestry and Fire Protection was used in the study. Gradient boosting machines (GBM) exhibited superior performance over the other models in the wildfire context.
In addition, at the Asian regional level, "Sentinel Asia" was established for effective information sharing. The advantages of applying satellite imagery to disaster management include, but are not limited to, wide-area coverage, high cost-efficiency, and reliability; however, cloud cover and poor weather conditions pose challenges for its application. Therefore, enhancing the resolution of satellite images and integrating them with additional disaster-related data is a prospective direction for advancing research in this domain.
2.7. Unmanned Aerial Vehicles (UAVs)
Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aircraft systems controlled through onboard computers or remotely by a human operator. Due to their versatility and efficiency, UAVs are now used in various fields, including agriculture, environmental monitoring, surveillance and security, and disaster management.
Advancements in technology, reductions in costs, and simplifications in operations have significantly broadened the applications of unmanned aerial vehicles (UAVs) in disaster management (Pi, Nath and Behzadan, 2021). In the aftermath of many disaster types, affected areas are often inaccessible or too dangerous for human responders. UAVs can serve as versatile platform tools, as they can be adapted and combined with other tools to fulfil specific tasks in such areas, which exemplifies their comprehensive utility (Zwegliński, 2020). Drones can also produce orthophotos and topographical maps that can be more granular and detailed than satellite imagery, providing valuable real-time information to guide rapid response. Due to these benefits, UAV technology is being used more frequently by governments to respond to climate-induced disasters as well as to map out future geohazard risks.
In disaster damage cost assessment, drones are primarily employed for collecting imagery data. For instance, when a devastating flood in Nepal in June 2021 blocked mountain roads with rocky debris and skies were too cloudy for helicopters, drones were sent to identify damaged houses, map out inundation areas, and record the topographical changes caused by the massive erosion and deposition.
The growing volume of photographic and video data often exceeds the capabilities of traditional analytical tools. Machine-learning algorithms, such as Support Vector Machines (SVMs), Convolutional Neural Networks (CNNs), and other neural-network-based models, are increasingly utilised for processing data collected by drones and for constructing models based on them (Munawar et al., 2021; Pi, Nath and Behzadan, 2021).
For example, UAV imagery and building properties data in affected regions served as inputs for analysis of the damage after the 2019 flood in Malawi (Wouters et al., 2021). After the flood hazard data was acquired from other software, UAV imagery was processed by an SVM model to determine the exposure and vulnerability of the affected buildings. The successful combined use of these technologies indicates that UAV imagery can be used as an adaptive method to effectively assess the damage curve of affected buildings in data-poor regions. Another example shows that CNN models also work well in such data processing. Pre- and post-flood images from UAVs and publicly available online sources served as inputs, and the results show that the method is successful in detecting disaster regions with high accuracy (Munawar et al., 2021).
UAVs still face several unresolved issues, such as short battery life, which limits the area of coverage; unforeseen behaviour in different atmospheric conditions; limited scope of pilot training for users; and legislation that severely limits the use of UAVs in most countries (Yu, Yang and Li, 2018). The optimisation of flight algorithms for drones represents a critical research domain in their post-disaster deployment. Techniques like PSO, GWO, WOA, BMO, and DGBCO are used for optimising the flight paths of individual drones (Qadir et al., 2022), and various methods have been developed for the synchronised data collection efforts of drone fleets (Zhu et al., 2019).
2.8. Light Detection and Ranging (LiDAR)
LiDAR, which stands for Light Detection and Ranging, is a remote sensing technology that uses laser light to measure distances to a target. By emitting laser pulses and measuring the time it takes for the pulses to reflect from surfaces, LiDAR systems can create precise, three-dimensional maps of the environment. Due to its high precision, LiDAR is widely applied in topographic mapping, environment monitoring, the operation of autonomous vehicles, and disaster management.
Compared to using satellites and unmanned aerial vehicles (UAVs) for data collection, LiDAR tools require more time and incur higher costs. However, LiDAR can be used to generate high-resolution data, thus providing a valuable supplement to other data-gathering instruments (Muhadi et al., 2020).
LiDAR excels in acquiring sub-metre precision data for ground, vegetation, buildings, and man-made features, facilitating the construction of three-dimensional models (Trepekli et al., 2022; Yamazaki, Liu and Horie, 2022). It can identify shifts, collapses, and other forms of structural damage that might not be easily visible from ground-level inspections. Furthermore, unlike imagery collection, which is often hindered by nocturnal settings and climatic conditions, LiDAR instruments are operable under adverse weather circumstances (Muhadi et al., 2020).
The use of LiDAR was tested after Hurricane Sandy. The research team scanned roughly 80 miles of storm-ravaged neighbourhoods, and the high-resolution 3-D output helped officials in these municipalities and others develop improved response strategies and prepare critical infrastructure to withstand the effects of disaster events, supporting preliminary damage evaluation and reconstruction surveying. LiDAR can also be used to extract features of collapsed buildings (Yamazaki, Liu and Horie, 2022). Five-temporal LiDAR digital surface model (DSM) data of the affected region in the 2016 Kumamoto Earthquake served as input. The result was validated with damage survey data and shows that LiDAR data can extract building collapse effectively.
Despite offering the highest resolution, LiDAR has the main disadvantages of high cost and large data volumes in disaster management, which pose challenges for applications in developing countries. Therefore, in damage assessment models based on terrain inputs, LiDAR data can be used as a supplement to satellite and drone imagery data, providing precise data collection for key infrastructure and severely affected areas.
3 Methodology of AI- and satellite imagery-based analysis of the 2018 Sulawesi earthquake
3.1. Introduction
Traditional methods for identifying the extent of disaster-affected areas and measuring disaster damage costs are often time-consuming and inefficient. Moreover, human access to these areas is typically limited during and immediately after an event. Therefore, there is a pressing need to develop a more systematic and rapid disaster assessment approach that enables the timely generation of disaster impact data and situational information. This paper proposes a methodology that integrates satellite imagery with AI models to facilitate the rapid identification of disaster hotspots and damage estimation.
In recent years, commercial optical satellites have improved significantly, enabling the capture of high-resolution images covering extensive areas across any region of the globe. These satellite images have become indispensable for observing Earth, offering crucial data that, when integrated with cutting-edge technologies such as artificial intelligence (AI), enable a wide range of applications. Classifying and identifying various objects in satellite imagery has emerged as a central focus of AI research and development. As a result, numerous open-source and highly efficient AI model architectures are now readily available. For practical applications, users can select and adapt pre-existing AI model architectures, thereby circumventing the substantial costs and time associated with developing custom AI model architectures from the ground up.
Concisely, the disaster assessment method proposed in this paper utilises satellite imagery to remotely capture the landscape of disaster-affected areas. AI models are then employed to extract meaningful information from the imagery, such as the location and size of individual buildings. By further analysing disaster-specific features within the images (such as debris, sediment accumulation, etc.), the AI model can delineate the impacted areas. Using AI- and satellite imagery-based analysis, this paper enables a rapid estimation of the extent of damage and the associated losses.
Against this backdrop, the focus of this section is to examine a potential methodology of AI- and satellite imagery-based analysis, and to test it by applying it to the 2018 Sulawesi earthquake (Figure 1). Palu Bay, in Central Sulawesi Province, Indonesia, was affected by a series of strong earthquakes with magnitudes of 7.5-7.6 on 28 September 2018.
For the specific case of detecting damaged buildings in Palu City, the methodologies of AI- and satellite imagery-based analysis used in this paper are as follows (Figure 2): both pre-event and post-event satellite imagery were acquired and processed using two distinct AI model architectures. These models were trained with data specific to the case of the 2018 Sulawesi earthquake. The first AI model (Mask R-CNN) focuses on identifying buildings in pre-event images and generating building footprints (BFTs). Subsequently, basic attributes are assigned to each BFT, enabling the estimation of reconstruction costs for the buildings in their original location and size. The second AI model (U-Net) is designed to identify disaster-affected areas in post-event images, including the remnants of collapsed buildings from earthquakes and areas impacted by tsunamis.
By overlaying the BFTs from pre-event images onto the disaster-affected regions identified in post-event imagery, BFTs that fall within disaster-affected regions are treated as damaged buildings. After identifying the damaged buildings and their basic attributes, the reconstruction costs of these damaged buildings may be estimated based on the Indonesian construction cost reference.
The data used for this process include Pleiades satellite images, building occupancy maps, and OpenStreetMap (OSM) building footprints (BFTs). The two satellite images (pre- and post-disaster) with 50-centimetre spatial resolution and four spectral bands are used for the identification of damaged buildings. The building occupancy map marks four categories: commercial, residential, public, and industrial. The BFTs from OSM served as labelled data for AI model training.
3.2. Methodology
3.2.1. Data
VHR Satellite Imagery
The two satellite images utilised in this study are 50-centimetre resolution, four-band Pleiades satellite images, captured on 27 and 30 September 2018, serving as the pre-event and post-event images respectively (Table 1). The capture dates are very close to the event date of 28 September 2018, making the images suitable for this study. Note that the images used are orthoimages; they therefore cannot be used for terrain data generation and cannot provide elevation or height data for buildings.
Building Occupancy Map Generation
The building occupancy map is created for building occupancy extraction by referring to satellite imagery and a public tool that provides street-level imagery. It features four categories: commercial, residential, public, and industrial, as shown in Figure 3. Details of these classifications are shown in Table 2.
3.2.2. Review of AI Models
In this paper, two AI models with different uses are employed: one is used to capture damage-related information (building data in the case of this paper), while the other is designed to detect areas affected by disasters. The technical details of these two AI models are explained in the following sections.
Mask R-CNN for Building Footprint (BFT) Extraction (collecting information before disasters)
As deep learning and convolutional neural networks (CNN) provide a robust architecture for image classification (Yang et al., 2018; LeCun et al., 1998; Liu et al., 2019), CNN-based models have been used in remote sensing for building extraction. A CNN is composed of multiple convolution and pooling layers that collaborate to extract image features by moving a kernel across the image. The output of a CNN is a confidence score for each class; the framework of a CNN is shown in Figure 4.
In this paper, we use building data to construct damage-related information. To do so, we need to identify the location, shape, and size of buildings from satellite imagery. This procedure is called "building footprint extraction". Given the strong capabilities of CNNs in computer vision, we employ an advanced CNN-based AI model called Mask R-CNN. This model is designed to detect specific objects within an image and outline their contours, making it suitable for identifying buildings in satellite images and drawing their boundaries. By using Mask R-CNN, we can extract the shapes of all visible buildings within the satellite image coverage area, allowing us to calculate the area and location of each building.
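To give a flavour of how such a model is invoked in practice, the hedged sketch below runs the off-the-shelf, COCO-pretrained Mask R-CNN from torchvision on a placeholder image chip; the model used in this study is instead fine-tuned on building footprints (see Section 3.2.3), and the tensor, score threshold and weights choice here are illustrative assumptions.

```python
# Illustrative inference sketch (not the study's production code): running a COCO-pretrained
# Mask R-CNN from torchvision (>= 0.13) on a placeholder satellite image chip. A model actually
# used for building footprints would first be fine-tuned on labelled BFTs.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained weights
model.eval()

chip = torch.rand(3, 512, 512)            # placeholder for a normalised RGB image chip
with torch.no_grad():
    pred = model([chip])[0]               # dict with 'boxes', 'labels', 'scores', 'masks'

keep = pred["scores"] > 0.5               # keep confident detections only
footprint_masks = pred["masks"][keep]     # one soft instance mask per detection
print(footprint_masks.shape)
```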
Building footprint extraction requires detecting multiple building objects and delineating their boundaries, which involves both object detection and semantic segmentation. Object detection models, such as R-CNN (Region-based Convolutional Neural Network), YOLO (You Only Look Once), and SSD (Single Shot MultiBox Detector), typically use bounding boxes to localise buildings. Semantic segmentation, in turn, refers to pixel-based classification achieved through an Encoder-Decoder architecture.
The methodology that integrates object detection and semantic segmentation is known as instance segmentation, with Mask R-CNN (He et al., 2017) being a prominent example. Mask R-CNN is an extension of Faster R-CNN (Ren et al., 2017) and shares its basic architecture. The architecture of Faster R-CNN is therefore introduced first, followed by the improvements added in Mask R-CNN.
Faster R-CNN consists of an FPN (Feature Pyramid Network), an RPN (Region Proposal Network) and RoI (Region of Interest) pooling. The framework of Faster R-CNN is shown in Figure 5. Image features are extracted through the FPN, which fuses feature maps from different levels so that buildings of different scales can be extracted. The architecture of the FPN is shown in Figure 6. The RPN selects region proposals for objects by setting anchor boxes with different sizes and aspect ratios; in general, there are nine possible anchors per pixel, as Figure 7 shows. The candidate anchor boxes are then classified as foreground or background and ranked to select the final bounding boxes, also known as region proposals. Once the region proposals have been determined, the corresponding areas of the feature layers are resampled by RoI pooling. In the final head, the bounding boxes are classified and their positions fine-tuned by regression. To achieve building boundary delineation, Mask R-CNN replaces RoI pooling with RoIAlign (Figure 8), preventing location shift, and adds an FCN (Fully Convolutional Network) branch for object semantic segmentation. The final framework of Mask R-CNN as used in this project is shown in Figure 9 and its architecture in Figure 10. Numerous published papers have applied Mask R-CNN to extract BFTs (Hu and Guo, 2019; Lv et al., 2020; Ohleyer, 2018; Stiller et al., 2019), demonstrating that Mask R-CNN is a robust and well-developed methodology for extracting building instances together with their contours.
U-Net for Building Ruin Area Extraction (detecting affected areas after disasters)
To compute the areas of buildings affected by disasters, this paper introduces another AI model called U-Net, which is designed to recognise and highlight specific features in satellite imagery that are commonly associated with disaster impacts, such as debris, rubble, or sediment. U-Net outlines the spatial extent of these features in a procedure called pixel-based classification, also known as semantic segmentation, in which each pixel in the image is categorised based on what it represents. Through this process, U-Net allows us to map disaster hotspots, which reflect the significant landscape changes caused by the disaster.
When discussing semantic segmentation, its pioneer, FCN, should be mentioned first. FCN, first proposed in 2015 (Shelhamer, Long and Darrell, 2017), replaces the final fully connected layer with convolution layers to enable pixel-based classification. FCN follows an Encoder-Decoder architecture. The Encoder, similar to a VGG net, learns image features through multiple convolution layers for down-sampling, but the fully connected layer used for classification is removed and replaced with a 1x1 convolution layer. This layer ensures that the depth of the output feature map equals the number of classification categories. The Encoder's output is then fed into the Decoder, where multiple de-convolution layers perform bilinear up-sampling on the feature maps to predict a category for each pixel. The process gradually increases the feature map size to match the size of the input image. The architecture of FCN is shown in Figure 11. The final output of FCN is a heatmap with multiple channels, each representing a different category. The pixel value in each channel indicates the probability of the corresponding category. Each pixel has probabilities for the different categories and is classified into the category with the highest probability. An illustration of the heatmap is shown in Figure 12.
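To make the heatmap-to-class step concrete, the tiny sketch below takes an invented two-channel probability map and assigns each pixel the category with the highest probability; the values are placeholders only.

```python
# Tiny sketch of the heatmap-to-class step described above: each pixel gets the category
# whose channel has the highest probability. The 2x2 heatmap values are invented.
import numpy as np

heatmap = np.array([[[0.9, 0.2],    # channel 0: background probability per pixel
                     [0.4, 0.1]],
                    [[0.1, 0.8],    # channel 1: "ruin" probability per pixel
                     [0.6, 0.9]]])
classes = heatmap.argmax(axis=0)    # per-pixel argmax over the channel axis
print(classes)                      # [[0 1], [1 1]]
```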
Subsequently, for medical applications, Ronneberger, Fischer and Brox (2015) introduced U-Net, designed to make predictions with a small amount of data. U-Net is developed from FCN, with the Encoder-Decoder architecture referred to here as the contracting path and the expanding path. U-Net employs 3x3 convolution layers, de-convolution layers, and 2x2 pooling layers during the contracting and expanding processes. The number of layers used in both paths is identical, and convolution layers in the expanding path are concatenated with the corresponding convolution layers in the contracting path via skip connections. Unlike FCN's use of summation, U-Net's concatenation increases the depth of the feature maps. Finally, a 1x1 convolution layer is used to convert the number of feature map channels to the number of categories. Due to this symmetric architecture, the model is named U-Net; its architecture is shown in Figure 13.
The skip connection mentioned above is used to deal with the trade-off between semantics (global information) and location (local information). Deeper models can capture global information but may lose image details, while shallower models capture local details but fail to represent the overall semantics. To address this, U-Net and FCN models include skip connections, where feature maps from deep and shallow layers are adjusted to the same size and combined (summed in FCN, concatenated in U-Net) as input to the de-convolution layer. This design allows these AI models to capture semantic and locational information simultaneously.
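A minimal PyTorch sketch of a U-Net-style network follows, showing the contracting and expanding paths and the concatenation-based skip connections described above; the channel sizes, depth and input bands are illustrative choices, not the configuration trained in this study.

```python
# Minimal U-Net-style sketch in PyTorch, illustrating the contracting/expanding paths and the
# concatenation-based skip connections described above; channel sizes are illustrative only.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=4, n_classes=2):          # e.g. 4 bands; ruin / background
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)                 # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)         # 1x1 conv to class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection by concatenation
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                             # per-pixel class scores (heatmap)

print(TinyUNet()(torch.rand(1, 4, 256, 256)).shape)      # -> torch.Size([1, 2, 256, 256])
```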
U-Net has been applied in remote sensing for damaged building area extraction. For example, Bai, Mas and Koshimura (2018) developed a U-Net convolutional network-based framework for rapidly mapping damage from satellite imagery, using the 2011 Tohoku earthquake and tsunami as a case study. Wu et al. (2021) proposed a Siamese neural network with attention U-Nets to detect and classify damaged buildings from pre- and post-disaster satellite images. Deng and Wang (2022) developed an improved two-stage U-Net model for post-disaster damaged building assessment, achieving enhanced accuracy in both building localisation and damage classification using high-resolution satellite imagery.
3.2.3. AI Model Training
To enable the AI models to provide more accurate information in specific scenarios - in our case, satellite images of Palu City - the models need to be trained. Model training refers to the process where experts provide category-specific knowledge as learning material, allowing the AI to repeatedly learn and improve its ability to perform specific tasks.
During training, the materials provided by experts consist of two components: the base data (in this study, satellite imagery) and the expert knowledge, often referred to as labelled data. In our case, this includes expert-drawn building boundaries and disaster feature areas on the satellite images.
In the training phase, the AI model learns to recognise buildings by viewing satellite images alongside the expert-provided annotations. Once trained, the AI can identify buildings in new satellite images independently by applying what it learned during training, even without human-provided labels.
Training Details of Mask R-CNN
The task of Mask R-CNN in this paper is to extract buildings from satellite imagery. To train the Mask R-CNN model, two elements are required: satellite imagery as the base data and building contours within the images as the labelled data. Ideally, the satellite images used for training should be captured before the disaster occurs, so that the images in the training dataset show undamaged buildings. The following paragraphs explain in detail how the labelled data for our case were obtained and how the satellite images and annotations were prepared as training materials for the AI model.
For Mask R-CNN training, both the pre-event satellite image and labelled data showing building contours are required. These data undergo a pre-processing procedure to generate image chips, which are used as input for Mask R-CNN model training. The aforementioned pre-event satellite imagery is used in this process, while the labelled data are sourced from OSM. OSM is a collaborative mapping project that provides free, editable geographic data, allowing users worldwide to contribute and update maps with detailed information, including building features. Due to time constraints, we used OSM BFTs within a 10 km² area as labelled data rather than generating new BFTs. The OSM BFTs are shown in Figure 14 with red outlines. Although the main event discussed in this study is the 2018 Sulawesi earthquake, the 2018 OSM data are incomplete and sparse. Therefore, the 2024 OSM data, with more complete building features, are used as our labelled data. The area with the least landscape change between 2018 and 2024, and the most diverse building types, is selected as the coverage for sourcing our labelled data.
The initial step in pre-processing the data involves clipping the satellite imagery and labelled data into image chips, which are then divided into training, validation, and testing datasets. To enhance the AI model's performance, data augmentation techniques such as random rotation and brightness adjustment are applied, thereby providing additional building information for AI model training.
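The following sketch illustrates one way these pre-processing steps could look in code; the chip size, split ratios and augmentation parameters are illustrative assumptions rather than the settings used in this study.

```python
# A minimal sketch of the pre-processing described above: clip a scene and its
# label mask into aligned chips, split them, and define joint augmentations.
# Chip size, split ratios and parameter ranges are assumptions for illustration.
import random
import numpy as np
import albumentations as A

def to_chips(scene: np.ndarray, mask: np.ndarray, size: int = 256):
    """Clip an (H, W, C) scene and its (H, W) label mask into aligned square chips."""
    h, w = mask.shape
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield scene[r:r + size, c:c + size], mask[r:r + size, c:c + size]

# Placeholder arrays standing in for real satellite imagery and labelled data.
scene = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)
mask = np.random.randint(0, 2, (1024, 1024), dtype=np.uint8)

chips = list(to_chips(scene, mask))
random.Random(0).shuffle(chips)
n = len(chips)
train = chips[: int(0.7 * n)]
val = chips[int(0.7 * n): int(0.85 * n)]
test = chips[int(0.85 * n):]

# Geometric and radiometric augmentations applied jointly to chip and label
# so that imagery and annotations stay aligned.
augment = A.Compose([
    A.RandomRotate90(p=0.5),                                   # random rotation
    A.RandomBrightnessContrast(brightness_limit=0.2, p=0.5),   # brightness adjustment
])
img, lbl = train[0]
out = augment(image=img, mask=lbl)   # returns {'image': ..., 'mask': ...}
```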
During the model training step, multiple sets of hyperparameters were tested to optimise the performance of the Mask R-CNN model. The backbone of Mask R-CNN is ResNet-50, and the learning rate is set to 6.3096e-05.
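The sketch below shows how such a configuration could be set up with the torchvision implementation of Mask R-CNN; only the ResNet-50 backbone and the learning rate follow the description above, while the optimiser, number of classes and training-loop details are assumptions for illustration.

```python
# A minimal sketch of configuring Mask R-CNN with a ResNet-50 backbone and the
# learning rate reported above; the optimiser choice, class definitions and the
# training loop are illustrative assumptions, not the study's exact settings.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + building
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the prediction heads so the model predicts a single "building" class.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=6.3096e-05)

def train_one_batch(images, targets):
    """images: list of CHW tensors; targets: list of dicts with boxes, labels and masks."""
    model.train()
    losses = model(images, targets)   # dict of detection and segmentation losses
    loss = sum(losses.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```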
Training Details of U-Net
The task of U-Net in this paper is to identify disaster-affected areas from satellite imagery. To train the U-Net model, two elements are also required: satellite imagery as the base data, and the coverage of disaster-related features within the image as the labelled data. The satellite imagery used for training must be captured after the disaster, as it needs to reflect the specific landscape changes caused by the disaster event.
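A minimal training sketch for this second model is shown below; the encoder choice, loss function and learning rate are illustrative assumptions rather than the settings used in this study.

```python
# A minimal sketch, not the study's implementation: training a U-Net to label
# disaster-affected pixels in post-event image chips. The encoder, loss function
# and learning rate are illustrative assumptions.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=3, classes=1)
criterion = torch.nn.BCEWithLogitsLoss()          # 1 = disaster-affected, 0 = unaffected
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(chips: torch.Tensor, masks: torch.Tensor) -> float:
    """chips: (N, 3, H, W) post-event image chips; masks: (N, 1, H, W) expert labels."""
    model.train()
    logits = model(chips)
    loss = criterion(logits, masks.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for real post-event chips and labels.
print(train_step(torch.randn(2, 3, 256, 256), torch.randint(0, 2, (2, 1, 256, 256))))
```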
Assigning Building Attributes
The BFTs extracted by the AI model contain basic building information, such as location and area. However, to estimate economic loss more accurately, it is important to enrich these BFTs with additional attributes, including floor area, coordinates, and reconstruction cost (Table 3). To achieve this, local building records are used to develop statistically derived rules that regress these additional attributes.
Finally, the BFTs with building attributes can be viewed as a form of Economic Exposure Data (EED).
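The sketch below illustrates, under strongly simplified assumptions, how such statistically derived rules could be fitted on local building records and applied to the extracted BFTs; the column names, sample values and linear model are placeholders, not the actual rules used in this study.

```python
# A minimal sketch of deriving attribute rules from local building records and
# applying them to the AI-extracted BFTs. Column names, sample values and the
# linear models are illustrative assumptions only.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Placeholder local building records: footprint area (m2), floor area (m2), cost.
records = pd.DataFrame({
    "footprint_m2": [80, 120, 200, 60, 150],
    "floor_area_m2": [80, 240, 400, 60, 300],
    "recon_cost_usd": [16000, 60000, 110000, 11000, 78000],
})

floor_model = LinearRegression().fit(records[["footprint_m2"]], records["floor_area_m2"])
cost_model = LinearRegression().fit(records[["floor_area_m2"]], records["recon_cost_usd"])

# Apply the fitted rules to BFTs extracted by Mask R-CNN (footprint areas in m2).
bfts = pd.DataFrame({"footprint_m2": [95.0, 310.0]})
bfts["floor_area_m2"] = floor_model.predict(bfts[["footprint_m2"]])
bfts["recon_cost_usd"] = cost_model.predict(bfts[["floor_area_m2"]])
print(bfts)  # the enriched BFTs form the Economic Exposure Data (EED)
```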
3.2.4. Damaged Building Identification
After completing these procedures, we obtain both the building information and the disaster hotspots. These two types of data can then be combined to assess the effects of the disaster on buildings. By incorporating the previously established EED, we can further estimate the damage to buildings. The details of integrating the outputs from the two AI models to evaluate disaster impacts on buildings are explained below.
Once the BFT and ruin-area extractions are completed, an overlay of BFTs and ruin areas is analysed to identify damaged buildings. As shown in Figure 15(a), the BFTs (yellow outlined polygons) are extracted from the pre-event image, and the ruin areas (red areas) in Figure 15(b) are extracted from the post-event image. Each BFT is overlaid on the ruin areas as shown in Figure 15(c). If more than a cut-off percentage of a BFT is covered by ruin area, it is considered a damaged building. Currently, the cut-off is set at 20%.
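The overlay rule can be illustrated with a short geometric sketch; the polygons below are placeholders, and only the 20% cut-off follows the description above.

```python
# A minimal sketch of the overlay rule described above, using Shapely geometries:
# a building is flagged as damaged when more than 20% of its footprint intersects
# the ruin area. The example polygons are placeholders, not actual extractions.
from shapely.geometry import Polygon

RUIN_CUTOFF = 0.20

def is_damaged(bft: Polygon, ruin_area: Polygon, cutoff: float = RUIN_CUTOFF) -> bool:
    """Return True when the ruin share of the building footprint exceeds the cut-off."""
    overlap = bft.intersection(ruin_area).area
    return (overlap / bft.area) > cutoff

# Placeholder geometries: a 10 m x 10 m footprint and a ruin polygon covering part of it.
bft = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
ruin = Polygon([(7, 0), (20, 0), (20, 10), (7, 10)])
print(is_damaged(bft, ruin))  # True: 30% of the footprint lies within the ruin area
```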
In this methodology, the estimated reconstruction cost refers to the expense of rebuilding the damaged building based on the original construction type and size, which does not include interior movable property and land value.
4. Conclusion
Recent years have seen more frequent disasters causing greater damage. Artificial intelligence can play a significant role in optimising disaster response, especially in disaster damage cost estimation. The AI-based approach to damage cost estimation has various advantages, including faster responses, lower human-resource requirements, less opportunity for human error, and the ability to overcome policy-relevant information gaps by complementing limited data from other sources, a common issue in Southeast Asia.
After outlining the use of various AI tools in disaster risk mitigation, the paper discussed the methodology of disaster cost measurement using AI and remote sensing approaches, based on the case of the 2018 Sulawesi earthquake. The detailed examination of an AI- and satellite imagery-based methodology shows that pre-event and post-event satellite imagery, acquired and processed using two distinct AI models, can be used to identify damaged buildings and their attributes, allowing for a fairly rapid estimation of repair or replacement needs and costs.
While this is undoubtedly beneficial, some challenges remain. Reconstruction cost estimates using the methodology described in this paper do not include interior movable property and land. In addition, the type, size, location, and quantity of the rebuilt buildings may differ from the original (for reasons of technological improvement, for instance), meaning that estimated reconstruction costs could differ from the actual reconstruction costs. Nonetheless, the analysis can still serve to provide a fast, preliminary estimation of the damage costs of disasters.
References
Arshad, B. et al. (2019), "Computer Vision and IoT-Based Sensors in Flood Monitoring and Mapping: A Systematic Review", Sensors, Vol. 19/22, p. 5012, https://doi.org/10.3390/s19225012. [35]
Bahari, N. et al. (2023), "Predicting Sea Level Rise Using Artificial Intelligence: A Review", Archives of Computational Methods in Engineering, Vol. 30/7, pp. 4045-4062, https://doi.org/10.1007/s11831-023-09934-9. [2]
Bai, Y., E. Mas and S. Koshimura (2018), "Towards Operational Satellite-Based Damage-Mapping Using U-Net Convolutional Network: A Case Study of 2011 Tohoku Earthquake-Tsunami", Remote Sensing, Vol. 10/10, p. 1626, https://doi.org/10.3390/rs10101626. [61]
Choi, E. and J. Song (2022), "Clustering-based disaster resilience assessment of South Korea communities building portfolios using open GIS and census data", International Journal of Disaster Risk Reduction, Vol. 71, p. 102817, https://doi.org/10.1016/j.ijdrr.2022.102817. [29]
Damasevicius, R., N. Bacanin and S. Misra (2023), "From Sensors to Safety: Internet of Emergency Services (IoES) for Emergency Response and Disaster Management", Journal of Sensor and Actuator Networks, Vol. 12/3, p. 41, https://doi.org/10.3390/jsan12030041. [34]
Deng, L. and Y. Wang (2022), "Post-disaster building damage assessment based on improved U-Net", Scientific Reports, Vol. 12/1, https://doi.org/10.1038/s41598-022-20114-w. [63]
Deowan, M. et al. (2022), "Smart Early Flood Monitoring System Using IoT", 2022 14th Seminar on Power Electronics and Control (SEPOC), https://doi.org/10.1109/sepoc54972.2022.9976434. [36]
Edrisi, A. and M. Askari (2019), "Probabilistic budget allocation for improving efficiency of transportation networks in pre- and post-disaster phases", International Journal of Disaster Risk Reduction, Vol. 39, p. 101113, https://doi.org/10.1016/j.ijdrr.2019.101113. [7]
Gao, Y. et al. (2022), "SFSM: sensitive feature selection module for image semantic segmentation", Multimedia Tools and Applications, Vol. 82/9, pp. 13905-13927, https://doi.org/10.1007/s11042-022-13901-0. [59]
Haasnoot, M. et al. (2021), "Long-term sea-level rise necessitates a commitment to adaptation: A first order assessment", Climate Risk Management, Vol. 34, p. 100355, https://doi.org/10.1016/j.crm.2021.100355. [3]
Hao, H. and Y. Wang (2020), "Leveraging multimodal social media data for rapid disaster damage assessment", International Journal of Disaster Risk Reduction, Vol. 51, p. 101760, https://doi.org/10.1016/j.ijdrr.2020.101760. [17]
He, K. et al. (2017), "Mask R-CNN", 2017 IEEE International Conference on Computer Vision (ICCV), https://doi.org/10.1109/iccv.2017.322.
Hu, Y. and F. Guo (2019), "Automatic Building Extraction Based on High Resolution Aerial Images", 2019 3rd International Conference on Electronic Information Technology and Computer Engineering (EITCE), pp. 1017-1020, https://doi.org/10.1109/eitce47263.2019.9094824.
Jeggle, T. and M. Boggero (2018), Post-Disaster Needs Assessment, European Commission, GFDRR, UNDP, and the World Bank, https://doi.org/10.1596/30945.
Jin, Y. et al. (2022), "Geomatic-Based Flood Loss Assessment and Its Application in an Eastern City of China", Water, Vol. 14/1, p. 126, https://doi.org/10.3390/w14010126.
Khajwal, A. and A. Noshadravan (2021), "An uncertainty-aware framework for reliable disaster damage assessment via crowdsourcing", International Journal of Disaster Risk Reduction, Vol. 55, p. 102110, https://doi.org/10.1016/j.ijdrr.2021.102110.
Kietzmann, J. et al. (2011), "Social media? Get serious! Understanding the functional building blocks of social media", Business Horizons, Vol. 54/3, pp. 241-251, https://doi.org/10.1016/j.bushor.2011.01.005.
Kim, D. et al. (2022), "Disaster assessment using computer vision and satellite imagery: Applications in detecting water-related building damages", Frontiers in Environmental Science, Vol. 10, https://doi.org/10.3389/fenvs.2022.969758.
Lecun, Y. et al. (1998), "Gradient-based learning applied to document recognition", Proceedings of the IEEE, Vol. 86/11, pp. 2278-2324, https://doi.org/10.1109/5.726791.
Li, S. et al. (2022), "Study on typhoon disaster assessment by mining data from social media based on artificial neural network", Natural Hazards, https://doi.org/10.1007/s11069-022-05754-5.
Liu, P. et al. (2019), "Building Footprint Extraction from High-Resolution Images via Spatial Residual Inception Convolutional Neural Network", Remote Sensing, Vol. 11/7, p. 830, https://doi.org/10.3390/rs11070830.
Lozano, J. and I. Tien (2022), "Data Collection Tools for Post-Disaster Damage Assessment of Building and Lifeline Infrastructure Systems", SSRN Electronic Journal, https://doi.org/10.2139/ssrn.4292732.
Lv, B. et al. (2020), "Research on Urban Building Extraction Method Based on Deep Learning Convolutional Neural Network", IOP Conference Series: Earth and Environmental Science, Vol. 502/1, p. 012022, https://doi.org/10.1088/1755-1315/502/1/012022.
Miloshevich, G. et al. (2023), Probabilistic forecasts of extreme heatwaves using convolutional neural networks in a regime of lack of data, https://arxiv.org/pdf/2208.00971.pdf.
Miura, H. et al. (2022), "Empirical estimation based on remote sensing images of insured typhoon-induced economic losses from building damage", International Journal of Disaster Risk Reduction, Vol. 82, p. 103334, https://doi.org/10.1016/j.ijdrr.2022.103334.
Muhadi, M. et al. (2020), "The Use of LIDAR-Derived DEM in Flood Applications: A Review", Remote Sensing, Vol. 12/14, p. 2308, https://doi.org/10.3390/rs12142308.
Munawar, H. et al. (2021), "UAVs in Disaster Management: Application of Integrated Aerial Imagery and Convolutional Neural Network for Flood Detection", Sustainability, Vol. 13/14, p. 7547, https://doi.org/10.3390/su13147547. [42]
OECD (2024), Economic Outlook for Southeast Asia, China and India 2024: Developing amid Disaster Risks, OECD Publishing, Paris, https://doi.org/10.1787/3bbe7dfe-en. [1]
OECD (2023), Measuring the Internet of Things, OECD Publishing, Paris, https://doi.org/10.1787/021333b7-en. [32]
Ohleyer, S. (2018), Building segmentation on satellite images, https://project.inria.fr/aerialimagelabeling/files/2018/01/fp_ohleyer_compressed.pdf. [56]
Ozdenerol, E. (2023), "The Role of GIS in COVID-19 Management and Control", in The Role of GIS in COVID-19 Management and Control, CRC Press, Boca Raton, https://doi.org/10.1201/9781003227106-1. [27]
Pi, Y., N. Nath and A. Behzadan (2021), "Detection and Semantic Segmentation of Disaster Damage in UAV Footage", Journal of Computing in Civil Engineering, Vol. 35/2, https://doi.org/10.1061/(asce)cp.1943-5487.0000947. [40]
Poblet, M., E. García-Cuesta and P. Casanovas (2017), "Crowdsourcing roles, methods and tools for data-intensive disaster management", Information Systems Frontiers, Vol. 20/6, pp. 1363-1379, https://doi.org/10.1007/s10796-017-9734-6. [24]
Poolian, A. et al. (2022), "On the Implementation of Contact Tracing via GPS", 2022 IEEE Zooming Innovation in Consumer Technologies Conference (ZINC), https://doi.org/10.1109/zinc55034.2022.9840554. [28]
Qadir, Z. et al. (2022), "Autonomous UAV Path-Planning Optimization Using Metaheuristic Approach for Predisaster Assessment", IEEE Internet of Things Journal, Vol. 9/14, pp. 12505-12514, https://doi.org/10.1109/jiot.2021.3137331. [43]
Ragone, F., J. Wouters and F. Bouchet (2017), "Computation of extreme heat waves in climate models using a large deviation algorithm", Proceedings of the National Academy of Sciences, Vol. 115/1, pp. 24-29, https://doi.org/10.1073/pnas.1712645115.
Razavi, S. and M. Rahbari (2020), "Understanding Reactions to Natural Disasters: a Text Mining Approach to Analyze Social Media Content", 2020 Seventh International Conference on Social Networks Analysis, Management and Security (SNAMS), https://doi.org/10.1109/snams52053.2020.9336570. [14]
Ren, S. et al. (2017), "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39/6, pp. 1137-1149, https://doi.org/10.1109/tpami.2016.2577031. [53]
Resch, B., F. Usländer and C. Havas (2017), "Combining machine-learning topic models and spatiotemporal analysis of social media data for disaster footprint and damage assessment", Cartography and Geographic Information Science, Vol. 45/4, pp. 362-376, https://doi.org/10.1080/15230406.2017.1356242. [19]
Reynard, D. and M. Shirgaokar (2019), "Harnessing the power of machine learning: Can Twitter data be useful in guiding resource allocation decisions during a natural disaster?", Transportation Research Part D: Transport and Environment, Vol. 77, pp. 449-463, https://doi.org/10.1016/j.trd.2019.03.002. [20]
Riccardi, M. (2016), "The power of crowdsourcing in disaster response operations", International Journal of Disaster Risk Reduction, Vol. 20, pp. 123-128, https://doi.org/10.1016/j.ijdrr.2016.11.001.
Ronneberger, O., P. Fischer and T. Brox (2015), "U-Net: Convolutional Networks for Biomedical Image Segmentation", in Lecture Notes in Computer Science, Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, Springer International Publishing, Cham, https://doi.org/10.1007/978-3-319-24574-4_28.
Schumann, G. (ed.) (2022), "Utilization of social media in floods assessment using data mining techniques", PLOS ONE, Vol. 17/4, p. e0267079, https://doi.org/10.1371/journal.pone.0267079.
Shelhamer, E., J. Long and T. Darrell (2017), "Fully Convolutional Networks for Semantic Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39/4, pp. 640-651, https://doi.org/10.1109/tpami.2016.2572683.
Song, X. et al. (2014), "Prediction of human emergency behavior and their mobility following large-scale disaster", Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, https://doi.org/10.1145/2623330.2623628.
Stiller, D. et al. (2019), "Large-scale building extraction in very high-resolution aerial imagery using Mask R-CNN", 2019 Joint Urban Remote Sensing Event (JURSE), pp. 1-4, https://doi.org/10.1109/jurse.2019.8808977.
Sun, W., P. Bocchini and B. Davison (2020), "Applications of artificial intelligence for disaster management", Natural Hazards, Vol. 103/3, pp. 2631-2689, https://doi.org/10.1007/s11069-020-04124-3.
Tapia-Mendez, E. et al. (2023), "Deep Learning-Based Method for Classification and Ripeness Assessment of Fruits and Vegetables", Applied Sciences, Vol. 13/22, p. 12504, https://doi.org/10.3390/app132212504.
Trepekli, K. et al. (2022), "UAV-borne, LIDAR-based elevation modelling: a method for improving local-scale urban flood risk assessment", Natural Hazards, Vol. 113/1, pp. 423-451, https://doi.org/10.1007/s11069-022-05308-9.
Van Ackere et al. (2019), "A Review of the Internet of Floods: Near Real-Time Detection of a Flood Event and Its Impact", Water, Vol. 11/11, p. 2275, https://doi.org/10.3390/w11112275.
World Bank/Global Facility for Disaster Reduction and Recovery (2023), Global Rapid Post-Disaster Damage Estimation (GRADE) Report: February 6, 2023 Kahramanmaras Earthquakes - Türkiye Report, World Bank, https://doi.org/10.1596/39468.
Wouters, L. et al. (2021), "Improving flood damage assessments in data-scarce areas by retrieval of building characteristics through UAV image segmentation and machine learning - a case study of the 2019 floods in southern Malawi", Natural Hazards and Earth System Sciences, Vol. 21/10, pp. 3199-3218, https://doi.org/10.5194/nhess-21-3199-2021.
Wu, C. et al. (2021), "Building Damage Detection Using U-Net with Attention Mechanism from Pre- and Post-Disaster Remote Sensing Datasets", Remote Sensing, Vol. 13/5, p. 905, https://doi.org/10.3390/rs13050905. [62]
Xie, B. et al. (2020), "Machine Learning on Satellite Radar Images to Estimate Damages After Natural Disasters", Proceedings of the 28th International Conference on Advances in Geographic Information Systems, https://doi.org/10.1145/3397536.3422349. [8]
Xing, Z. et al. (2021), "Crowdsourced social media and mobile phone signaling data for disaster impact assessment: A case study of the 8.8 Jiuzhaigou earthquake", International Journal of Disaster Risk Reduction, Vol. 58, p. 102200, https://doi.org/10.1016/j.ijdrr.2021.102200. [15]
Yabe, T., Y. Zhang and S. Ukkusuri (2020), "Quantifying the economic impact of disasters on businesses using human mobility data: a Bayesian causal inference approach", EPJ Data Science, Vol. 9/1, https://doi.org/10.1140/epjds/s13688-020-00255-6. [26]
Yamazaki, F., W. Liu and K. Horie (2022), "Use of Multi-Temporal LIDAR Data to Extract Collapsed Buildings and to Monitor Their Removal Process after the 2016 Kumamoto Earthquake", Remote Sensing, Vol. 14/23, p. 5970, https://doi.org/10.3390/rs14235970. [47]
Yang, H. et al. (2018), "Building Extraction at Scale Using Convolutional Neural Network: Mapping of the United States", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 11/8, pp. 2600-2614, https://doi.org/10.1109/jstars.2018.2835377. [48]
Yu, M., C. Yang and Y. Li (2018), "Big Data in Natural Disaster Management: A Review", Geosciences, Vol. 8/5, p. 165, https://doi.org/10.3390/geosciences8050165. [9]
Zhang, D. et al. (2019), "CrowdLearn: A Crowd-AI Hybrid System for Deep Learning-based Damage Assessment Applications", 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), https://doi.org/10.1109/icdcs.2019.00123. [22]
Zhang, S., K. Yang and Y. Cao (2019), "GIS-Based Rapid Disaster Loss Assessment for Earthquakes", IEEE Access, Vol. 7, pp. 6129-6139, https://doi.org/10.1109/access.2018.2889918. [31]
Zhu, M. et al. (2019), "Multi-UAV Rapid-Assessment Task-Assignment Problem in a Post-Earthquake Scenario", IEEE Access, Vol. 7, pp. 74542-74557, https://doi.org/10.1109/access.2019.2920736. [44]
Zweglinski, T. (2020), "The Use of Drones in Disaster Aerial Needs Reconnaissance and Damage Assessment - Three-Dimensional Modeling and Orthophoto Map Study", Sustainability, Vol. 12/15, p. 6080, https://doi.org/10.3390/su12156080. [41]
Copyright Organisation for Economic Co-operation and Development (OECD) 2025
