ABSTRACT
This study develops a methodology to create detailed visual Digital Twins of large-scale structures together with their real damages detected by visual inspection or nondestructive testing. The methodology is demonstrated on a transition piece of an offshore wind turbine and on a composite rotor blade, with surface paint damage and subsurface delamination damage, respectively. Artificial intelligence and color threshold segmentation are used to classify and localize damages in optical images taken by drones. These damages are digitalized and mapped to a 3D model of the large-scale structure, which can be either a CAD model or a 3D reconstruction based on photogrammetry. To map the images from 2D to 3D, image metadata is combined with the georeferenced placement of the structure's 3D model. After the damage is mapped, the Digital Twin gives an accurate representation of the structure: the location, shape, and size of the damage are visible on the Digital Twin. The demonstrated methodology can be applied in industrial sectors such as wind energy, oil and gas, marine, and aerospace to facilitate asset management.
INTRODUCTION
The world is currently in the fourth industrial revolution, which has been underway since the turn of the millennium. Industry 4.0 combines the physical world with the digital and virtual worlds, which can lead to more efficient production, smarter products, and more efficient use of resources. The Digital Twin is an important element of Industry 4.0. Digital Twins are often used together with the Internet of Things (IoT), and artificial intelligence (AI), encompassing both machine learning and deep learning, is frequently an integrated part of the data processing. The application of Digital Twins has been reported in several different areas including healthcare,1,2 smart cities,3,4 and manufacturing; see the review paper.5 The latter is the area where most Digital Twin applications have been reported. There are many examples of small-scale Digital Twins but a lack of very large-scale Digital Twin projects in the literature, the reason being a lack of specific domain knowledge of how successful upscaling is done.
References 6,7 discuss the application of Digital Twins in the wind power industry. A cloud-based Digital Twin monitoring and analysis system is discussed in Reference 7. A working prototype Digital Twin of a wind farm is presented where data are fed to the model and both technical and business parameters are generated for the wind farm. This can be used to evaluate the wind farm both technically and economically. A case study for wind farm use and smart grid energy consumption is discussed in Reference 6.
A key enabling technology in the wind energy industry and many other industries is digitalization. Sensors placed on wind turbines generate large amounts of data that, when combined with other digital technologies such as Big Data, IoT, and cloud computing, open up many new possibilities. The data from sensors and wind turbine inspection measurements can be combined in a Digital Twin. Many different definitions of a Digital Twin are used in the literature, but we will use the definition of Bolton et al.,8 where a Digital Twin is a 'dynamic virtual representation of a physical object or system across its lifecycle, using real-time data to enable understanding, learning and reasoning'. A Digital Twin can be both data-driven and based on physical models. Sensor data from the operating asset, for example a wind turbine, are used to update the state of the Digital Twin. Advanced numerical tools must be further developed to better understand sensor signals and inspection data and to simulate the consequences of structural and material deterioration; some examples can be found in References 9,10. The structural performance of each component of a wind turbine running in the field can be evaluated using its counterpart Digital Twin in a control center.
The Digital Twin for the wind turbine can be used as a prognostic health management system. This will lead to a change in maintenance strategies from fixed schedules and intervals towards predictive maintenance. The Digital Twin will make it possible to economically find the balance between wind turbine utilization, lifetime reserve, and maintenance. References 11,12 discuss the use of prognostic health management systems to predict the occurrence of failures in components and thereby reduce the unexpected downtime of complex systems. These systems use condition monitoring and AI tools to collect and process huge amounts of data, and the techniques are demonstrated on case studies.
Currently, wind turbines and their components are often inspected manually using lifts, and sometimes a person must climb onto the structure. This type of inspection is expensive, time-consuming, and potentially dangerous. Drones can be used to collect huge numbers of images; a drone can be programmed to autonomously follow a predetermined route and acquire images at specific waypoints, which significantly reduces the time needed for the inspection. It is very time-consuming, and sometimes inaccurate, for a person to go through massive amounts of images and produce reports in which, for example, the defects found in the images are documented. Combining AI with the Digital Twin concept produces a more efficient approach. Deep learning can find the defects in the images in near real time and is therefore much faster than a human. Defects from multiple images can be categorized by AI, and their 3D locations can be calculated and mapped onto the Digital Twin. One specific defect observed in several images can then be seen on the Digital Twin from different angles and distances. AI is also capable of finding defects too small to be observed by humans. Reference 13 provides an example of wind turbine surface damage detection using AI where drone inspection was used. A modified faster R-CNN applied to video data is discussed in References 14,15 for the detection of multiple types of damage; the data are collected by an autonomous UAV that uses an ultrasonic beacon system in GPS-denied regions. A semantic damage detection network (SDDNet) is discussed in Reference 16. After careful training, this network is capable of negating complex backgrounds in images and successfully segmenting cracks in real time. A vision-based approach using a Convolutional Neural Network (CNN) for detecting cracks in concrete images was proposed in Reference 17. In Reference 18, segmented cracks found using a CNN and a Fully Convolutional Network (FCN) are used to generate a crack map, which is then projected onto a 3D model obtained with a photogrammetric technique.
This study builds on the findings reported in Reference 19, in which a transition piece was selected as an example of a large-scale structure. In the current study, both a transition piece and a rotor blade are used to demonstrate newly developed functionalities. As such, the novel contributions of this study to the current knowledge base are:
- Image segmentation to isolate the structure of interest from a complex image background. In the previous study,19 AI could occasionally find damage in images of distant structures in the background and report damages that do not belong to the structure of interest. Such damages should not be mapped to the Digital Twin of the structure in the center of the images.
- Real geometric features of defects/damages mapped to the Digital Twin. To achieve this, image segmentation is performed on the small section of the total image that is inside the bounding box calculated by a deep learning algorithm. The pixels from the image segmentation, representing realistic damage shape and size, are mapped to the structure.
- A more precise new algorithm using surface normals for mapping the damage pixels is presented in this study, enabling a realistic representation of unique defects/damages of the structure.
This paper is organized as follows. Section 2 presents the visual Digital Twin and how it has been used in the study. Section 3 describes in detail the methodology of the visual Digital Twin, including algorithms for pre-processing drone images and algorithms that map damages found in images to a 3D model of a structure. The image pre-processing and 2D to 3D damage mapping techniques are applied in Section 4, which demonstrates the techniques on a set of images from one drone flight using a transition piece as the demonstration case. Section 5 applies the mapping algorithms to a composite wind turbine blade on which subsurface delamination damage is visually inspected. The main conclusions of the study are summarized in Section 6.
THE VISUAL DIGITAL TWIN
The arrival of powerful and cost-effective drones has opened up many new applications. Drone inspection of very large structures is an example of a new application type that gives better results and is more cost-effective than previously used methods, see References 20,21. Wind turbine transition pieces are currently inspected at the factory for different types of damage; this is accomplished by an inspector from a crane. The objectives of this study are to develop and demonstrate fully automatic, intelligent drone inspections based on RGB images and to find and recognize paint damages and defects on wind turbine transition pieces (TPs), see References 22,23. The TP is a tubular structure made of high-grade offshore steel; it is the part of the wind turbine support structure that connects the monopile foundation to the tower. The starting point is a description of the physical position of the TP in a georeferenced coordinate system. A detailed meshed CAD model of the TP is moved to the same georeferenced coordinate system used during the drone flights, which makes it easier to compare it to the reconstructed 3D model. An AI algorithm is applied to the RGB images to detect and classify paint imperfections and damages. The RGB images from the drone flight are used to generate a 3D model of the TP; this reconstruction is produced using photogrammetry and is an important part of the Digital Twin. Information from the AI algorithm is used in the pre-processing of the images. The 3D Digital Twin is updated with the positions, types, and sizes of the paint imperfections and damages identified in the images by the AI. These processing steps are summarized in the flowchart shown in Figure 1 and explained in detail in the next section. Information from the Digital Twin is used to update the physical structure: information on the position and type of paint damage can be used to determine whether maintenance of the physical structure is needed. After the offshore installation of the TPs, drones can be used to inspect the transition pieces at regular time intervals, and the Digital Twin will then be updated based on the new images, providing essential information for asset management. Green boxes are used in the flowchart to indicate the algorithms and processing steps that are new compared to Reference 19. The image segmentation steps and the 2D to 3D mapping technique that uses the projection along surface normals are all introduced in this paper. A major difference in the mapping algorithm is that the paint damage pixels are mapped to the tower itself instead of to the bounding box corners, which was the case in Reference 19.
[Figure 1 omitted: flowchart of the processing steps of the visual Digital Twin; green boxes mark steps that are new compared to Reference 19. See PDF.]
METHODOLOGY
The proposed Digital Twin presents the observed damages on a transition piece. Several steps have been taken to make a very detailed visual Digital Twin. The damages are found with artificial intelligence in images collected during drone flights. The images were collected with a DJI Zenmuse P1 45 MP camera onboard a DJI M300 RTK drone. A real-time kinematic (RTK) positioning solution was used to improve the precision of the GPS information saved in the metadata of the images; high-accuracy positions are needed especially for the photogrammetry reconstruction. The You Only Look Once (YOLO) algorithm, version 5, see References 24–26, was selected due to its benefits such as high classification accuracy and real-time throughput in our specific case. The images only need to pass through the network once, unlike with other object detector algorithms, hence the name of the algorithm. YOLO reasons at the level of the overall image instead of successively examining many regions. More than 2000 images, labeled with 10 different damage categories, were used to train the convolutional neural network. These images were augmented by varying the contrast, lighting, and so on, to improve the robustness of the network to the changes in light conditions caused by changing weather and seasons. The labeling of the images was performed with a purpose-built semiautomatic tool. The output of the YOLO algorithm is a bounding box containing the damage together with a confidence level, see Reference 27. Figure 2 shows an original drone image together with the content of the bounding box for four cases where the YOLO algorithm has identified some kind of defect or damage; the location of each bounding box is also shown in the drone image. The AI algorithm currently divides the defects into 10 categories, including rust, scuffs, and several paint damage types. Only the paint damage categories are present in the data set used here. A color threshold algorithm, see References 28,29, is applied to the image sections in all the bounding boxes. The black pixels in Figure 2A–C show the segmented pixels, which represent the paint damage in the image. These pixels are mapped to the reconstructed model or to a 3D CAD model that has been placed in the same georeferenced coordinate system as the drone images. The segmentation algorithm can distinguish between diffuse reflection and paint damage, as can be observed in Figure 2B, where diffuse reflection is visible in the upper half of the image. Shadows in the images are also not a problem for the segmentation algorithm, as seen in Figure 2C, where shadows appear in the upper half of the bounding box image.
[Figure 2 omitted: a drone image and the contents of four YOLO bounding boxes (A–D), with segmented paint damage shown in black. See PDF.]
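To make the color threshold step concrete, the following is a minimal Python sketch of how the segmentation inside a YOLO bounding box could be implemented with OpenCV. OpenCV, the HSV bounds, and the function name are illustrative assumptions; the study's own tooling and thresholds are not published here.

```python
import cv2
import numpy as np

def segment_damage_in_bbox(image, bbox, lower_hsv, upper_hsv):
    """Return a binary mask of candidate damage pixels inside a YOLO bounding box.

    image              : HxWx3 BGR drone image (as loaded by cv2.imread)
    bbox               : (x_min, y_min, x_max, y_max) pixel coordinates from YOLO
    lower_hsv/upper_hsv: HSV bounds describing the intact paint color
    """
    x0, y0, x1, y1 = bbox
    crop = image[y0:y1, x0:x1]
    hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
    # Pixels matching the intact paint color are background; everything
    # else inside the box is treated as candidate damage.
    paint = cv2.inRange(hsv, lower_hsv, upper_hsv)
    damage = cv2.bitwise_not(paint)
    # Light morphological opening suppresses isolated noise pixels.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(damage, cv2.MORPH_OPEN, kernel)

# Example with a hypothetical gray coating (low saturation, bright value):
# mask = segment_damage_in_bbox(img, (350, 120, 520, 300),
#                               np.array([0, 0, 90]), np.array([180, 60, 255]))
```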
AI sometimes falsely finds paint damage on other surfaces such as water or other TPs. An example is shown in Figure 3A, where the small bounding box is placed on the TP of interest while the larger bounding box is placed on a distant TP. Only the content of the bounding box on the center TP should be mapped to the 3D model of the tower. Masking the images can solve this problem if the TPs are not placed too close to each other, a condition that is always met when the TPs are placed in the wind farm. The images with paint damage have been masked in three steps (see the sketch after the figure). First, segmentation using the color threshold approach is applied to the image. Second, the largest coherent area (the TP) in the calculated binary mask is used to generate a new binary mask where all pixels corresponding to the TP are 1 and the rest are 0. Third, this binary mask is applied to the original image.
[Figure 3 omitted: (A) original drone image with two YOLO bounding boxes; (B) the masked image. See PDF.]
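The three masking steps could be implemented along the following lines; the use of OpenCV's connected-component analysis is an assumption, chosen to illustrate the selection of the "largest coherent area" described above, and it presumes the TP is visible in the paint mask.

```python
import cv2
import numpy as np

def mask_center_structure(image, paint_mask):
    """Keep only the largest coherent paint-colored region (the TP)
    and black out the rest of the image."""
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(
        paint_mask, connectivity=8)
    # Label 0 is the image background; pick the largest remaining component.
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    tp_mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    # Applying the binary mask removes distant TPs, sea surface, etc.
    return cv2.bitwise_and(image, image, mask=tp_mask)
```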
The drone flies in circles around the tower at several heights above the ground, so the physical asset, here the TP, will always be the largest item in the images; the photogrammetry technique requires that this is the case, and images where it is not are discarded. If the asset has several colors, the color threshold approach cannot be used. A segmentation candidate could in this case be a graph-based technique like lazy snapping, which makes it possible to segment an image into foreground and background regions, with the transition piece as the foreground. Different segmentation techniques should be selected based on the specific circumstances. However, the color threshold segmentation technique is very suitable for paint damage detection because only objects with a specific color are of interest, and it is therefore not a problem that objects with other colors are removed by the segmentation.
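As an illustration of a graph-based alternative, the sketch below uses OpenCV's GrabCut, a graph-cut method in the same family as lazy snapping (which OpenCV does not ship); it is a stand-in for illustration, not the method used in the study.

```python
import cv2
import numpy as np

def segment_foreground_grabcut(image, rect, iterations=5):
    """Split the image into foreground (the structure) and background.

    rect is a rough (x, y, width, height) box around the structure;
    GrabCut refines the boundary from the color statistics on each side.
    """
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GMM state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Definite and probable foreground labels mark the structure.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return (fg * 255).astype(np.uint8)
```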
The original image with two YOLO bounding boxes is shown in Figure 3A while the masked image is shown in Figure 3B. The first bounding box is placed in the dark area of the masked image and the corresponding pixels will not be mapped to the TP. Only the correctly placed bounding box pixels will be mapped to the TP. This is done using an approach discussed in the following section. All images with paint damage should be masked using this approach. Depending on the light conditions during the capture of the drone images it can be necessary to calculate and use a few different binary masks on the original images.
A method that maps the paint damages found in the 2D images using a combined AI and color threshold approach onto a large-scale structure, in this case, a transition piece, is explained in Reference 19. This method has also been applied in this study. The method combines the information found in the metadata of the images with the position of the known 3D model of the large-scale structure. This 3D model can either be a photogrammetry reconstruction, see References 30,31 of the large-scale structure or a CAD model that has been moved to the correct position in the georeferenced coordinate system used in the images.
The purpose of mapping the damages onto the large-scale structure is to make a detailed visual Digital Twin that gives an overview of the defects/damages in 3D, thus making it possible to identify any systematics in the positions of the damages. This information can be used in the optimization of production. The YOLO algorithm, together with the color threshold algorithm, identifies the pixels in an image that correspond to paint defects/damages. This 2D information is mapped to the 3D CAD or reconstructed model using Equation (1) below:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \mathbf{R}(P, R, Y)\, D_{CT} \begin{pmatrix} dl_x \\ dl_y \\ 1 \end{pmatrix} + \begin{pmatrix} cpos_x \\ cpos_y \\ cpos_z \end{pmatrix} \tag{1}$$
The three different coordinate systems used to perform the mapping can be seen in Figure 4A. The world coordinate system (x, y, z) of the 3D object is shown together with the colored orthogonal basis of the local camera coordinate system; the x-, y-, and z-axes are colored blue, red, and black, respectively. The third coordinate system is generated when the local camera coordinate system is moved to the center of the camera sensor. (x', y') is the pixel position of the damage in the image, while (cpos_x, cpos_y, cpos_z) is the position of the camera in world coordinates, that is, cpos_x, cpos_y, and cpos_z stand for the camera position x, y, and z coordinates. The distance from the camera to the TP in the direction in which the camera is pointing is given by D_CT. f_k and f_l are the focal lengths of the camera expressed in pixels along the horizontal and vertical directions of the camera sensor; these two parameters are equal for many cameras. c_x and c_y measure the number of pixels from the upper left corner to the center of the camera sensor. The pitch, roll, and yaw of the drone (and camera) are abbreviated with the letters P, R, and Y in Equation (1). The yaw angle determines the direction in which the camera is pointing; this direction is given by the green ray in the figure. The first matrix in Equation (1) is the rotation matrix that transforms local camera coordinates into world coordinates. A more detailed explanation of Equation (1) can be found in Reference 19.
[Figure 4 omitted: (A) the world, camera, and sensor coordinate systems; (B) the red point surface, its normals, and the projected green points on the TP. See PDF.]
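A minimal sketch of Equation (1) is given below. It assumes the rotation matrix has already been assembled from pitch, roll, and yaw in the convention of Reference 19 and that the optical axis is the local camera z-axis; the variable names are illustrative.

```python
import numpy as np

def pixel_to_world(px, py, cam_pos, d_ct, rot_cam2world, fk, fl, cx, cy):
    """Map one damage pixel (x', y') to 3D world coordinates, Equation (1).

    cam_pos       : camera position (cpos_x, cpos_y, cpos_z) in world coords
    d_ct          : distance D_CT from the camera to the tower
    rot_cam2world : 3x3 rotation matrix built from pitch, roll, and yaw
    fk, fl        : focal lengths in pixels along the sensor axes
    cx, cy        : sensor center in pixels from the upper left corner
    """
    # Dimensionless pixel values, Equation (2).
    dlx = (px - cx) / fk
    dly = (py - cy) / fl
    # Ray through the pixel, scaled so its depth equals the tower distance.
    ray_cam = d_ct * np.array([dlx, dly, 1.0])
    # Rotate into world coordinates and add the camera position.
    return np.asarray(cam_pos, dtype=float) + rot_cam2world @ ray_cam
```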
Cameras in drones are often not of high quality, and the lenses are small; distortions can therefore often be seen in drone images. The effect of radial and tangential lens distortion can be added to the equation that maps the 2D pixels of paint damage to the 3D model. The dimensionless pixel values dl_x and dl_y used in Equation (1) are given by

$$dl_x = \frac{x' - c_x}{f_k}, \qquad dl_y = \frac{y' - c_y}{f_l} \tag{2}$$
These parameters can be used in the calculation of the radial (x_rdis, y_rdis) and tangential (x_tdis, y_tdis) distortions, here written in the standard Brown form with radial coefficients k_1, k_2, k_3 and tangential coefficients p_1, p_2 obtained from camera calibration, where r^2 = dl_x^2 + dl_y^2:

$$x_{rdis} = dl_x\left(k_1 r^2 + k_2 r^4 + k_3 r^6\right), \qquad y_{rdis} = dl_y\left(k_1 r^2 + k_2 r^4 + k_3 r^6\right),$$
$$x_{tdis} = 2 p_1\, dl_x\, dl_y + p_2\left(r^2 + 2\, dl_x^2\right), \qquad y_{tdis} = p_1\left(r^2 + 2\, dl_y^2\right) + 2 p_2\, dl_x\, dl_y \tag{3}$$
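A corresponding sketch of the distortion correction, under the same standard-form assumption, could look as follows.

```python
import numpy as np

def lens_distortion(dlx, dly, k1, k2, k3, p1, p2):
    """Radial and tangential distortion terms in the standard (Brown) form
    assumed above; k1..k3 and p1, p2 come from camera calibration."""
    r2 = dlx**2 + dly**2
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_rdis, y_rdis = dlx * radial, dly * radial            # radial part
    x_tdis = 2 * p1 * dlx * dly + p2 * (r2 + 2 * dlx**2)   # tangential part
    y_tdis = p1 * (r2 + 2 * dly**2) + 2 * p2 * dlx * dly
    # Distortion-corrected dimensionless pixel values for Equation (1).
    return dlx + x_rdis + x_tdis, dly + y_rdis + y_tdis
```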
The camera position and viewing angle can be extracted from the metadata in the image files. The precise location of the TP is also known, which means that the distance D_CT from the camera to the TP needed in the equations can be calculated. The 3D world coordinates of all the paint defect/damage pixels in an image, calculated using Equations (1)–(3), are represented by the red points and the corresponding surface in Figure 4A,B. These red points are in general placed behind, in front of, or on the TP.
Figure 4B shows, in blue, the normals to the surface given by the red points. The projection of the red points along the local normal onto the TP surface results in the green points. Both positive and negative normals to the surface must be evaluated because the red points can lie both in front of and behind the large-scale structure. The mapping of the paint damage pixels found in one of the drone images onto the 3D model is given by these green points. The calculated projection points can be anywhere on the mesh, hence also between the vertices of the mesh. A faster but less accurate approach is to find, for a given point on the red surface, the closest point on the TP. This algorithm is less precise because only the vertices of the mesh can be used, but it can still give very good results if the resolution of the mesh is high and the details of the mapped 2D points are relatively coarse. This is illustrated in Figure 5, where a not-too-detailed drawing of a "blue dog" is mapped onto the TP using this nearest point method (both approaches are sketched in code after the figure). A small drone yaw angle and a placement of the drawing close to the center of the mapped image also contribute to the good result.
[Figure 5 omitted: a "blue dog" drawing mapped onto the TP using the nearest point method. See PDF.]
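Both projection variants could be sketched as follows. The trimesh and SciPy libraries are assumptions used for illustration: the mesh ray casting plays the role of the projection along the positive and negative normals, and a k-d tree over the mesh vertices plays the role of the nearest point method.

```python
import numpy as np
import trimesh
from scipy.spatial import cKDTree

def project_along_normals(points, normals, mesh):
    """Project image-derived 3D points (the red points) onto the structure
    mesh along their local surface normals, testing both +n and -n."""
    projected = np.full_like(np.asarray(points, dtype=float), np.nan)
    for i, (p, n) in enumerate(zip(points, normals)):
        hits, _, _ = mesh.ray.intersects_location(
            ray_origins=np.vstack([p, p]),
            ray_directions=np.vstack([n, -n]))
        if len(hits):
            # Keep the intersection closest to the original point.
            projected[i] = hits[np.argmin(np.linalg.norm(hits - p, axis=1))]
    return projected

def project_nearest_vertex(points, mesh_vertices):
    """Faster, coarser alternative: snap each point to the nearest vertex
    of the (high-resolution) TP mesh."""
    tree = cKDTree(mesh_vertices)
    _, idx = tree.query(points)
    return mesh_vertices[idx]
```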
The main steps in the mapping method can be summarized in the following processes; a consolidated sketch follows the list.
- The YOLO algorithm calculates the four image pixel coordinates for a bounding box that contains the area with the surface damage.
- Color threshold segmentation is then applied to the image section in the bounding box.
- The 2D image pixel coordinates corresponding to the shape of the damage are saved together with the three RGB color values in the original image.
- Equation (1) is then used on every damage pixel in the image, giving a surface expressed in 3D world coordinates. These coordinates, however, need to be corrected, which can be done in two ways. The simplest is to project every point on the surface onto the TP by finding the smallest distance between the point and the TP. A more precise approach is to calculate the surface normal at every point of the surface and then project the points onto the TP along these normals. Here, projections along both the positive and negative normals need to be evaluated because the surface of points calculated using Equation (1) can intersect the walls of the tower.
- The projected points on the TP can then be colored using an arbitrary color or the saved color from the original image.
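Tying the listed steps together, a consolidated per-image sketch might look as follows. Here run_yolo, load_camera_metadata, and estimate_normals are hypothetical stand-ins for the study's tooling, and the remaining helpers are those sketched earlier.

```python
import cv2
import numpy as np

def map_image_to_twin(image_path, mesh, intrinsics, hsv_bounds):
    """End-to-end sketch of the five listed steps for a single drone image."""
    image = cv2.imread(image_path)
    meta = load_camera_metadata(image_path)      # pose and D_CT from metadata
    world_pts, colors = [], []
    for bbox in run_yolo(image):                 # step 1: bounding boxes
        damage = segment_damage_in_bbox(image, bbox, *hsv_bounds)  # step 2
        ys, xs = np.nonzero(damage)
        for x, y in zip(xs + bbox[0], ys + bbox[1]):               # step 3
            world_pts.append(pixel_to_world(                       # step 4
                x, y, meta["pos"], meta["d_ct"], meta["rot"], *intrinsics))
            colors.append(image[y, x].tolist())  # keep the original color
    pts = np.asarray(world_pts)
    normals = estimate_normals(pts)              # e.g., local plane fits
    return project_along_normals(pts, normals, mesh), colors       # step 5
```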
The accuracy of the mapping from 2D to 3D depends on the geometry of the large-scale structure and on the position of the drone relative to the structure. If the image plane of the camera sensor is tangential to the tower (the red surfaces in the figures above), the mapping will be very close to perfect when the local surface of the large-scale structure can be approximated by a plane. The local radius of curvature of a plane surface is infinite; decreasing the structure's local radius of curvature will decrease the mapping accuracy because the 2D image pixels get compressed when they are mapped onto the tower. It is in general not possible to perfectly map a 2D image onto a structure with a relatively small local radius of curvature; for the same reason, the many types of 2D world map projections each have their own characteristic distortions. In the case of the transition pieces, most of the surfaces of interest have a very large local radius, giving a high mapping accuracy. Using the correct values of the digital camera's focal length and sensor dimensions in the equations above ensures that sizes and distances in the images are mapped correctly to the large-scale structure.
DEMONSTRATION OF MAPPING SURFACE PAINT DAMAGE ON A TRANSITION PIECE
Using the techniques discussed in the previous section, it is possible to map all the paint damages found in images acquired during a drone flight. The paint damage pixels are found using the YOLO and color threshold algorithms, and the mapping is here performed using the projection along surface normals. The results are presented in Figure 6A, where a CAD model of the TP moved to the georeferenced coordinate system is used. The majority of the paint damages are concentrated on the lower section of the TP; this information can be used in the optimization of TP production. The mapped paint damages are divided into clusters in Figure 6B to give an overview of the areas where the damages are located (a clustering sketch is given after the figure). If the smallest distance between paint damages in different clusters is chosen to be 75 cm, the total number of clusters becomes seven. The points in each cluster have a unique color. The surface areas of the clusters are shown in the figure together with image numbers, which makes it possible to locate the images that show paint damage in a specific cluster. The largest cluster has paint damage that can be seen in eight different images. As expected, the paint damage surface areas of most clusters are small; cluster 1, however, has a large surface area because many small damages are scattered around this area. The paint damages in Figure 2A–D are all mapped to this cluster. The unit for the surface areas is square meters. Figure 6C shows the 3D model reconstructed from all the images captured during the flight; the photogrammetry software ContextCapture from Bentley Institute, see Reference 32, has been used to generate the reconstruction. The quality of the 3D geometric reconstruction is limited because only 445 images were available from this inspection. The number of tie-points calculated and used in ContextCapture is relatively low on the cylindrical part of the TP, resulting in a poor 3D reconstruction of that part. The low number of tie-points is caused by the very limited texture in these sections of the images, where large areas show only slight color differences. The number of tie-points on the TP platforms is higher, leading to a better 3D reconstruction there. The blue points in Figure 6C show the mapped paint damages. When the Digital Twin is applied in an operational environment, the paint damages from the images will normally be mapped to a 3D CAD model of the TP. The reasons are the long time it takes to calculate a 3D reconstruction of the TP with photogrammetry, more than 8 h on an HP workstation, and the fact that the images needed to detect paint damage are often not of the type that can be used in the 3D reconstruction: reconstruction images must contain the TP but also some parts of the surrounding areas, so the total number of needed images becomes very large and the acquisition a time-consuming process.
[Figure 6 omitted: (A) paint damages mapped onto the CAD model of the TP; (B) damage clusters with surface areas and image numbers; (C) 3D reconstructed model with the mapped damages shown as blue points. See PDF.]
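The paper does not name the clustering algorithm; one way to reproduce the described grouping, where clusters are separated by at least 75 cm, is density-based clustering with the gap as the neighborhood radius, as sketched below with scikit-learn (an assumed dependency).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_damages(points, min_gap=0.75):
    """Group mapped damage points so that points closer than `min_gap`
    meters end up in the same cluster; with a 0.75 m gap this reproduces
    the seven clusters reported for the TP demonstration."""
    labels = DBSCAN(eps=min_gap, min_samples=1).fit_predict(np.asarray(points))
    return labels  # one integer cluster label per mapped damage point
```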
The mapping method in which the paint damage pixels are projected along surface normals is, as stated above, more accurate than the closest point method. The former method is needed when the paint damage is contained in a small area; the closest point approach would require a very high mesh resolution of the TP. Figure 7 shows an example where the paint damage has a small surface area. Figure 7A shows the content of the bounding box calculated using the YOLO algorithm; the paint damage has a light yellow color and is placed near the center of the bounding box. Figure 7B shows the output of the color threshold segmentation algorithm. The result of mapping the black paint damage pixels to the TP is presented in Figure 8A.
[Figure 7 omitted: (A) content of the YOLO bounding box with a small light yellow paint damage; (B) output of the color threshold segmentation. See PDF.]
[Figure 8 omitted: (A) the paint damage mapped onto the TP; (B) the corresponding damage in the original drone image. See PDF.]
The corresponding paint damage in the original drone image file is shown as a reference in Figure 8B. The paint damage is represented by the blue points in both figures. The surface area of the mapped paint damage is approximately 3.1 cm2. This particular paint damage is only a small part of the damages that make up cluster 5 on the TP, and the same damage shape can be found in different places scattered around the upper parts of the TP, see Figure 6B. The position, shape, and size of the paint damage, relative to the TP and the surroundings, are the same in Figure 8A,B, indicating that the algorithms used for finding and mapping the paint damages are accurate. The accuracy depends on the geometry of the large-scale structure and the mapping algorithm, as discussed in Section 3. The segmentation algorithm and its settings are also important for the model's accuracy, since the specific settings of the color threshold segmentation determine the shape of the paint damage that is mapped to the tower. The shapes and sizes of the paint damages were estimated from a large number of images of the transition piece, a technique currently in use by many manufacturers. Great care was taken in the selection of the color threshold settings to ensure that the mapped paint damage shapes and sizes were very close to the values found during manual image inspection.
DEMONSTRATION OF MAPPING DELAMINATION DAMAGE ON ROTOR BLADE
In this demonstration, a composite rotor blade structurally tested in a previous study33 is used to map delamination damage onto a 3D geometry model generated for finite element simulation. The blade was subjected to cyclic loading in a full-scale structural testing laboratory, see Figure 9. Delamination damage occurred inside the load-carrying spar cap laminates and could be inspected visually. Detailed information on the experiment and the damage detection is not presented here but can be found in Reference 33. The current study uses only the damage inspection images and maps them onto a numerical model, creating a visual Digital Twin.
[Figure 9 omitted: the composite rotor blade under cyclic loading in the full-scale structural testing laboratory. See PDF.]
Images of the areas with damage can be regularly mapped to a CAD model of the wind turbine blade, and the updated Digital Twin can be used to accurately estimate the damage progression. In the final version of the tool, the capture of damage images, the mapping of the images onto the model, and the estimation of defect sizes will all be done automatically. Figure 10A depicts a segment of the blade in Figure 9 after the fatigue experiment.
[Figure 10 omitted: a segment of the blade in Figure 9 after the fatigue experiment, showing the visually inspected delamination areas. See PDF.]
Using the mapping algorithms demonstrated in Section 4, four color images of these delamination areas have been mapped onto the blade CAD model. The red, green, and blue color values of each image pixel are used in the mapping process. From this Digital Twin model, the damage growth can be calculated in a virtual environment, allowing regular and efficient assessment of the structural integrity of the physical twin by updating the Digital Twin with new damage inspection images.
CONCLUDING REMARKS
This study has developed and demonstrated a methodology to create a visual 3D Digital Twin of large-scale structures mapped with the precise defects/damages detected in inspection images. The visual Digital Twin gives a detailed representation of the size, location, and shape of each unique damage found on the structure. The methodology is demonstrated on a transition piece and a rotor blade of wind turbines to showcase new functionalities beyond the current state of the art, making it possible to map small damages onto large-scale structures in 3D with the correct shape and size. The image processing techniques together with the new projection along surface normals method result in an accurate Digital Twin model. It should be noted that the current method relies on color thresholding and works well for structures with uniform colors and without intricate details surrounding them. The usability of the Digital Twin could be strengthened by adding more layers of data from other inspection techniques, such as thermography, and data obtained from sensors installed in the structure, such as strain gauges and accelerometers. The Digital Twin of the wind turbine blade under testing is expected to be further developed into a more automated tool in a future project.
AUTHOR CONTRIBUTIONS
Hans-Henrik von Benzon: Data curation (lead); formal analysis (lead); investigation (lead); methodology (lead); writing – original draft (lead). Xiao Chen: Conceptualization (lead); funding acquisition (lead); methodology (equal); project administration (lead); resources (lead); supervision (lead); visualization (equal); writing – original draft (supporting); writing – review and editing (lead).
ACKNOWLEDGMENTS
This study is partly supported by the QualiDrone project (Intelligent, autonomous drone inspection of large structures within the energy industry, 64020-2099), the RELIABLADE project (Improving Blade Reliability through Application of Digital Twins over Entire Life Cycle, 64018-0068), and the AQUADA-GO project (Automated blade damage detection and near real-time evaluation for operational offshore wind turbines, 64022-1025), all through the Energy Technology Development and Demonstration Program (EUDP) of Denmark.
CONFLICT OF INTEREST STATEMENT
The authors declare that they have no known potential sources of conflict of interest including competing financial interests or personal relationships that could have appeared to influence the work in this study.
PEER REVIEW
The peer review history for this article is available at .
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
REFERENCES
1. Liu Y, Zhang L, Yang Y, et al. A novel cloud-based framework for the elderly healthcare services using digital twin. IEEE Access. 2019;7:49088-49101.
2. Gahlot S, Reddy SRN, Kumar D. Review of smart health monitoring approach with survey analysis and proposed framework. IEEE Internet Things J. 2019;6(2):2116-2127.
3. Mohammadi N, Taylor JE. Smart city digital twins. Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI). IEEE; 2017:1-5.
4. Ruohomaki T, Airaksinen E, Huuska P, Kesaniemi O, Martikka M, Suomisto J. Smart city platform enabling digital twin. Proceedings of the International Conference on Intelligent Systems (IS). IEEE; 2018:155-161.
5. Fuller A, Fan Z, Day C, Barlow C. Digital twin: enabling technologies, challenges and open research. IEEE Access. 2020;8:108952-108971.
6. Sivalingam K, Sepulveda M, Spring M, Davies P. A review and methodology development for remaining useful life prediction of offshore fixed and floating wind turbine power converter with digital twin technology perspective. Proceedings of the 2nd International Conference on Green Energy and Applications (ICGEA). IEEE; 2018:197-204.
7. Pargmann H, Euhausen D, Faber R. Intelligent big data processing for wind farm monitoring and analysis based on cloud-technologies and digital twins: a quantitative approach. Proceedings of the IEEE 3rd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA). IEEE; 2018:233-237.
8. Bolton RN, McColl-Kennedy JR, Cheung L, et al. Customer experience challenges: bringing together digital, physical and social realms. J Serv Manag. 2018;29(5):776-808.
9. Eder MA, Chen X. FASTIGUE: a computationally efficient approach for simulating discrete fatigue crack growth in large-scale structures. Eng Fract Mech. 2020;233:107075.
10. Chen X, Eder MA. A critical review of damage and failure of composite wind turbine blade structures. IOP Conf Ser: Mater Sci Eng. 2020;942(1):12001.
11. Calabrese F, Regattieri A, Bortolini M, Gamberi M, Pilati F. Predictive maintenance: a novel framework for a data-driven, semi-supervised, and partially online prognostic health management application in industries. Appl Sci. 2021;11(8):3380.
12. Calabrese F, Regattieri A, Botti L, Galizia FG. Prognostic health management of production systems. New proposed approach and experimental evidences. Proc Manuf. 2019;39:260-269.
13. Shihavuddin ASM, Chen X, Fedorov V, et al. Wind turbine surface damage detection by deep learning aided drone inspection analysis. Energies. 2019;12(4):676.
14. Ali R, Kang D, Suh G, Cha Y-J. Real-time multiple damage mapping using autonomous UAV and deep faster region-based neural networks for GPS-denied structures. Autom Constr. 2021;130:103831.
15. Kang D, Cha Y-J. Autonomous UAVs for structural health monitoring using deep learning and an ultrasonic beacon system with geo-tagging. Comput Aided Civ Inf Eng. 2018;33:885-902.
16. Choi W, Cha Y-J. SDDNet: real-time crack segmentation. IEEE Trans Ind Electron. 2020;67(9):8016-8025.
17. Cha Y-J, Choi W, Büyüköztürk O. Deep learning-based crack damage detection using convolutional neural networks. Comput Aided Civ Inf Eng. 2017;32:361-378.
18. Chaiyasarn K, Buatik A, Mohamad H, Zhou M, Kongsilp S, Poovarodom N. Integrated pixel-level CNN-FCN crack detection via photogrammetric 3D texture mapping of concrete structures. Autom Constr. 2022;140:104388.
19. Benzon H-H, Chen X, Belcher L, Castro O, Branner K, Smit J. An operational image-based digital twin for large-scale structures. Appl Sci. 2022;12:3216.
20. Mandirola M, Casarotti C, Peloso S, Lanese I, Brunesi E, Senaldi I. Use of UAS for damage inspection and assessment of bridge infrastructures. Int J Disaster Risk Reduct. 2022;72:102824.
21. Morgenthal G, Hallermann N, Kersten J, et al. Framework for automated UAS-based structural condition assessment of bridges. Autom Constr. 2019;97:77-95.
22. Hammad AWA, da Costa BBF, Soares CAP, Haddad AN. The use of unmanned aerial vehicles for dynamic site layout planning in large-scale construction projects. Buildings. 2021;11:602.
23. Kong K, Dyer K, Payne C, Hamerton I, Weaver PM. Progress and trends in damage detection methods, maintenance, and data-driven monitoring of wind turbine blades: a review. Renew Energy Focus. 2022;44:390-412.
24. Bochkovskiy A, Wang C-Y, Liao H-YM. YOLOv4: optimal speed and accuracy of object detection. arXiv:2004.10934 [cs, eess]. 2020. https://arxiv.org/abs/2004.10934
25. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2016:779-788.
26. Zhao S, Kang F, Li J. Concrete dam damage detection and localisation based on YOLOv5s-HSC and photogrammetric 3D reconstruction. Autom Constr. 2022;143:104555.
27. Jocher G, Stoken A, Chaurasia A, et al. ultralytics/yolov5: v4.0, nn.SiLU() activations, Weights & Biases logging, PyTorch Hub integration. 2021. Accessed February 13, 2021. https://github.com/ultralytics/yolov5
28. Cheng HD, Jiang XH, Sun Y, Wang J. Color image segmentation: advances and prospects. Pattern Recogn. 2001;34(12):2259-2281.
29. Kurugollu F, Sankur B, Harmanci AE. Color image segmentation using histogram multithresholding and fusion. Image Vis Comput. 2001;19(13):915-928.
30. Baqersad J, Poozesh P, Niezrecki C, Avitabile P. Photogrammetry and optical methods in structural dynamics: a review. Mech Syst Signal Process. 2017;86:17-34.
31. Remondino F, Barazzetti L, Nex F, Scaioni M, Sarazzi D. UAV photogrammetry for mapping and 3D modeling: current status and future perspectives. Int Arch Photogramm Remote Sens Spat Inf Sci. 2011;38(1):C22-C31.
32. Pascal M. 4D Digital Context for Digital Twins. Bentley Institute Inc; 2020.
33. Chen X, Semenov S, McGugan M, et al. Fatigue testing of a 14.3 m composite blade embedded with artificial defects: damage growth and structural health monitoring. Compos Part A: Appl Sci Manuf. 2021;140:106189.