1. Introduction
1. Introduction has been a growing field of study; since the 1940s, bridge failures have been a major concern to structural engineers and other environmental experts because they cause huge losses of resources [1,2]; hence, bridge conditions need to be investigated periodically to ascertain their strength and avoid failures within the serviceability period of the structures. Bridges fail for several reasons, including infrastructure issues, floods, accidents, construction incidents, design flaws and manufacturing errors, fires, and earthquakes [3].
For steel bridges, bolt-loosening usually leads to bolted-joint failures, which can consequently cause collapse [4,5]. Factory errors during bolt manufacturing can also be responsible for early damage in service [6]. The Federal Highway Administration (FHWA) proposed physical visual inspection of bridges to avoid bridge collapse due to bolt-loosening [7]; however, the accuracy of this approach is affected by human error [8]. The limitations of human visual inspection led to the adoption of other scientific techniques for damage detection on structures [9]. The adoption of a probabilistic approach for system analysis failed to yield reliable results [10,11]; therefore, computer vision techniques were introduced [12,13,14,15,16]. Damage detection on structures for structural health monitoring (SHM) has advanced significantly over the years [17,18,19].
Machine learning (ML) is a branch of artificial intelligence (AI) defined as a machine’s ability to mimic intelligent human behavior. It deals with the development of systems that work without following explicit instructions [20]. The key approaches of ML include regression, classification, and clustering, and its applications cut across engineering, agriculture, commerce, and medicine, among many others [12,21,22]. Machine learning is broadly classified into four categories: supervised, unsupervised, semi-supervised, and reinforcement learning [23,24,25]. Deep learning is a branch of machine learning that relies on a set of algorithms arranged and executed to reveal abnormal or uncommon information reflecting system behavior [26]. The relationship between these AI-related techniques is presented in Figure 1.
Figure 1 shows the relationship between AI and the related techniques used in this study. Deep learning is widely used for condition inspection of various forms of large infrastructure, such as bridges, skyscrapers, and airplanes, and deep learning models show satisfactory performance in anomaly and damage detection in structures. The YOLOv4 algorithm for SHM, which combines performance-enhancing features such as weighted residual connections (WRC), cross-stage partial connections (CSP), cross-mini-batch normalization (CmBN), self-adversarial training (SAT), mish activation, mosaic data augmentation, and DropBlock regularization, achieved a maximum accuracy of less than 70% in most cases. The convolutional neural network, on the other hand, performed relatively better, with a high accuracy of approximately 91.00% [27,28]. Some common deep learning algorithms used for object detection on images include the following:
You Only Look Once (YOLOv4): This algorithm predicts or classifies objects on images by looking at the source image only once, that is, using a single forward propagation for real-time object detection and classification with a CNN [29]. Although the algorithm balances speed and accuracy with high performance, its major disadvantage is its inability to detect small objects in close-range locations, as reported by Nath and Behzadan [29].
Convolutional neural network (CNN): This technique processes data that have a grid pattern, such as images. It is a mathematical construct that is composed of convolution, pooling, and fully connected layers. The convolution and pooling layers perform the feature extraction on images, while the fully connected layer maps the extracted features into the final output, known as classification or prediction [18,30,31].
Bao and Li [32] examined structural health conditions using the CNN algorithm and recommended the development of a dedicated model to suit different sources of historic or time series data. To identify cracks in old concrete bridges, Kim et al. [33] suggested the use of an unmanned aerial vehicle (UAV) and a region-based convolutional neural network (R-CNN). Crack images were used to fine-tune a pre-trained R-CNN for crack detection, and image processing techniques (IPTs) were used to quantify the discovered cracks. Khodabandehlou et al. [34] developed a two-dimensional, eleven-layer CNN system whose detection results were 100% accurate, robust, and sensitive to changes in the structural condition. Validation was performed using acceleration data from shaking table tests of a reinforced concrete bridge model under various loads. On the other hand, Won et al. [35] used a two-dimensional CNN system to achieve a high accuracy of over 99%; the study was based on numerical simulations. Duan et al. [36] suggested a CNN-based method for detecting bridge degradation through acceleration responses. To create acceleration responses, a tied-arch bridge was subjected to numerical analysis with various damage scenarios, and damage detection performance was examined using the acceleration responses and the generated Fourier spectra as datasets. The use of the CNN yielded robust performance compared to other traditional methods. Tang et al. [37] created a five-layer CNN to detect and classify seven distinct anomalies in monitoring data from an SHM system. Classification accuracies of the system were measured as 96.5%, 98.9%, 86.1%, 81.5%, 92.3%, 91.0%, and 69.4%. The training process involved further splitting of the dataset into ratios from 1% to 3% for imbalanced and balanced data, which prolonged the classification process.
Park et al. [38] proposed a vision-based technique for detecting the loosening of bolts connecting tubular steel segments of a wind turbine tower structure and achieved an accuracy of up to 99.98% with the proposed algorithm. The limitation of the study was its laboratory-based (scaled) experimentation, an approach suggested to be inappropriate for real-life conditions [20]. Cha et al. [39] proposed a vision-based algorithm to detect loosened bolts based on the Hough transform and the support vector machine; the linear support vector machine achieved excellent results in detecting loosened bolts. To automatically detect nuts and bolts through computer vision-based recognition, Pramanik [13] developed an image processing and recognition system in MATLAB that achieved a nut and bolt detection accuracy of up to 74%. He [40] utilized computer vision technology for the automatic detection of bolts and nuts, employing the Hough circle transformation and achieving efficient detection accuracy for engineering and manufacturing purposes. Dhenge et al. [15] developed a computer vision-based object-sorting and fault detection algorithm using artificial neural networks, which achieved an accuracy of 87%. Zhang et al. [41] utilized a deep learning technique for bolt-loosening detection and achieved a test precision of 0.9503 in a study targeted at structural health monitoring. Recently, Zhou and Huo [16] utilized deep learning and artificial intelligence to detect fractured bolts on bridges and achieved an accuracy of 89.14%.
According to Pedamkar [23], machine learning attempts to draw insights from data or mimic human intelligence. The adoption of machine learning techniques for structural health monitoring has gained wide acceptance in recent times [8,42,43,44,45,46]. Azimi and Pekcan [28] employed the CNN approach to develop a novel model for SHM based on compressed response data and a transfer learning-based technique. The findings indicated that the CNN provided a suitable platform for the transfer learning algorithm, which produced reliable condition classifications. According to Qiao et al. [47], the use of a CNN for detecting cracks and exposed steel bars on bridge structures performed relatively poorly, with an average classification accuracy of less than 70%. Motivated by Zhou and Huo [16] and Svendsen et al. [20], this study proposes a novel approach for detecting bolts and nuts and nut holes using machine learning techniques to detect the possibility of bridge failure.
Long short-term memory (LSTM) algorithm: The LSTM is a deep learning architecture based on the recurrent neural network (RNN). Since there may be lags of undetermined duration between critical occurrences in a time series, LSTM networks are well-suited for categorizing, processing, and making predictions based on time series data. The output of an LSTM at any given time is determined by three factors: the network’s current long-term memory (the cell state), the output at the previous point in time (the prior hidden state), and the current time step’s input data. LSTMs employ a number of ‘gates’ that regulate how data in a sequence enter, are stored in, and exit the network. A typical LSTM has three gates: a forget gate, an input gate, and an output gate. Each gate is itself a small neural network and can be thought of as a filter.
On the other hand, other techniques such as the segmentation approach adopted by the YOLOv4 algorithm yielded a classification accuracy of approximately 87.17% [48]. Zhang et al. [49] used the YOLOv3 algorithm to achieve a performance accuracy of 80%, with an improvement of 13% compared to previous models.
The overall performance of the selected algorithms is examined using the analysis of variance (ANOVA) approach for the test of hypothesis. The null hypothesis stated that there is no statistically significant difference between the detection and classification accuracies of the models used when tested at a 5% level of significance for reliable and effective decisions [50].
The aim of this research is to develop a novel framework (simple and fast-converging) for the detection of nut–bolt loss in steel bridges using deep learning techniques. The specific objectives are to: design a framework for the detection of nuts and bolts and nut holes on images for damage detection using deep learning techniques, implement the designed framework using Python programming, and evaluate the performance of the designed framework using metrics of the proposed model based on the accuracy, F1 score, recall value, confusion matrix, average IoU, and mean average precision (MAP).
2. Materials and Methods
2.1. Data Collection and Description
The datasets in the form of images and videos considered in this study were generated at various steel truss bridge sites in Wuxi, Jiangsu Province, China. The video dataset was further pre-processed to extract frames that were then fed into the deep learning models. To extract the relevant information from the images of the steel truss bridges under investigation for the detection of bolts and nuts and nut holes, pre-processing was needed to adjust the original image size (in pixels) and ensure a suitable object perspective for better identification and classification.
A steel bridge structure is made up of elements joined using nuts and bolts, with plates fastened to create a rigid structure. Bolts are very important elements of a steel truss bridge, and it is therefore essential to detect and classify the conditions of bolts and nuts on bridges for efficient and reliable structural health monitoring [37]. A typical section of a steel truss bridge is shown in Figure 2.
Given an image as shown in Figure 2, the image was first split into 8 × 8 blocks and then converted into grayscale. The 8 × 8 block size was adopted to achieve a simple system that could aid fast convergence. The grayscale blocks were then ready to be fed into the CNN and LSTM for classification and identification of bolts and nuts, and empty nut holes, on the steel bridges, as identified in Figure 3 and Figure 4 for the plan and inclined views, respectively. To detect the individual bolts and nuts and empty nut holes, the YOLOv4 algorithm was utilized.
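As a minimal illustration of this pre-processing step (a sketch, not the study's exact code), the block extraction and grayscale conversion could be implemented with OpenCV and NumPy as follows; the file name is illustrative, and the 8 × 8 blocks are assumed here to be non-overlapping 8 × 8-pixel sub-images:

```python
import cv2
import numpy as np

def extract_blocks(image_path, block_size=8):
    """Split a bridge image into block_size x block_size grayscale blocks
    using a simple non-overlapping sliding-window pass."""
    image = cv2.imread(image_path)                       # BGR image from disk
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # convert to grayscale
    h, w = gray.shape
    blocks = []
    for y in range(0, h - block_size + 1, block_size):       # slide over rows
        for x in range(0, w - block_size + 1, block_size):   # slide over columns
            blocks.append(gray[y:y + block_size, x:x + block_size])
    return np.array(blocks)

# Example usage (the path is a placeholder):
# blocks = extract_blocks("bridge_image.jpg")
# print(blocks.shape)   # (num_blocks, 8, 8)
```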
2.2. Model Description
The methods used in this study were the convolutional neural network (CNN), long short-term memory (LSTM), and You Only Look Once (YOLOv4) machine learning techniques.
2.2.1. Convolutional Neural Network (CNN)
The training of a CNN aims at determining the values of all parameters that minimize a loss function. The loss function measures the discrepancy between the target outputs and the outputs produced from the inputs used. According to Li et al. [51], the stochastic gradient descent approach is very useful for this purpose. Minimization of the loss function is estimated from the impact of small variations of the parameter values in the CNN. The minimization is expressed as a differential function of the parameters and the loss function value, as shown in Equation (1):
$$W_{k+1} = W_{k} - \eta \frac{\partial E}{\partial W_{k}} \quad (1)$$

where $W$ represents the values of the parameters that minimize the loss function, $E$ is the loss function, $\eta$ is the learning rate, and $k$ is the number of iterations. In its simplest form, the loss function, $E$, is estimated using Equation (2):

$$E = \frac{1}{2N}\sum_{i=1}^{N}\left\| \mathbf{t}_{i} - \mathbf{y}_{i} \right\|^{2} \quad (2)$$

where $N$ represents the batch size, $\mathbf{t}_{i}$ is the i-th target vector, and $\mathbf{y}_{i}$ is the i-th output vector.
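As a small numerical sketch of Equations (1) and (2), assuming the mean-squared-error form reconstructed above and using made-up target and output vectors:

```python
import numpy as np

def mse_loss(targets, outputs):
    """Loss function E from Equation (2): squared error averaged over a batch of N samples."""
    n = targets.shape[0]
    return np.sum((targets - outputs) ** 2) / (2 * n)

def sgd_step(weights, gradient, learning_rate=0.01):
    """Parameter update from Equation (1): move W against the gradient of E."""
    return weights - learning_rate * gradient

# Illustrative numeric check with made-up values
t = np.array([[1.0, 0.0], [0.0, 1.0]])   # target vectors t_i
y = np.array([[0.8, 0.2], [0.1, 0.9]])   # output vectors y_i
print(mse_loss(t, y))                     # small positive loss
```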
The CNN machine learning algorithm was used in this research to detect vision-based infrastructural failures such as bolts and nuts and nut holes on images and videos of bridges. The method was used to develop a framework to classify and detect images/videos containing bolts and nuts and nut holes for structural health monitoring [52]. The dataset of images and videos captured from bridges contained bolts and nuts and nut holes. The images were pre-processed by splitting them into 8 × 8 blocks for speedy processing and accurate object detection, then fed into the CNN deep learning framework for identification and detection of infrastructural failures. This method is adequate since the study seeks to draw meaningful insights from images/videos and detect faults on steel bridges based on bolts and nuts and empty nut holes for the purpose of structural health monitoring. The CNN model analyzed the images using stepwise feature extraction by convolution (C1 and C2) and pooling (P1 and P2) to achieve a high and accurate detection level for each class and produce results in charts and tables for easy understanding, as seen in Figure 5.

Figure 5 shows that, taking a bridge image into consideration, an 8 × 8 sub-image was first extracted from the image and converted into grayscale format. A sliding window approach was used to cover the entire image under consideration. The grayscale blocks were then fed into the CNN, taking advantage of the convolution and pooling operations. The process was repeated with varying parameters until satisfactory results were obtained. In this way, bolts and nuts and nut holes on the bridge structures under consideration were detected and classified accordingly.
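A possible Keras sketch of the two-convolution, two-pooling pipeline described for Figure 5 is given below; the filter counts, dense layer size, and optimizer are assumptions, since the study does not report them:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(8, 8, 1), num_classes=2):
    """Two convolution (C1, C2) and two pooling (P1, P2) stages followed by
    a fully connected classifier, as outlined for Figure 5."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),  # C1 (filter count assumed)
        layers.MaxPooling2D((2, 2)),                                   # P1
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),  # C2 (filter count assumed)
        layers.MaxPooling2D((2, 2)),                                   # P2
        layers.Flatten(),
        layers.Dense(64, activation="relu"),                           # fully connected layer
        layers.Dense(num_classes, activation="softmax"),               # class 0: bolts/nuts, class 1: nut holes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn()
# model.fit(x_train, y_train, epochs=100, validation_data=(x_val, y_val))
```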
2.2.2. Long Short-Term Memory (LSTM)
In the LSTM algorithm, the short-term memory, also known as the hidden state, and the long-term memory (the cell state) are combined with the input data fed into the system. The system is divided into three major stages: the forget gate, the input gate, and the output gate. The forget gate discards irrelevant information entering the system, the input gate adds or updates new information that may be relevant, and the output gate produces the results, in the form of classifications, based on mathematical functions. The LSTM methodology used in this study is presented in Figure 6.
The relationship between the mathematical functions of the system is as presented below.
Forget Gate:
The mathematical expression for the forget gate is as shown in Equation (3):
$$f_{t} = \sigma\left( W_{f}x_{t} + U_{f}h_{t-1} \right) \quad (3)$$

where $x_{t}$ is the input at the current stage, $W_{f}$ is the weight matrix associated with the input, $h_{t-1}$ is the hidden state of the previous stage, and $U_{f}$ is the weight matrix associated with the hidden state, such that the value of $f_{t}$ varies between 0 and 1 for every cell state when a sigmoid function is applied over it, based on the conditions shown in Equations (4) and (5), respectively:

$$C_{t-1} \odot f_{t} = 0 \ \text{if} \ f_{t} = 0 \ \text{(the previous cell state is completely forgotten)} \quad (4)$$

$$C_{t-1} \odot f_{t} = C_{t-1} \ \text{if} \ f_{t} = 1 \ \text{(the previous cell state is fully retained)} \quad (5)$$
Input Gate:
The mathematical expression for the input gate is as shown in Equations (6) and (7):
$$i_{t} = \sigma\left( W_{i}x_{t} + U_{i}h_{t-1} \right) \quad (6)$$

$$N_{t} = \tanh\left( W_{c}x_{t} + U_{c}h_{t-1} \right) \quad (7)$$

where $x_{t}$ is the input at the current stage, $W_{i}$ is the weight matrix of the input, $h_{t-1}$ is the hidden state of the previous stage, $U_{i}$ is the weight matrix of the input associated with the hidden state, and $N_{t}$ is the candidate new information. The update of new information into the system is expressed as shown in Equation (8):

$$\tilde{C}_{t} = i_{t} \odot N_{t} \quad (8)$$
Therefore, updating the cell state is based on the function in Equation (9):
$$C_{t} = f_{t} \odot C_{t-1} + i_{t} \odot N_{t} \quad (9)$$
Output Gate:
Estimation of the output values employs Equations (10) and (11):
$$o_{t} = \sigma\left( W_{o}x_{t} + U_{o}h_{t-1} \right) \quad (10)$$

such that Equation (11) is:

$$h_{t} = o_{t} \odot \tanh\left( C_{t} \right) \quad (11)$$
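The gate computations of Equations (3)–(11) can be sketched in NumPy as follows; the weight shapes, bias terms, and candidate-state naming are assumptions based on the standard LSTM formulation rather than the study's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step following Equations (3)-(11).
    W, U, b are dicts of weight matrices / bias vectors for the
    forget (f), input (i), candidate (c) and output (o) gates."""
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate, Eq. (3)
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate, Eq. (6)
    n_t = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])   # candidate new information, Eq. (7)
    c_t = f_t * c_prev + i_t * n_t                           # cell-state update, Eqs. (8)-(9)
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate, Eq. (10)
    h_t = o_t * np.tanh(c_t)                                 # new hidden state, Eq. (11)
    return h_t, c_t
```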
2.2.3. You Only Look Once (YOLOv4)
To correctly detect the number of bolts and nuts and nut holes on steel truss bridges, the state-of-the-art object detection technique known as You Only Look Once (YOLOv4) was adopted [27]. It improves on the plain CNN in that it can be utilized for real-time object detection. The architecture of the YOLOv4 system used is shown in Figure 7.
Figure 7 shows that the source image undergoes feature extraction at the backbone stage using pooling operations. The neck collects feature maps from different stages of the backbone using a path aggregation network process. Finally, the dense and sparse predictions are used for object detection by locating the region where each object is present using bounding boxes.
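Loading a trained YOLOv4 network for detection could, for example, be done with OpenCV's DNN module as sketched below; the configuration, weights, and image file names are placeholders, not the study's actual artifacts:

```python
import cv2

# Placeholder file names; substitute the actual trained artifacts.
net = cv2.dnn.readNetFromDarknet("yolov4-bolts.cfg", "yolov4-bolts_best.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

class_names = ["b&n", "h"]  # bolts and nuts, nut holes

image = cv2.imread("bridge_image.jpg")
class_ids, confidences, boxes = model.detect(image, confThreshold=0.25, nmsThreshold=0.4)
for cid, conf, box in zip(class_ids, confidences, boxes):
    print(class_names[int(cid)], f"{float(conf):.2f}", box)  # label, score, (x, y, w, h)
```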
2.3. Model Development
The input images for detection and classification of bolts and nuts (class 0) and nut holes (class 1) on the steel truss bridge were analyzed using the CNN, LSTM, and YOLOv4 deep learning techniques. The dataset was divided into two sets of 21 and 10 samples, representing approximately 70% and 30% of the total samples, for model training and testing, respectively. In this study, Python, Google Colab, and the scikit-learn (sklearn) library were used to develop the investigation model. Sklearn was used to synchronize the algorithms in the framework, while Python was used to build the logic connecting the user interface and the algorithms.
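The 70%/30% split could be produced with scikit-learn's train_test_split, as in this sketch; the sample arrays and per-class counts are illustrative stand-ins for the study's 31 samples:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative stand-ins for the 31 pre-processed samples (8 x 8 grayscale blocks) and labels.
images = np.random.rand(31, 8, 8)
labels = np.array([0] * 16 + [1] * 15)   # 0 = bolts and nuts, 1 = nut holes (counts assumed)

x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.30, random_state=42, stratify=labels)

print(len(x_train), "training samples,", len(x_test), "test samples")  # roughly 70% / 30%
```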
2.4. Model Evaluation
In this study, the following parameters were used to evaluate the performance of the object detection models:
$$\text{Precision}_{i} = \frac{TP_{i}}{TP_{i} + FP_{i}} \quad (12)$$

where $TP_{i}$ and $FP_{i}$ are the true positive and false positive classifications of class i, respectively.

$$\text{Recall}_{i} = \frac{TP_{i}}{TP_{i} + FN_{i}} \quad (13)$$

where $FN$ denotes the false negative. The recall values are represented by the diagonal values of a normalized confusion matrix.

$$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \quad (14)$$

$$IoU = \frac{\text{Area of overlap}}{\text{Area of union}} \quad (15)$$

$$MAP = \frac{1}{C}\sum_{i=1}^{C} AP_{i} \quad (16)$$

where $AP_{i}$ is the average precision value for the i-th class and $C$ is the total number of classes under consideration.

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (17)$$
The F1 score measures the test accuracy based on the estimated precision and recall values. A test’s precision is the number of true positive results divided by the number of all positive results, including those not identified correctly, while the recall value is the number of true positive results divided by the number of all samples that should have been identified as positive. The average intersection over union (IoU) is the metric that quantifies the overlap between the detected and ground-truth regions, with values of 1 and 0 for perfect and worst-case detections, respectively. The mean average precision (MAP) is the average of the average precision values over all the classes under consideration; it compares the actual and detected classifications based on the recall and precision parameters. A confusion matrix describes the overall classification performance of the model. These performance evaluation criteria are suitable for identifying and tracking the scenarios on which the system is based [33].
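The metrics of Equations (12)–(17) can be computed with scikit-learn plus a small IoU helper, as in the following sketch with made-up labels and boxes:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Made-up ground-truth and predicted labels (0 = bolts and nuts, 1 = nut holes)
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1, 1, 0])
y_pred = np.array([0, 0, 1, 1, 1, 1, 0, 1, 0, 0])

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred, normalize="true"))

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1]) +
             (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

print("IoU:", iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.14
```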
2.5. Test of Hypothesis
The one-way analysis of variance (ANOVA) method was used for hypothesis testing. It is a statistical method used for determining the variations between groups of classified data through comparison of means of the categorical dataset [50].
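A one-way ANOVA of the models' accuracy scores could be run with SciPy as sketched below; the accuracy values shown are illustrative, not the study's raw data:

```python
from scipy import stats

# Illustrative per-image accuracy scores (fractions) for the three models
cnn_acc  = [0.96, 0.95, 0.97, 0.94, 0.96]
lstm_acc = [0.92, 0.94, 0.90, 0.93, 0.91]
yolo_acc = [0.96, 0.89, 0.99, 0.97, 1.00]

f_stat, p_value = stats.f_oneway(cnn_acc, lstm_acc, yolo_acc)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# Reject the null hypothesis only if p < 0.05 (5% level of significance).
```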
3. Results and Discussion
3.1. Results of the CNN
The CNN model achieved average detection and classification accuracy of 95.60%, with an F1 score of 1.00, a precision value of 1.00, and a 1.00 recall value. Figure 8 shows the training loss vs. epoch graph performance of the CNN model for the detection and classification of bolts and nuts and nut holes on the steel truss bridges.
Figure 8 revealed that the training and validation loss curves decreased together and both stabilized after a certain point, indicating that the model fitted the data well without overfitting or underfitting. This showed that object classification and detection by the CNN model for bolts and nuts and nut holes on bridge images were accurate. Moreover, Figure 8 showed that by 100 epochs, the training loss had reduced considerably and the model’s performance had become very stable.
Based on the optimization capacity of the CNN model, a lower training loss is expected to produce a better model. Figure 8 also showed that the loss tended towards zero at around 100 epochs and beyond, indicating that the proposed model performed well on the training and validation datasets, especially after 100 epochs, consistent with the results reported in [35,36,37] and with the findings of Ghiasi et al. [53]. On the other hand, Figure 9 shows the training accuracy vs. epoch graph of the CNN model.
Figure 9 revealed that above the 75th epoch, there were consistent results, with a high accuracy of approximately 1. To further show the strength of our proposed model, the classification and detection results were presented in a confusion matrix, as shown in Figure 10. The confusion matrix shows the classification performance of the bolts and nuts and nut holes on the bridge images considered for this experiment.
Figure 10 revealed that all the images identified as bolts and nuts (class 0) were classified correctly at 1.00 (100%) accuracy, while for the nut holes on the bridge images (class 1), all the images were also correctly classified at 1.00 (100%) accuracy. From the experiments, as shown in the confusion matrix, the image values from bolts and nuts and nut holes were excellently classified by the CNN, as reported in [33,35,36,37].
3.2. Results of the LSTM
The LSTM model achieved an average object detection and classification accuracy of 80.00%, with an F1 score of 0.8125, a precision of 0.5758, and a recall value of 0.7941. Figure 11 revealed that within the first epoch, the training loss decreased for both the training and validation sets and the model’s performance became relatively stable. At 2 epochs, the loss converged towards 0, especially for the validation set, showing that the LSTM performed reasonably well.
On the other hand, Figure 12 revealed that the training accuracy increased towards 0.924 (92.4%) as the number of epochs approached 2. As shown in Figure 12, the LSTM’s performance on the training dataset did not exceed that on the validation dataset. This relatively poor performance was attributed to the data size and the presence of outliers [22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,52].
To summarize the power of LSTM with respect to the classification/detection of bolts and nuts and holes on bridges, the confusion matrix is presented in Figure 13.
Figure 13 revealed that images identified as bolts and nuts (class 0) were classified correctly at 0.92 (92%) accuracy, while for the holes on the bridge images (class 1), the objects were also correctly classified, at 0.94 (94%) accuracy. From the experiments, the image values from bolts and nuts were misclassified at 0.08 (8%) and the image values from holes were misclassified at 0.06 (6%) using the LSTM algorithm.
3.3. Results of the YOLOv4 Technique
To effectively test this method, ten images of bolts and nuts and ten images of nut holes from steel truss bridges were extracted and used for the training of this technique. The MAP based on the saved weights that produced the best detection results for the bolts and nuts and nut holes during the training phase was as presented in Figure 14.
Figure 14 revealed that, as the number of iterations increased towards 1300, the loss reduced towards 0.0, indicating that the proposed method had converged by around 1300 iterations. A typical illustration of detection accuracies for the bolts and nuts (b&n) and nut holes (h) is shown in Figure 15.
The performance of the proposed technique on the first bridge image showed the results presented in Table 1.
For a detection count of 55 and a ground truth of 38, class 0 (h) achieved an average precision (AP) of 100%, with a true positive (TP) count of 2 and a false positive (FP) count of 2. For class 1 (h&b), the proposed model achieved an average precision (AP) of 90.26%, with TP = 33 and FP = 3. Based on a threshold of 0.25, a precision of 0.88, a recall of 0.92, and an F1 score of 0.90 were achieved for the detection of bolts and nuts, and holes. Moreover, with TP = 35, FP = 5, and FN = 3, an average IoU of 57.13% was achieved for the detection results. This produced a mean average precision (mAP@0.50) of 0.951277, or 95.13%, based on an IoU threshold of 50%.
To further justify the authenticity of the proposed method, a second bridge image was tested; the detections are shown in Figure 16 and the detection results are presented in Table 2.
Additionally, the proposed method was demonstrated in a video that had several bolts and nuts, and holes. Interestingly, the proposed method accurately detected the bolts and nuts and the holes on bridges based on the video under investigation, as shown in Figure 16.
Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 and Table 1, Table 2 and Table 3 clearly show that the proposed state-of-the-art YOLOv4 object detector is capable of detecting and classifying bolts and nuts and nut holes, not only in images but also in videos, with accuracies of up to approximately 90.0%. Comparing these results with [13,15,16], our proposed method outperformed theirs in most cases. From this analysis, the CNN model performed relatively better for the classification and detection of bolts and nuts and nut holes on steel truss bridges than the LSTM model. However, for detecting individual bolts and nuts and nut holes with their corresponding individual accuracies, YOLOv4 proved to be very effective, detecting with an approximate average accuracy of up to 90.0%.
3.4. Results of Analysis of Variance (ANOVA)
Using a one-way ANOVA for the test of the hypothesis which states that there is no statistically significant difference between the classification accuracies of the machine learning algorithms used at the 5% level of significance, detailed results of the analysis revealed that the Fcalculated (1.303) < Fcritical (3.285), which implies that there was no significant difference between classifications made by the models used, hence failing to reject the null hypothesis.
4. Conclusions
Based on the findings of this study, obtained from the performance analysis of the novel framework built using the CNN, LSTM, and YOLOv4 techniques for the classification and detection of nuts and bolts and nut holes on images and videos of steel truss bridges, it is concluded that the CNN model provided an efficient and reliable platform for object detection due to its ability to segregate and extract features from the source image in blocks for detailed analysis. This yielded a relatively high accuracy of 95.60%, compared with 93.00% for the LSTM and 76.50% for YOLOv4. The evaluation of other model performance parameters, such as the confusion matrices, F1 score, recall value, average IoU, and mean average precision (MAP), also showed very promising values. A statistical test of the hypothesis using ANOVA at the 5% level of significance revealed that there was no statistically significant difference between the object classification and detection results of the techniques adopted in this study.
It is therefore recommended that the model built using the CNN technique for object detection of nuts and bolts and nut holes on images of steel truss bridges be used as an effective SHM system for efficient, reliable, and sustainable infrastructural development. Additionally, it is recommended that for individual detection and presentation of bolts and nuts, and nut holes, the YOLOv4 algorithm could be utilized.
Author Contributions: Data curation, P.S.; resources, K.Y.; software, H.M.B.; supervision, Z.-J.L., X.-L.X. and X.-H.L.; writing—original draft, K.A. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Table 1. Accuracy of classification for image sample I.
Object | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | h: | h: | h: |
---|---|---|---|---|---|---|---|---|---|---|
Accuracy | 96% | 89% | 99% | 97% | 100% | 98% | 99% | 99% | 45% | 53% |
Note: b&n = bolts and nuts; h = nut holes.
Table 2. Accuracy of classification for image sample II.
Object | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | h: | h: |
---|---|---|---|---|---|---|---|---|---|
Accuracy | 97% | 96% | 95% | 95% | 95% | 95% | 97% | 99% | 91% |
Note: b&n = bolts and nuts; h = nut holes.
Table 3. Results of the proposed model based on the video sample.
Object | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | b&n: | b&n:
---|---|---|---|---|---|---|---|---|---|---|---|---
Accuracy | 99% | 99% | 98% | 97% | 94% | 93% | 90% | 90% | 90% | 89% | 89% | 89%

Object | b&n: | b&n: | b&n: | b&n: | b&n: | h: | b&n: | b&n: | b&n: | h: | b&n: | b&n:
---|---|---|---|---|---|---|---|---|---|---|---|---
Accuracy | 88% | 75% | 73% | 65% | 63% | 39% | 61% | 61% | 60% | 26% | 54% | 54%

Note: b&n = bolts and nuts; h = nut holes.
References
1. Badkar, M. Look at All the Major Chinese Bridges That Have Collapsed In The Recent Years. 2012; Available online: https://www.businessinsider.com/china-bridge-collapses-2012-8?r=US&IR=T (accessed on 6 August 2022).
2. Aitken, P. 11 of the Biggest Structural Failures in History. 2019; Available online: https://africa.businessinsider.com/strategy/11-of-the-biggest-structural-failures-in-history/4l9qrf7 (accessed on 6 August 2022).
3. Bridge Masters. 9 Common Reasons for Bridge Failures. 2017; Available online: https://bridgemastersinc.com/9-common-reasons-for-bridge-failures/ (accessed on 6 August 2022).
4. Zhou, M.; Yang, D.; Hassanein, M.F.; Zhang, J.; An, L. Failure analysis of high-strength bolts in steel truss bridges. Proc. Inst. Civ. Eng.-Civ. Eng.; 2017; 170, pp. 175-179. [DOI: https://dx.doi.org/10.1680/jcien.16.00037]
5. Dravid, S.; Yadav, J.; Kurre, S.K. Comparison of loosening Behavior of Bolted Joints using Plain and Spring Washers with full-threaded and Plain Shank Bolts. Mech. Based Des. Struct. Mach.; 2021; [DOI: https://dx.doi.org/10.1080/15397734.2021.2008258]
6. Novelo, X.E.A.; Chu, H.-Y. Application of vibration analysis using time-frequency analysis to detect and predict mechanical failure during the nut manufacturing process. Adv. Mech. Eng.; 2022; 14, [DOI: https://dx.doi.org/10.1177/16878132221082758]
7. Federal Highway Administration (FHWA). National Bridge Inspection Standards; Federal Register; FHWA: Washington, DC, USA, 2004; Volume 69.
8. Bull, L.; Worden, K.; Manson, G.; Dervilis, N. Active learning for semi-supervised structural health Monitoring. J. Sound Vib.; 2018; 437, pp. 373-388. [DOI: https://dx.doi.org/10.1016/j.jsv.2018.08.040]
9. Singh, P.; Ahmad, U.F.; Yadav, S. Structural Health Monitoring and Damage Detection through Machine Learning approaches. Sustain. Energy Syst. Innov. Perspect.; 2020; 220, 01096. [DOI: https://dx.doi.org/10.1051/e3sconf/202022001096]
10. Rogers, T.J.; Worden, K.; Fuentes, R.; Dervilis, N.; Tygesen, U.T.; Cross, E.J. A Semi-Supervised Bayesian Non-Parametric Approach to Damage Detection. Proceedings of the 9th European Workshop on Structural Health Monitoring; Manchester, UK, 10–13 July 2018.
11. Bull, L.A.; Gardner, P.; Rogers, T.J.; Cross, E.J.; Dervilis, N.; Worden, K. New Modes of Inference for Probabilistic SHM. European Workshop on Structural Health Monitoring. EWSHM 2020. Lecture Notes in Civil Engineering; Rizzo, P.; Milazzo, A. Springer: Cham, Switzerland, 2021; Volume 128, [DOI: https://dx.doi.org/10.1007/978-3-030-64908-1_39]
12. Alzubi, J.; Nayyar, A.; Kumar, A. Machine learning from theory to algorithms: An overview. Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2018; Volume 1142, 012012.
13. Pramanik, T. Computer Vision Based Recognition of Nut and Bolt System. Int. J. Sci. Prog. Res.; 2014; 4, 1.
14. Cha, Y.; Kisung, Y.; Choi, W. Vision-based detection of loosened bolts using the Hough transform and support vector machines. Autom. Constr.; 2016; 71, pp. 181-188. [DOI: https://dx.doi.org/10.1016/j.autcon.2016.06.008]
15. Dhenge, A.; Keskar, P.; Kuhikar, A.; Kawadkar, P.; Chaudhary, T.; Palasmode, P. Computer Vision Based Object Sorting & Fault Detection Using Ann. Int. J. Eng. Res. Electron. Commun. Eng.; 2015; 2, pp. 1-4.
16. Zhou, J.; Huo, L. Computer Vision-Based Detection for Delayed Fracture of Bolts in Steel Bridges. J. Sens.; 2021; 2021, 8325398. [DOI: https://dx.doi.org/10.1155/2021/8325398]
17. Alshboul, O.; Shehadeh, A.; Almasabha, G.; Almuflih, A.S. Extreme Gradient Boosting-Based Machine Learning Approach for Green Building Cost Prediction. Sustainability; 2022; 14, 6651. [DOI: https://dx.doi.org/10.3390/su14116651]
18. Bui-Ngoc, D.; Nguyen-Tran, H.; Nguyen-Ngoc, L.; Tran-Ngoc, H.; Bui-Tien, T.; Tran-Viet, H. Damage detection in structural health monitoring using hybrid convolution neural network and recurrent neural network. Frat. Integrità Strutt.; 2022; 59, pp. 461-470.
19. Amjoud, A.B.; Amrouch, M. Convolutional Neural Networks Backbones for Object Detection. Image and Signal Processing. ICISP 2020. Lecture Notes in Computer Science; El Moataz, A.; Mammass, D.; Mansouri, A.; Nouboud, F. Springer: Cham, Switzerland, 2020; Volume 12119, pp. 282-289. [DOI: https://dx.doi.org/10.1007/978-3-030-51935-3_30]
20. Svendsen, B.T.; Frøseth, G.T.; Øiseth, O.; Rønnquist, A. A data-based structural health monitoring approach for damage detection in steel bridges using experimental data. J. Civ. Struct. Health Monit.; 2022; 12, pp. 101-115. [DOI: https://dx.doi.org/10.1007/s13349-021-00530-8]
21. Flah, M.; Nunez, I.; Chaabene, W.B.; Nehdi, M.L. Machine learning algorithms in civil structural health monitoring: A systematic review. Arch. Comput. Methods Eng.; 2020; 28, pp. 2621-2643. [DOI: https://dx.doi.org/10.1007/s11831-020-09471-9]
22. Brown, S. Machine Learning Explained. 2022; Available online: https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained (accessed on 6 August 2022).
23. Pedamkar, P. Types of Machine Learning. 2020; Available online: https://www.educba.com/types-of-machine-learning/ (accessed on 6 August 2022).
24. Bull, L.A.; Worden, K.; Dervilis, N. Towards semi-supervised and probabilistic classification in structural health monitoring. Mech. Syst. Signal Process.; 2020; 140, 106653. [DOI: https://dx.doi.org/10.1016/j.ymssp.2020.106653]
25. Li, Y.-F.; Zhou, Z.-H. Towards making unlabelled data never hurt. IEEE Trans. Pattern Anal. Mach. Intell.; 2015; 37, pp. 175-188.
26. Kumar, P.R.; Manash, E.B.K. Deep learning: A branch of machine learning. International Conference on Computer Vision and Machine Learning. IOP Conf. Ser. J. Phys.; 2019; 1228, 012045. [DOI: https://dx.doi.org/10.1088/1742-6596/1228/1/012045]
27. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv; 2020; arXiv: 2004.10934
28. Azimi, M.; Pekcan, G. Structural health monitoring using extremely compressed data through deep learning. Comput.-Aided Civ. Infrastruct. Eng.; 2019; 35, pp. 597-614. [DOI: https://dx.doi.org/10.1111/mice.12517]
29. Nath, N.D.; Behzadan, A.H. Deep Convolutional Networks for Construction Object Detection Under Different Visual Conditions. Front. Built Environ.; 2020; 6, 97. [DOI: https://dx.doi.org/10.3389/fbuil.2020.00097]
30. Dong, A.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell.; 2015; 38, pp. 295-307. [DOI: https://dx.doi.org/10.1109/TPAMI.2015.2439281]
31. Azimi, M.; Eslamlou, A.D.; Pekcan, G. Data-Driven Structural Health Monitoring and Damage Detection through Deep Learning: State-of-the-Art Review. Sens. Struct. Health Monit. Seism. Prot.; 2020; 20, 2778. [DOI: https://dx.doi.org/10.3390/s20102778] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32414205]
32. Bao, Y.; Li, H. Machine learning paradigm for structural health monitoring. Struct. Health Monit.; 2020; 20, pp. 1353-1372. [DOI: https://dx.doi.org/10.1177/1475921720972416]
33. Kim, H.; Ahn, E.; Shin, M.; Sim, S. Crack and Non-crack Classification from Concrete Surface Images Using Machine Learning. Struct. Health Monit.; 2018; 18, pp. 725-738. [DOI: https://dx.doi.org/10.1177/1475921718768747]
34. Khodabandehlou, H.; Pekcan, G.; Fadali, M.S. Vibration-based structural condition assessment using convolution neural networks. Struct. Control Health Monit.; 2018; 26, e2308. [DOI: https://dx.doi.org/10.1002/stc.2308]
35. Won, J.; Park, J.W.; Jang, S.; Jin, K.; Kim, Y. Automated Structural Damage Identification Using Data Normalization and 1-Dimensional Convolutional Neural Network. Appl. Sci.; 2021; 11, 2610. [DOI: https://dx.doi.org/10.3390/app11062610]
36. Duan, Y.; Chen, Q.; Zhang, H.; Yun, C.B.; Wu, S.; Zhu, Q. CNN-based damage identification method of tied-arch Bridge using spatial-spectral information. Smart Struct. Syst. Int. J.; 2019; 23, pp. 507-520.
37. Tang, Z.; Chen, Z.; Bao, Y.; Li, H. Convolutional neural network-based data anomaly detection method using multiple information for structural health monitoring. Struct. Control. Health Monit.; 2019; 26, e2296. [DOI: https://dx.doi.org/10.1002/stc.2296]
38. Park, J.; Kim, T.; Kim, J. Image-based bolt-loosening detection technique of bolt joint in steel bridges. Proceedings of the 6th International Conference on Advances in Experimental Structural Engineering (6AESE); Champaign, IL, USA, 1–2 August 2015.
39. Cha, Y.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Buyukozturk, O. Autonomous Structural Visual Inspection Using Region-Based Deep Learning for Detecting Multiple Damage Types. Comput.-Aided Civ. Infrastruct. Eng.; 2017; 33, pp. 731-747. [DOI: https://dx.doi.org/10.1111/mice.12334]
40. He, H. Automatic Assembly of Bolts and Nuts Based on Machine Vision Recognition. Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 2113, 012033.
41. Zhang, Y.; Sun, X.; Loh, K.J.; Su, W.; Xue, Z.; Zhao, X. Autonomous bolt loosening detection using deep learning. Struct. Health Monit.; 2020; 19, pp. 105-122. [DOI: https://dx.doi.org/10.1177/1475921719837509]
42. Dervilis, N.; Papatheou, E.; Antoniadou, I.; Cross, E.J.; Worden, K. On the usage of active learning for SHM. Proceedings of the ISMA2016; Leuven, Belgium, 19–21 September 2016.
43. Kim, B.; Cho, S. Automated vision-based detection of cracks on concrete surfaces using a deep learning technique. Sensors; 2018; 18, 3452. [DOI: https://dx.doi.org/10.3390/s18103452]
44. Cha, Y.; Choi, W. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. Comput.-Aided Civ. Infrastruct. Eng.; 2017; 32, pp. 361-378. [DOI: https://dx.doi.org/10.1111/mice.12263]
45. Bergs, T.; Holst, C.; Gupta, P.; Augspurger, T. Digital image processing with deep learning for automated cutting tool wear detection. Procedia Manuf.; 2020; 48, pp. 947-958. [DOI: https://dx.doi.org/10.1016/j.promfg.2020.05.134]
46. Sen, D.; Aghazadeh, A.; Mousavi, A.; Nagarajaiah, S.; Baraniuk, R.; Dabak, A. Data-driven semi-supervised and supervised learning algorithms for health monitoring of pipes. Mech. Syst. Signal Process.; 2019; 131, pp. 524-537. [DOI: https://dx.doi.org/10.1016/j.ymssp.2019.06.003]
47. Qiao, W.; Liu, B.Q.; Wu, X.; Li, G. Computer Vision-Based Bridge Damage Detection Using Deep Convolutional Networks with Expectation Maximum Attention Module. Sensors; 2021; 21, 824. [DOI: https://dx.doi.org/10.3390/s21030824]
48. Yu, W.; Nishio, M. Multilevel structural Components detection and segmentation towards computer vision-based inspection. Sensors; 2022; 22, 3502. [DOI: https://dx.doi.org/10.3390/s22093502] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35591192]
49. Zhang, C.; Chang, C.; Jamshidi, M. Bridge damage detection using single-stage detector and field inspection images. arXiv; 2018; arXiv: 1812.10590
50. Heiman, G.W. Basic Statistics for Behavioural Sciences; 6th ed. Cengage Learning: Belmont, NS, Canada, 2011.
51. Li, B.; Wang, K.C.P.; Zhang, A.; Yang, E.; Wang, G. Automatic Classification of Pavement Crack using deep convolutional neural network. Int. J. Pavement Eng.; 2018; 21, pp. 457-463. [DOI: https://dx.doi.org/10.1080/10298436.2018.1485917]
52. Quqa, S.; Martakis, P.; Movsessian, A.; Pai, S.; Reuland, Y.; Chatzi, E. Two-step approach for fatigue crack detection in steel bridges using convolutional neural networks. J. Civ. Struct. Health Monit.; 2022; 12, pp. 127-140. [DOI: https://dx.doi.org/10.1007/s13349-021-00537-1]
53. Ghiasi, A.; Moghaddam, M.K.; Ng, C.T.; Sheikh, A.H.; Shi, J.Q. Damage classification of in-service steel railway bridges using a novel vibration-based convolutional neural network. Eng. Struct.; 2022; 264, 114474. [DOI: https://dx.doi.org/10.1016/j.engstruct.2022.114474]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The early detection of bolt and nut loss on bridges can help avert bridge collapse. The aim of this research is to develop a novel framework for the detection of bolt–nut losses in steel bridges using deep learning techniques. The objectives include: to design a framework for the detection of nuts and bolts and nut holes using deep learning techniques, to implement the designed framework using Python programming, and to evaluate the performance of the designed framework. Convolutional neural network (CNN) and long short-term memory (LSTM) techniques were employed using 8 × 8 blocks of bridge images as inputs. Based on the proposed models, which considered the CNN in its ordinary form and combined it with the LSTM and You Only Look Once (YOLOv4) algorithms, the CNN achieved an average classification accuracy of 95.60% and the LSTM achieved an accuracy of 93.00% on the sampled images. The YOLOv4 algorithm, a modified CNN-based detector with a single forward propagation, achieved a detection accuracy of 76.5%. The relatively high detection accuracy recorded by the CNN is attributed to its stepwise feature extraction by convolution and pooling. However, a statistical test of the hypothesis at the 5.0% level of significance revealed that there was no statistically significant difference in object detection and classification among the models used in the built framework. Therefore, the use of the CNN model is recommended for the detection of nuts and bolts and nut holes on steel truss bridges for effective structural health monitoring (SHM), based on its high detection accuracy and speed.
Author Affiliations
1 College of Civil Engineering, Nanjing Tech University, Nanjing 211800, China
2 Suzhou Port and Shipping Business Development Center, Suzhou 215004, China
3 School of Information and Communication Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China