1. Introduction
According to the Global Forest Resources Assessment released by the FAO in 2020, there are 4.06 billion hectares of forest in the world and, since 2015, an average of 10 million hectares of forestland has disappeared every year. Annual outbreaks of forest pests damage about 35 million hectares of forest. Some insects eat leaves and branches, but trunk-boring pests such as the red turpentine beetle live inside the trunk and cannot be found in a timely manner, causing tree deaths and economic losses. Therefore, it is necessary to detect pest communities in the xylem of living trees in order to prevent and control pests and diseases in time and protect forest resources.
Mainstream methods for detecting internal defects in living tree trunks include stress wave, ultrasonic, and computed tomography (CT) scanning [1,2,3]. However, most of these methods have drawbacks [4,5]. For example, the stress wave method involves nailing sensors into the trunk of the tree [6]; this damages the tree and cannot be considered non-invasive. The ultrasonic inspection process is susceptible to interference from the external environment, and coupling agents may pollute the environment [7]. CT equipment is costly and can expose researchers to radiation hazards [8,9]. Although nuclear magnetic resonance imaging has proven to be one of the most accurate and powerful characterization techniques in applications such as the diagnosis of wounded branch structures in beech trees [10], its high cost makes it unfeasible in most applications.
At present, electromagnetic inverse scattering is a good method for non-invasive testing. Electromagnetic inverse scattering (EMIS) [11] refers to the use of measured field data to invert the electrical property parameters of a scatterer in the detection region. A large body of research has accumulated on the EMIS problem, which can be broadly classified into two categories. The first linearizes the nonlinear problem and is represented by the Born and Rytov approximations [12,13], which produce very good imaging results for low dielectric constants. The second is the nonlinear approach [14], which uses an iterative idea to improve inversion accuracy through continuous iteration; the main methods are Contrast Source Inversion (CSI) and the Subspace Optimization Method (SOM) [15,16]. Although these nonlinear iterative methods improve accuracy compared with linear approximation methods, they are sensitive to initial values and converge slowly [17]. See Table 1 for details.
Although electromagnetic inverse scattering has a wide range of applications in engineering measurement and medicine [18,19], when it is used to inspect the interior of trees we find that, for larger trees, the relatively smaller proportion of the pest community area makes inversion imaging more difficult. This is because electromagnetic inverse scattering is nonlinear, so the program easily converges to an erroneous solution far from the accurate one. Moreover, the scattering equation is highly ill-conditioned; that is, a small error in the input data may cause a great change in the output results. Traditional methods, whether iterative or non-iterative, do not have a very strong data processing capacity, so retrieving a model with a small proportion of pest communities is difficult.
In this paper, the Deep Convolutional network is improved by studying the Contrast Source Inversion algorithm, and a Super-Resolution network is subsequently introduced. A Joint-Driven Super-Resolution algorithm with low computational load and high resolution is proposed to handle the small proportion of pest communities in living trees.
2. Electromagnetic Inverse Scattering Formulation
The algorithm is developed from a two-dimensional (2-D) perspective [20], and Figure 1 shows the geometry of our EMIS setup, constructed as a living tree–pest model. Our group starts from Maxwell's equations [21], constructing the theoretical scattering equations to obtain the parameter-optimization framework. To solve the integral equations, we investigate the two major types of inversion algorithms, model-driven and data-driven, and add a Super-Resolution network to address issues such as noise clutter during imaging, forming a model-driven deep learning super-resolution-based inversion algorithm [22,23,24,25].
In this paper, we use a two-dimensional plane wave as the incident wave $E^{i}$ to probe the unknown domain, denoted as $D$, and measure the scattered field, denoted as $E^{s}$, at receivers on a surface $S$ outside $D$. The forward propagation of electromagnetic waves is described by two equations known as the Lippmann-Schwinger equations [26]. The first is the state equation, which describes the interaction of the wave with the scatterer:

$$E(\mathbf{r}) = E^{i}(\mathbf{r}) + i\omega\mu_{0}\int_{D} G(\mathbf{r},\mathbf{r}^{\prime})\,J(\mathbf{r}^{\prime})\,d\mathbf{r}^{\prime}, \quad \mathbf{r}\in D \quad (1)$$

where $\omega$ is the angular frequency, $\mu_{0}$ is the magnetic permeability of air, and $G(\mathbf{r},\mathbf{r}^{\prime})$ denotes the Green's function. For a two-dimensional plane wave, $G$ is defined as

$$G(\mathbf{r},\mathbf{r}^{\prime}) = -\frac{i}{4}H_{0}^{(2)}\left(k_{0}\left|\mathbf{r}-\mathbf{r}^{\prime}\right|\right) \quad (2)$$

where $H_{0}^{(2)}$ denotes the Hankel function of the second kind [27], $k_{0}$ denotes the wavenumber in free space, $\mathbf{r}^{\prime}$ is a source point in the object domain $D$, $\mathbf{r}$ denotes the position vector at the receiver, $E$ denotes the total field, expressed as $E = E^{i} + E^{s}$, and $\varepsilon_{r}$ is the relative permittivity of the target scatterer. The second equation, known as the data equation, describes the equivalent-current radiation process for the scattered field:

$$E^{s}(\mathbf{r}) = i\omega\mu_{0}\int_{D} G(\mathbf{r},\mathbf{r}^{\prime})\,J(\mathbf{r}^{\prime})\,d\mathbf{r}^{\prime}, \quad \mathbf{r}\in S \quad (3)$$

In (3), the physical meaning of $J$ is the induced contrast current density. We define the normalized contrast current density as $\bar{J}(\mathbf{r}) = \chi(\mathbf{r})E(\mathbf{r})$, express the contrast as $\chi(\mathbf{r}) = \varepsilon_{r}(\mathbf{r}) - 1$, and introduce the operators $\mathcal{G}_{D}$ and $\mathcal{G}_{S}$:

$$\mathcal{G}_{D}(\cdot) = k_{0}^{2}\int_{D} G(\mathbf{r},\mathbf{r}^{\prime})(\cdot)\,d\mathbf{r}^{\prime},\ \mathbf{r}\in D; \qquad \mathcal{G}_{S}(\cdot) = k_{0}^{2}\int_{D} G(\mathbf{r},\mathbf{r}^{\prime})(\cdot)\,d\mathbf{r}^{\prime},\ \mathbf{r}\in S \quad (4)$$

The governing equations can then be written in two equivalent forms.

The first is called the field-type form, in which the electric field satisfies two equations:

$$E = E^{i} + \mathcal{G}_{D}(\chi E) \quad (5)$$

$$E^{s} = \mathcal{G}_{S}(\chi E) \quad (6)$$

The second is called the source-type form, in which the contrast current satisfies two equations:

$$\bar{J} = \chi E^{i} + \chi\,\mathcal{G}_{D}(\bar{J}) \quad (7)$$

$$E^{s} = \mathcal{G}_{S}(\bar{J}) \quad (8)$$
The electromagnetic inverse scattering imaging algorithm is based on the above two types of equations to solve for the target cross-section parameters.
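As a concrete illustration of the data equation (3) and the Green's function (2), the following sketch simulates the scattered field at external receivers under the Born approximation (E ≈ E^i). The grid size, frequency, receiver radius, and overall scaling constant are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.special import hankel2

c0 = 3e8
freq = 400e6                       # within the 200-700 MHz band used in the paper
k0 = 2 * np.pi * freq / c0         # free-space wavenumber

# discretize a 2 m x 2 m domain D into 32 x 32 cells
n = 32
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
cell_area = (xs[1] - xs[0]) ** 2

# contrast chi = eps_r - 1: xylem (eps_r = 7) with a small pest region (eps_r = 60)
chi = np.zeros((n, n), dtype=complex)
chi[X**2 + Y**2 < 0.6**2] = 7 - 1
chi[(X - 0.2)**2 + Y**2 < 0.06**2] = 60 - 1

# incident plane wave travelling in +x
E_inc = np.exp(-1j * k0 * X)

# 32 receivers on a circle of radius 1.5 m outside D
n_rx = 32
phi = 2 * np.pi * np.arange(n_rx) / n_rx
rx = 1.5 * np.stack([np.cos(phi), np.sin(phi)], axis=1)

# 2-D Green's function with the second-kind Hankel function, Eq. (2)
src = np.stack([X.ravel(), Y.ravel()], axis=1)
dist = np.linalg.norm(rx[:, None, :] - src[None, :, :], axis=2)
G = (-1j / 4) * hankel2(0, k0 * dist)

# Born-approximated contrast source and scattered field at the receivers
J = (chi * E_inc).ravel()
E_sct = k0**2 * G @ J * cell_area
```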
3. Joint-Driven Algorithm
To overcome the nonlinearity and severe ill-posedness of inverse scattering and to solve the problem that the pest community accounts for a small proportion of the cross-sectional area of living trees, a Joint-Driven algorithm is proposed by combining the data-driven algorithm with the model-driven algorithm. The advantage of the model-driven algorithm lies in accurate mathematical modeling based on the mechanism information of the problem itself, while its disadvantage is excessive dependence on the initial value. The data-driven algorithm relies on extracting feature information from the data to exploit the advantages of big data, but it can only solve problems by continually learning the relationships within the data. Therefore, this paper introduces the model-driven mechanism into the data-driven algorithm, integrating the advantages of both.
Because the non-destructive testing of trees using electromagnetic waves is almost always carried out in the field, the data we obtain contain noise. To improve the resolution, increase the clarity, and optimize the inversion results, we add a super-resolution network after the inversion and train it in a targeted way so that it matches the optimized inversion results. We call this the Joint-Driven Super-Resolution algorithm.
Neural networks are typical data-driven algorithms, with Deep Convolutional neural networks being the most capable of extracting and classifying data features [28]. The convolutional neural network used in this paper is shown in Figure 2; it is a typical ConvNet, consisting of four types of layers: input layer, convolutional layer, pooling layer, and fully connected layer.
The scattered-field data are fed into the network as a 32 × 32 matrix. The receptive field size in the network is 5 × 5, and the feature data are downsampled in the pooling layer with a 2 × 2 pooling operation. Finally, a 100 × 100 regression output for imaging is produced by the fully connected layer.
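The layer-size arithmetic implied above (32 × 32 input, 5 × 5 receptive fields, 2 × 2 pooling) can be checked with a small sketch; the number of conv/pool stages is our own assumption, since the exact layer count is not stated here.

```python
# Output-size bookkeeping for the ConvNet described above.
def conv2d_out(size, kernel, stride=1, pad=0):
    # standard convolution output-size formula
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2):
    # non-overlapping pooling halves the spatial size for kernel=2
    return size // kernel

s = 32                      # 32 x 32 scattered-field input
s = conv2d_out(s, 5)        # 5 x 5 convolution -> 28 x 28
s = pool_out(s)             # 2 x 2 pooling     -> 14 x 14
s = conv2d_out(s, 5)        # 5 x 5 convolution -> 10 x 10
s = pool_out(s)             # 2 x 2 pooling     -> 5 x 5
fc_out = 100 * 100          # fully connected regression output, reshaped to 100 x 100
```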
3.1. Uniting the Contrast Source Inversion with a Deep Convolutional Network
To prevent overfitting in the process of neural network training (that is, the situation in which the network achieves good detection accuracy on the training data but much lower accuracy on the test set), it is necessary to regularize the cost function. In this paper, L2 regularization is adopted to improve the cost function by adding the term

$$\frac{\lambda}{2n}\sum_{w} w^{2} \quad (9)$$

where $\lambda$ is the regularization parameter and $n$ is the number of training samples. If (9) is the regularization term and the cost function before improvement is defined as $C_{0}$, then the regularized cost function can be written as

$$C = C_{0} + \frac{\lambda}{2n}\sum_{w} w^{2} \quad (10)$$

The partial derivatives of $C$ with respect to the weights and biases in the network are

$$\frac{\partial C}{\partial w} = \frac{\partial C_{0}}{\partial w} + \frac{\lambda}{n}w \quad (11)$$

$$\frac{\partial C}{\partial b} = \frac{\partial C_{0}}{\partial b} \quad (12)$$

Equation (12) shows that the partial derivative of the cost function with respect to the bias does not change after regularization, so the gradient descent rule for the bias is unchanged, while the weight learning rule becomes

$$w \rightarrow \left(1 - \frac{\eta\lambda}{n}\right)w - \eta\frac{\partial C_{0}}{\partial w} \quad (13)$$

In the learning process, the weight is rescaled by the factor $1 - \eta\lambda/n$ at each step.
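A minimal numeric sketch of the update rule (13); the learning rate, regularization strength, training-set size, weights, and gradient below are illustrative values:

```python
import numpy as np

eta, lam, n = 0.1, 5.0, 1000            # learning rate, L2 strength, training-set size
w = np.array([1.0, -2.0, 0.5])          # current weights (illustrative)
grad_C0 = np.array([0.2, -0.1, 0.0])    # gradient of the unregularized cost

# Eq. (13): the weight first shrinks by (1 - eta*lam/n), then takes the usual step
w_new = (1 - eta * lam / n) * w - eta * grad_C0
```

Note that a weight with zero gradient (the third entry) still decays toward zero, which is exactly the regularizing effect described above.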
In the cost function, the learning rate $\eta$ and the parameter $\lambda$ are hyperparameters. Although they are not directly related to the structure of the Deep Convolutional neural network, the selection of $\eta$ and $\lambda$ strongly affects the training speed and the imaging quality.
To speed up the training of the Deep Convolutional neural network and optimize the selection of hyperparameters, the Contrast Source Inversion is combined with the cost function of the Deep Convolutional neural network. The core of the Contrast Source Inversion is to minimize an objective function [29,30] of the form

$$F(\bar{J}_{p},\chi) = \frac{\sum_{p}\left\|f_{p} - \mathcal{G}_{S}(\bar{J}_{p})\right\|_{S}^{2}}{\sum_{p}\left\|f_{p}\right\|_{S}^{2}} + \frac{\sum_{p}\left\|\chi E_{p}^{i} - \bar{J}_{p} + \chi\,\mathcal{G}_{D}(\bar{J}_{p})\right\|_{D}^{2}}{\sum_{p}\left\|\chi E_{p}^{i}\right\|_{D}^{2}} \quad (14)$$

where $f_{p}$ is the scattered field measured for the $p$-th incidence. The aim of the Deep Convolutional neural network is to acquire appropriate weights and biases by minimizing the cost function, so the normalization of the Contrast Source Inversion is introduced into the Deep Convolutional neural network and the value of the hyperparameter is determined by the mechanism information of the Contrast Source Inversion. Formally, the regularization coefficient is changed from a constant to a dynamic real value determined by the Contrast Source Inversion, forming a Jointly-Driven deep learning network. Therefore, the cost function of the model-driven deep learning network is:

$$C = C_{0} + \frac{\lambda}{2n\sum_{k=1}^{K}\left\|f_{k}\right\|_{2}^{2}}\sum_{w} w^{2} \quad (15)$$

In (15), $n$ represents the number of groups of data and $\sum_{k=1}^{K}\left\|f_{k}\right\|_{2}^{2}$ is the sum of the squared 2-norms of the scattered-field data obtained by the $K$ receivers in each group. Although $\lambda$ is known, $\sum_{k=1}^{K}\left\|f_{k}\right\|_{2}^{2}$ differs between groups of training data, so the effective coefficient is a dynamic real value; $\lambda$ itself is a constant weighting coefficient. In addition to preventing the Deep Convolutional neural network from overfitting, this term also prevents abnormal training inputs from causing mutations that would degrade the training of the network.
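The per-group coefficient in (15) could be computed as in the sketch below; the field values are random illustrative data and the exact normalization constant is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 32              # number of receivers per data group
lam = 0.5           # constant weighting coefficient (illustrative)
n_groups = 4

coeffs = []
for _ in range(n_groups):
    # one group's scattered-field measurements at K receivers (made-up data)
    f = rng.standard_normal(K) + 1j * rng.standard_normal(K)
    energy = np.sum(np.abs(f) ** 2)      # sum of squared 2-norms over the K receivers
    coeffs.append(lam / (2 * energy))    # dynamic real value for this group
```

Because the scattered-field energy differs per group, each group receives a different effective regularization strength, which is the "dynamic real value" described above.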
3.2. Analysis and Optimization of Weights
To speed up training, the weights and biases are optimized. Traditional convolutional networks initialize them with Gaussian random variables drawn from a normalized Gaussian distribution with mean 0 and standard deviation 1 [31]. Since $z = \sum_{j}\omega_{j}x_{j} + b$, $z$ itself obeys a Gaussian distribution after initializing $\omega$ and $b$ with Gaussian random variables, as shown in Figure 3.

As can be seen in Figure 3, the slope of the curve is small and overly large input values leave the output almost saturated. This characteristic makes the weight-updating process very slow. To improve the learning speed of the network, this paper sets the mean of the Gaussian distribution to 0 and the standard deviation to $1/\sqrt{n_{in}}$, where $n_{in}$ is the number of input neurons. The improved distribution is shown in Figure 4.
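A quick check of why the scaled initialization helps: sampling pre-activations $z = \sum_j \omega_j x_j$ under both schemes (the all-ones input is an illustrative assumption) shows the naive spread growing like $\sqrt{n_{in}}$ while the scaled one stays near unit scale:

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, trials = 1000, 4000
x = np.ones(n_in)                  # illustrative input vector

# naive init: weights ~ N(0, 1)  ->  std(z) ~ sqrt(n_in) ~ 31.6 (saturating regime)
z_naive = rng.normal(0.0, 1.0, (trials, n_in)) @ x

# scaled init: weights ~ N(0, 1/n_in)  ->  std(z) ~ 1 (well inside the active regime)
z_scaled = rng.normal(0.0, 1.0 / np.sqrt(n_in), (trials, n_in)) @ x
```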
3.3. Super-Resolution Network-Assisted Imaging
When the proportion of the pest community in the cross-section of the tree is less than 10%, the inversion results are noisy, and as that proportion shrinks further, the noise in the inversion image becomes denser. To solve this problem, an image optimization and reconstruction technique is introduced: Real-ESRGAN [32]. The Super-Resolution algorithm can not only improve the resolution of the inversion image but also optimize the image quality and denoise the image. Real-ESRGAN uses a U-Net discriminator with spectral normalization [33,34,35], which can identify where the noise is and where the pest community is located. Its network structure is shown in Figure 5.

The U-Net discriminator focuses not only on the image as a whole but also on the characteristics of each part, the most important for inversion imaging being the shape and position of the scatterer.
3.4. Evaluation Indicators
The results are evaluated using the Mean Square Error (MSE) and the Image Intersection Over Union (IOU) as criteria for the accuracy of a single inversion image [36], where IOU is defined as

$$\mathrm{IOU} = \frac{\left|S_{A}\cap S_{B}\right|}{\left|S_{A}\cup S_{B}\right|} \quad (16)$$

where $S_{A}$ is the area of the internal pest region in the inversion diagram and $S_{B}$ is the area of the internal pest community set in the living tree–borer model. A single pest detection is judged to be accurate when the IOU exceeds a set threshold; ideally, $\mathrm{IOU} = 1$. To test the performance of the algorithm, this paper sets up multiple groups of experimental data and calculates the accuracy rate of the algorithm in different detection environments. The accuracy rate formula is as follows:

$$\mathrm{Acc} = \frac{N_{acc}}{N_{total}} \quad (17)$$

where $\mathrm{Acc}$ represents the detection accuracy of the algorithm over all test data, $N_{acc}$ is the number of test results judged to be accurate, and $N_{total}$ is the total number of test data.
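Formulas (16) and (17) can be sketched directly on binary masks; the masks, the acceptance threshold, and the test scores below are illustrative assumptions:

```python
import numpy as np

def iou(pred, truth):
    # Eq. (16): intersection area over union area of two binary masks
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

truth = np.zeros((100, 100), dtype=bool)
truth[40:60, 40:60] = True             # ground-truth pest region (20 x 20 pixels)
pred = np.zeros((100, 100), dtype=bool)
pred[42:62, 40:60] = True              # inverted region, shifted by 2 pixels

score = iou(pred, truth)               # overlap 18*20 over union 22*20

# Eq. (17): fraction of test inversions whose IOU clears a threshold
results = [0.95, 0.85, 0.99, 0.91]     # IOU of several test inversions (made up)
threshold = 0.9                        # hypothetical acceptance threshold
accuracy = sum(r >= threshold for r in results) / len(results)
```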
4. Experiments and Results Analysis
4.1. Experiment Condition
Considering that the detection of trees is mostly carried out outdoors, in order to better simulate the noise interference encountered in field detection, a method of artificially adding interference factors was proposed. The noise construction equation is shown below:

$$\tilde{E}^{s} = E^{s} + N \quad (18)$$

where $\tilde{E}^{s}$ is our constructed noisy scattered-field data, $E^{s}$ is the $N_{t}\times N_{r}$ matrix of scattered-field data received by the receivers, $N_{t}$ is the number of transmitters, $N_{r}$ is the number of receivers, and $N$ is the noise matrix, structured as follows:

$$N = \sigma A \quad (19)$$

where $\sigma$ is the noise factor, whose strength can be changed by changing the value of $\sigma$, and $A$ is a random $N_{t}\times N_{r}$ matrix. Our final input scattered-field data for training is

$$\tilde{E}^{s} = E^{s} + \sigma A \quad (20)$$
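A sketch of this noise construction; the 32 × 32 size and the 0.2 noise factor follow Table 2, while scaling the random matrix by the mean field magnitude (so the factor acts as a relative level) is our own assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
n_tx, n_rx = 32, 32   # transmitters x receivers, as in Table 2

# clean scattered-field matrix (made-up complex data standing in for simulation output)
E_sct = rng.standard_normal((n_tx, n_rx)) + 1j * rng.standard_normal((n_tx, n_rx))

sigma = 0.2                                    # noise factor from Table 2
A = rng.standard_normal((n_tx, n_rx))          # random noise matrix
N = sigma * np.abs(E_sct).mean() * A           # assumed relative scaling
E_train = E_sct + N                            # noisy training input
```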
To better describe the ratio of the pest community size to the cross-sectional size of the wood, we call it the Inversion Ratio, with the formula

$$\gamma = \frac{r}{R} \quad (21)$$

where $r$ represents the radius of the pest community in meters and $R$ is the radius of the living tree in meters. This section uses simulated electromagnetic inverse scattering data to validate our proposed method, with the simulation model parameters set as shown in Table 2 below.
Our group used finite element simulations to obtain 18,000 sets of scattering data in each of the two states. To ensure generalization, we randomly selected 15,000 sets of data to train our proposed combination of model-driven and data-driven methods, and another 500 sets were randomly selected from the remaining 3000 for testing.
Within the 2 m × 2 m square area, white indicates air; brown is the two-dimensional cross-section of the xylem, a circle of radius 0.6 m with a relative permittivity of 7; and black indicates an internal trunk pest community with a relative permittivity of 60.
4.2. Imaging Experiment of Insect Hole in Living Trees
4.2.1. Relationship between Accuracy and Model Radius
Using the Deep Convolutional algorithm, experiments were carried out on four models with different sizes but the same Inversion Ratio. The results are shown in Figure 6.
The accuracy of the inversion results was evaluated according to Formula (16). The results are shown in Table 3, which demonstrates that, while the Inversion Ratio is maintained, changing the radius does not affect the accuracy value.
4.2.2. Detection of Pest Communities with Different Inversion Ratios
The Contrast Source Inversion, Deep Convolutional algorithm, Joint-Driven algorithm, and Joint-Driven Super-Resolution algorithm were used to detect pest communities with different Inversion Ratios. The results are shown in Figure 7, Figure 8 and Figure 9. For each of the four algorithms, the IOU of the six inversion results is calculated according to Formula (16) to evaluate the accuracy of the inversion, and the values are shown in Table 4.
4.3. Discussion of Experimental Results
When the Inversion Ratio is 1:60, the Contrast Source Inversion cannot invert a regular shape, so the IOU is recorded as "error". When the Inversion Ratio was 1:30 to 1:50, the pest community size retrieved by the Contrast Source Inversion algorithm remained the same. At Inversion Ratios of 1:10 and 1:20, the Contrast Source Inversion imaging produces a certain "halo"; with a threshold of 80% of the highest relative dielectric constant in the image, only the red part of the image can be observed.
Regarding the Deep Convolutional algorithm: as the Inversion Ratio decreases, the inverted pest community size remains unchanged; the algorithm is insensitive to size changes and is therefore prone to error.
When the Joint-Driven algorithm inverts these models, the inversion results are clear and accurate, and the algorithm accurately recovers the size of the pest community. Although there are interference points in the inversion results, the location of the pest community can still be distinguished.
The Joint-Driven Super-Resolution algorithm not only denoises the inverted images but also further highlights the position of the pest community, improving the inversion result and, by further optimizing the image, making it easier to judge the internal condition of the tree.
Figure 10 demonstrates the single-pest detection process because, when the Inversion Ratio is between 1:30 and 1:60, the Contrast Source Inversion cannot correctly show the shape of the scatterer. The Contrast Source Inversion, Deep Convolutional, and Joint-Driven methods are compared to analyze the stability of model-driven deep learning networks during iteration.
Figure 10a shows that the Contrast Source Inversion stabilizes only after around 500 iterations, requiring many iterations and much time. Figure 10b shows that the Deep Convolutional algorithm needs about 350 iterations to reach the stable range, and its training process is not smooth. Figure 10c shows that the Joint-Driven algorithm needs only about 60 iterations to reach a stable state, which greatly reduces the number of iterations, shortens the imaging time, and smooths the training process.
According to Equation (17), the detection accuracy statistics of each inversion algorithm were carried out separately for each of the three radii, and Figure 11 shows the detection accuracy of each algorithm in 300 sets of single-pest inversion tests.
As can be seen from Figure 11, the Contrast Source Inversion can only invert living-tree burrow scatterers with an Inversion Ratio greater than 1:30, so its application range is limited. Although the Deep Convolutional network algorithm can invert living-tree burrow scatterers at smaller Inversion Ratios, its accuracy is only about 70%, which cannot meet practical requirements. The Joint-Driven algorithm proposed in this paper not only solves the problem of imaging wood cross-sections with a small insect-colony proportion, but it also reaches an inversion accuracy of up to 90% after optimization by a specially trained super-resolution network.
To sum up, the four methods are compared, and the results are shown in Table 5 below.
At Inversion Ratios of 1:10 and 1:20, the Contrast Source Inversion algorithm can invert meaningful results. As the Inversion Ratio decreases further, the results generated by the Contrast Source Inversion algorithm lose their reference value; therefore, the calculation error is represented by "Non".
5. Conclusions
In this paper, a model-driven deep learning Super-Resolution inversion algorithm is proposed to solve the problem of high noise and poor imaging in the electromagnetic detection of tree pest communities. By studying the propagation of electromagnetic waves through scatterers and combining the advantages of the Contrast Source Inversion, the Deep Convolutional network, and the Super-Resolution network, a Joint-Driven Super-Resolution algorithm is proposed. By introducing the Contrast Source Inversion, this algorithm overcomes the excessive dependence of neural-network structure selection and parameter optimization on the experimenter's experience, so that the network can better fit the nonlinear problem and overcome its ill-posedness by learning from a large amount of data. Experiments were carried out by continuously reducing the radius of the model pest community using the Contrast Source Inversion, the Deep Convolutional network algorithm, the Joint-Driven algorithm, and the Joint-Driven Super-Resolution algorithm, providing a solution to the problem of imaging tiny high-contrast scatterers.
Conceptualization, J.S. (Jiayin Song), J.S. (Jie Shi), H.Z. (Hongwei Zhou), W.S., H.Z. (Hongju Zhou) and Y.Z.; methodology, J.S. (Jie Shi) and H.Z. (Hongju Zhou); software, J.S. (Jie Shi); validation, J.S. (Jiayin Song), J.S. (Jie Shi), H.Z. (Hongwei Zhou) W.S., H.Z. (Hongju Zhou) and Y.Z.; formal analysis, J.S. (Jie Shi); investigation, H.Z. (Hongwei Zhou), W.S. and Y.Z.; resources, J.S. (Jiayin Song); data curation, H.Z. (Hongwei Zhou) and W.S.; writing—original draft preparation, J.S. (Jie Shi); writing—review and editing, J.S. (Jiayin Song) and H.Z. (Hongwei Zhou); visualization, J.S. (Jie Shi) and H.Z. (Hongwei Zhou); supervision, J.S. (Jiayin Song) and H.Z. (Hongwei Zhou); project administration, H.Z. (Hongwei Zhou); funding acquisition, H.Z. (Hongwei Zhou). All authors have read and agreed to the published version of the manuscript.
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 5. The architecture of the U-Net discriminator with spectral normalization.
Figure 6. Inversion images with the same Inversion Ratio and different sizes: (a) 0.04 m radius of pest community, 0.4 m radius of living tree; (b) 0.05 m radius of pest community, 0.5 m radius of living tree; (c) 0.07 m radius of pest community, 0.7 m radius of living tree; and (d) 0.08 m radius of pest community, 0.8 m radius of living tree.
Figure 7. Inversion results for Inversion Ratios from 1:10 to 1:20. (a) Model diagram at 1:10, (b) model diagram at 1:20, (c) Contrast Source Inversion result at 1:10, (d) Contrast Source Inversion result at 1:20, (e) Deep Convolutional inversion result at 1:10, (f) Deep Convolutional inversion result at 1:20, (g) Joint-Driven inversion result at 1:10, (h) Joint-Driven inversion result at 1:20, (i) Joint-Driven Super-Resolution inversion result at 1:10, and (j) Joint-Driven Super-Resolution inversion result at 1:20.
Figure 8. Inversion results for Inversion Ratios from 1:30 to 1:40. (a) Model diagram at 1:30, (b) model diagram at 1:40, (c) Contrast Source Inversion result at 1:30, (d) Contrast Source Inversion result at 1:40, (e) Deep Convolutional inversion result at 1:30, (f) Deep Convolutional inversion result at 1:40, (g) Joint-Driven inversion result at 1:30, (h) Joint-Driven inversion result at 1:40, (i) Joint-Driven Super-Resolution inversion result at 1:30, and (j) Joint-Driven Super-Resolution inversion result at 1:40.
Figure 9. Inversion results for Inversion Ratios from 1:50 to 1:60. (a) Model diagram at 1:50, (b) model diagram at 1:60, (c) Contrast Source Inversion result at 1:50, (d) Contrast Source Inversion result at 1:60, (e) Deep Convolutional inversion result at 1:50, (f) Deep Convolutional inversion result at 1:60, (g) Joint-Driven inversion result at 1:50, (h) Joint-Driven inversion result at 1:60, (i) Joint-Driven Super-Resolution inversion result at 1:50, and (j) Joint-Driven Super-Resolution inversion result at 1:60.
Figure 10. Number of algorithm iterations versus MSE. (a) Contrast Source Inversion iteration process, (b) Deep Convolutional network training process, and (c) Joint-Driven algorithm training process.
Table 1. Summary of the advantages and disadvantages of the algorithms.

| Algorithm | Defects | Advantages |
|---|---|---|
| CSI | Sensitive to the initial value; slow convergence; unable to process large-scale data | Iterative solution; does not involve solving the forward problem |
| SOM | Sensitive to the initial value; unable to process large-scale data; large amount of calculation | Reduces the CSI solution dimension; improves solving speed and success probability |
| CNN | Needs a lot of training data; shifts the computational burden to the learning stage | No physical modeling required |
Table 2. Simulation parameter settings.

| Parameter Name | Parameter Value | Parameter Name | Parameter Value |
|---|---|---|---|
| Domain of solution | 2 m × 2 m | Relative permittivity of pest community | 60 |
| Radius of living tree | 0.6 m | Wave impedance of air | 120π Ω |
| Inversion Ratio | 1:10~1:60 | Number of electromagnetic wave emitters | 32 |
| Electromagnetic frequency | 200 MHz~700 MHz | Number of electromagnetic wave receivers | 32 |
| Relative permittivity of air | 1 | Internal relative dielectric constant of tree | 7 |
| Noise factor | 0.2 | | |
Table 3. IOU values of the four models in Figure 6 with the same Inversion Ratio.

| Model (a) | Model (b) | Model (c) | Model (d) |
|---|---|---|---|
| 0.976 | 0.975 | 0.977 | 0.979 |
Table 4. IOU of the inversion results of the four algorithms at different Inversion Ratios.

| Inversion Ratio | Contrast Source Inversion | Deep Convolutional Inversion | Joint-Driven Inversion | Joint-Driven Super-Resolution Inversion |
|---|---|---|---|---|
| 1:10 | 0.885 | 0.954 | 1 | 1 |
| 1:20 | 0.757 | 0.775 | 1 | 1 |
| 1:30 | 0.541 | 0.851 | 0.955 | 0.984 |
| 1:40 | 0.432 | 0.653 | 0.953 | 0.982 |
| 1:50 | 0.325 | 0.773 | 0.958 | 0.988 |
| 1:60 | error | 0.763 | 0.954 | 0.986 |
Table 5. Comparison by the four methods.

| Inversion Ratio | | Contrast Source Inversion | Deep Convolutional Inversion | Joint-Driven Inversion | Joint-Driven Super-Resolution Inversion |
|---|---|---|---|---|---|
| | Iterations | 500 | 350 | 60 | 60 |
| 1:10 | Maximum error | 11.1% | 6.6% | 3.3% | 2.5% |
| 1:10 | Minimum error | 7.3% | 3.6% | 1.5% | 1.3% |
| 1:20 | Maximum error | 23.3% | 15.5% | 7.2% | 3.3% |
| 1:20 | Minimum error | 15.4% | 10.2% | 4.5% | 1.5% |
| 1:30 | Maximum error | Non | 17.5% | 8.0% | 4.2% |
| 1:30 | Minimum error | Non | 14.3% | 4.8% | 2.3% |
| 1:40 | Maximum error | Non | 25.8% | 8.6% | 4.9% |
| 1:40 | Minimum error | Non | 22.3% | 5.1% | 2.8% |
| 1:50 | Maximum error | Non | 36.4% | 9.2% | 5.6% |
| 1:50 | Minimum error | Non | 28.5% | 5.5% | 3.7% |
| 1:60 | Maximum error | Non | 41.2% | 10.1% | 6.5% |
| 1:60 | Minimum error | Non | 33.3% | 6.5% | 4.3% |
References
1. Yin, Q.; Liu, H.-H. Drying stress and strain of wood: A Review. Appl. Sci.; 2021; 11, 5023. [DOI: https://dx.doi.org/10.3390/app11115023]
2. Ji, B.; Zhang, Q.; Cao, J.; Zhang, B.; Zhang, L. Delamination detection in bimetallic composite using laser ultrasonic bulk waves. Appl. Sci.; 2021; 11, 636. [DOI: https://dx.doi.org/10.3390/app11020636]
3. Ansari, M.S.; Bartos, V.; Lee, B. Shallow and Deep Learning Approaches for Network Intrusion Alert Prediction. Procedia Comput. Sci.; 2020; 171, pp. 644-653. [DOI: https://dx.doi.org/10.1016/j.procs.2020.04.070]
4. Du, X.; Li, S.; Li, G.; Feng, H.; Chen, S. Stress wave tomography of wood internal defects using ellipse-based spatial interpolation and velocity compensation. BioResources; 2015; 10, pp. 3948-3962. [DOI: https://dx.doi.org/10.15376/biores.10.3.3948-3962]
5. Taskhiri, M.S.; Hafezi, M.H.; Harle, R.; Williams, D.; Kundu, T.; Turner, P. Ultrasonic and thermal testing to non-destructively identify internal defects in plantation eucalypts. Comput. Electron. Agric.; 2020; 173, 105396. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105396]
6. Du, X.; Feng, H.; Hu, M.; Fang, Y.; Chen, S. Three-dimensional stress wave imaging of wood internal defects using TKriging method. Comput. Electron. Agric.; 2018; 148, pp. 63-71. [DOI: https://dx.doi.org/10.1016/j.compag.2018.03.005]
7. Mousavi, M.; Taskhiri, M.S.; Holloway, D.; Olivier, J.; Turner, P. Feature extraction of wood-hole defects using empirical mode decomposition of ultrasonic signals. NDT E Int.; 2020; 114, 102282. [DOI: https://dx.doi.org/10.1016/j.ndteint.2020.102282]
8. Ligong, P.; Rodion, R.; Sergey, K. Artificial neural network for defect detection in CT images of wood. Comput. Electron. Agric.; 2021; 187, 106312.
9. Kaczmarek, R.G.; Bednarek, D.R.; Wong, R.; Kaczmarek, R.V.; Rudin, S.; Alker, G. Potential radiation hazards to personnel during dynamic CT. Radiology; 1986; 161, 853. [DOI: https://dx.doi.org/10.1148/radiology.161.3.3786746]
10. Merela, M.; Oven, P.; Sepe, A.; Serša, I. Three-dimensional in vivo magnetic resonance microscopy of beech (Fagus sylvatica L.) wood. Magn. Reson. Mater. Phys. Biol. Med.; 2005; 18, pp. 171-174. [DOI: https://dx.doi.org/10.1007/s10334-005-0109-5]
11. Winters, D.W.; Van Veen, B.D.; Hagness, S.C. A sparsity regularization approach to the electromagnetic inverse scattering problem. IEEE Trans. Antennas Propag.; 2009; 58, pp. 145-154. [DOI: https://dx.doi.org/10.1109/TAP.2009.2035997] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/20419046]
12. Kaipio, J.P.; Huttunen, T.; Luostari, T.; Lähivaara, T.; Monk, P.B. A Bayesian approach to improving the Born approximation for inverse scattering with high-contrast materials. Inverse Probl.; 2019; 35, 084001. [DOI: https://dx.doi.org/10.1088/1361-6420/ab15f3]
13. Tajik, D.; Kazemivala, R.; Nikolova, N.K. Real-time imaging with simultaneous use of born and Rytov approximations in quantitative microwave holography. IEEE Trans. Microw. Theory Tech.; 2021; 70, pp. 1896-1909. [DOI: https://dx.doi.org/10.1109/TMTT.2021.3131227]
14. Shah, P.; Chen, G.; Moghaddam, M. Learning nonlinearity of microwave imaging through deep learning. Proceedings of the 2018 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting; Boston, MA, USA, 8–13 July 2018; pp. 699-700.
15. Leijsen, R.; Fuchs, P.; Brink, W.; Webb, A.; Remis, R. Developments in electrical-property tomography based on the contrast-source inversion method. J. Imaging; 2019; 5, 25. [DOI: https://dx.doi.org/10.3390/jimaging5020025]
16. Pasha, M.; Kupis, S.; Ahmad, S.; Khan, T. A Krylov subspace type method for Electrical Impedance Tomography. ESAIM Math. Model. Numer. Anal.; 2021; 55, pp. 2827-2847. [DOI: https://dx.doi.org/10.1051/m2an/2021057]
17. Vrahatis, M.; Magoulas, G.; Plagianakos, V. From linear to nonlinear iterative methods. Appl. Numer. Math.; 2003; 45, pp. 59-77. [DOI: https://dx.doi.org/10.1016/S0168-9274(02)00235-0]
18. Rekanos, I.T. Neural-network-based inverse-scattering technique for online microwave medical imaging. IEEE Trans. Magn.; 2002; 38, pp. 1061-1064. [DOI: https://dx.doi.org/10.1109/20.996272]
19. Anaraki, A.K.; Ayati, M.; Kazemi, F. Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms. Biocybern. Biomed. Eng.; 2019; 39, pp. 63-74. [DOI: https://dx.doi.org/10.1016/j.bbe.2018.10.004]
20. Desmal, A.; Bağcı, H. Shrinkage-thresholding enhanced Born iterative method for solving 2D inverse electromagnetic scattering problem. IEEE Trans. Antennas Propag.; 2014; 62, pp. 3878-3884. [DOI: https://dx.doi.org/10.1109/TAP.2014.2321144]
21. Lim, J.; Psaltis, D. MaxwellNet: Physics-driven deep neural network training based on Maxwell’s equations. Apl Photonics; 2022; 7, 011301. [DOI: https://dx.doi.org/10.1063/5.0071616]
22. Park, S.C.; Park, M.K.; Kang, M.G. Super-resolution image reconstruction: A technical overview. IEEE Signal Processing Mag.; 2003; 20, pp. 21-36. [DOI: https://dx.doi.org/10.1109/MSP.2003.1203207]
23. Svendsen, D.H.; Morales-Álvarez, P.; Ruescas, A.B.; Molina, R.; Camps-Valls, G. Deep Gaussian processes for biogeophysical parameter retrieval and model inversion. ISPRS J. Photogramm. Remote Sens.; 2020; 166, pp. 68-81. [DOI: https://dx.doi.org/10.1016/j.isprsjprs.2020.04.014] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32747851]
24. Sun, K.; Simon, S. Bilateral spectrum weighted total variation for noisy-image super-resolution and image denoising. IEEE Trans. Signal Processing; 2021; 69, pp. 6329-6341. [DOI: https://dx.doi.org/10.1109/TSP.2021.3127679]
25. Schneider, M. Lippmann-Schwinger solvers for the computational homogenization of materials with pores. Int. J. Numer. Methods Eng.; 2020; 121, pp. 5017-5041. [DOI: https://dx.doi.org/10.1002/nme.6508]
26. Caratelli, D.; Cicchetti, R.; Cicchetti, V.; Testa, O.; Faraone, A. Electromagnetic scattering from truncated thin cylinders: An approach based on the incomplete Hankel functions and surface impedance boundary conditions. Proceedings of the 2019 PhotonIcs & Electromagnetics Research Symposium-Spring (PIERS-Spring); Rome, Italy, 17–20 June 2019; pp. 1739-1742.
27. Peters, G.; Wilkinson, J.H. Inverse iteration, ill-conditioned equations and Newton’s method. SIAM Rev.; 1979; 21, pp. 339-360. [DOI: https://dx.doi.org/10.1137/1021052]
28. Ran, P.; Qin, Y.; Lesselier, D. Electromagnetic imaging of a dielectric micro-structure via convolutional neural networks. Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO); A Coruna, Spain, 2–6 September 2019; pp. 1-5.
29. Buehlmann, U.; Thomas, R.E. Impact of human error on lumber yield in rough mills. Robot. Comput. Integr. Manuf.; 2002; 18, pp. 197-203. [DOI: https://dx.doi.org/10.1016/S0736-5845(02)00010-8]
30. Hashim, U.R.; Hashim, S.Z.; Muda, A.K. Automated vision inspection of timber surface defect: A review. J. Teknol.; 2015; 77, [DOI: https://dx.doi.org/10.11113/jt.v77.6562]
31. Golilarz, N.A.; Demirel, H.; Gao, H. Adaptive generalized Gaussian distribution oriented thresholding function for image de-noising. Int. J. Adv. Comput. Sci. Appl.; 2019; 10, [DOI: https://dx.doi.org/10.14569/IJACSA.2019.0100202]
32. Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. Proceedings of the IEEE/CVF International Conference on Computer Vision; Montreal, BC, Canada, 11–17 October 2021; pp. 1905-1914.
33. Schonfeld, E.; Schiele, B.; Khoreva, A. A U-Net based discriminator for generative adversarial networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Seattle, WA, USA, 13–19 June 2020; pp. 8207-8216.
34. Yan, Y.; Liu, C.; Chen, C.; Sun, X.; Jin, L.; Peng, X.; Zhou, X. Fine-grained attention and feature-sharing generative adversarial networks for single image super-resolution. IEEE Trans. Multimed.; 2021; 24, pp. 1473-1487. [DOI: https://dx.doi.org/10.1109/TMM.2021.3065731]
35. Wiratama, W.; Lee, J.; Sim, D. Change detection on multi-spectral images based on feature-level U-Net. IEEE Access; 2020; 8, pp. 12279-12289. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2964798]
36. Zhang, G.; Pan, Y.; Zhang, L. Semi-supervised learning with GAN for automatic defect detection from images. Autom. Constr.; 2021; 128, 103764. [DOI: https://dx.doi.org/10.1016/j.autcon.2021.103764]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Trunk-boring pests have long been among the most destructive tree pests. In infested trees, the transport of water and nutrients is blocked, so the trees wither and die or are snapped by strong winds. Most of these pests are social and live inside trees in communities, yet an infestation is difficult to detect from the outside. This paper proposes a new non-invasive method, based on electromagnetic inverse scattering, for identifying trees eroded by trunk pests. Scattered-field data are collected by an electromagnetic wave receiver, and a Joint-Driven algorithm is proposed to image these data and determine the extent and location of pest erosion in the trunk. This imaging method effectively resolves the unclear imaging that arises when the pest community occupies only a small area of the xylem of a living tree. Under noise-doped conditions, the proposed Joint-Driven algorithm achieves accurate imaging at a ratio of pest-community radius to tree radius of 1:60. The algorithm also reduces the time cost and computational complexity of detecting internal tree defects and improves the clarity and accuracy of the inverted images.