1. Introduction
As one of the most important food crops for humans, potato is a significant source of carbohydrates, vitamins, and minerals, with an annual production of up to 370 million tons [1,2]. However, because of its complex growing environment, potato is susceptible to diseases during growth [3]. For example, blackleg and soft tuber rot are significant bacterial diseases of potato worldwide [3,4,5]; their causal agents are plant-pathogenic bacteria of the genus Pectobacterium [3]. These bacteria produce enzymes that cause the decay of plant tissues [6], damaging roots, stems, and leaves and resulting in severe reductions in yield and storability [7,8,9,10]. Such bacteria can remain latent during plant growth until conditions favorable for their development, reproduction, and infection prevail [11]. Once outbreaks of pests and diseases occur, they not only cause losses to agricultural production but also significantly affect human health and the ecological environment [12,13,14,15,16]. Therefore, timely and accurate detection and identification of potato diseases is essential to maintain crop yield and quality.
Traditional methods for crop disease detection are based on manual visual inspection and human empirical analysis, which cannot meet the need for rapid and accurate detection of potato diseases [17]. To accurately identify potato diseases and achieve disease control, management, and prevention, the most popular approach is to combine machine learning and image classification methods with multiple imaging techniques [18,19,20,21]. However, traditional image-based classification methods cannot identify diseases whose symptoms are difficult to detect in RGB images, because they consider only image information and lack deeper data features [22].
Hyperspectral imaging has emerged as a crucial technique in recent years, providing valuable spectral and spatial information for potato disease detection and identification [23,24,25]. The combination of hyperspectral image techniques, preprocessing methods, and deep learning convolutional neural networks has proved effective in detecting potato late blight [26]. Other researchers have used multispectral image systems to detect plant growth in a noninvasive manner [27], while the Cube CNN SVM (CCS) method has been shown to improve spectral image classification by extracting high-level features directly from raw data [28]. Previous studies have also shown that 3D-CNN can achieve better classification accuracy than 2D-CNN without preprocessing [29]. Multiscale wavelets, combined with in-depth feature information extracted by 3D-CNN, can generate super-resolution hyperspectral images from low-resolution ones [30]. However, initial 3D-CNN networks tend to suffer from overfitting and higher training costs, necessitating more hardware resources and training time, resulting in poor generalization of the overall network model [31]. To address these issues, a combined 2D–3D model approach can extract both spatial and spectral features, resulting in better fusion features for hyperspectral image classification (HSIC) [32], while reducing the network structure parameters [33].
Computer vision applications in agriculture have become an alternative to manual detection [34]. Polder et al. designed a hyperspectral line-scan device for virus damage detection in different potatoes [35] and demonstrated that a deep learning approach improved the accuracy of real-world potato disease detection. Hyperspectral imaging is a valuable tool for disease detection in various crops at different scales (tissue to canopy) [36]. Atherton et al. [37,38] used hyperspectral remote sensing to detect disease in potato plants, but they used only spectral information, not imaging sensors. Ray et al. used a point-spectrum approach without considering spatial information [39]. Hu et al. successfully detected late blight on potato leaves using hyperspectral imaging to improve disease recognition [40]. Griffel et al. [41] used an SVM to classify spectral features of potato plants infected with PVY, obtained with a handheld device, with recognition accuracy close to 90%. Kang et al. proposed a lightweight convolutional neural network model [42] that could identify potato leaves with three different diseases, reducing the number of parameters while improving accuracy. Shi et al. proposed a novel end-to-end deep learning model (CropdocNet) [43] for accurate and automated late blight diagnosis from UAV-based hyperspectral images, with an average accuracy of 98.09% on the test dataset. Gao et al. [44] used high-resolution field-of-view images and deep learning algorithms to extract late blight lesions from unstructured field environments, demonstrating that unbalanced weighting of the lesion and background categories could improve segmentation performance. Qi et al. [45] proposed a deep collaborative attention network (PLB-2D-3D-A) that combines a 2D convolutional neural network (2D-CNN) and a 3D-CNN into a hyperspectral deep learning classification architecture, showing promising results for early detection of potato late blight with deep learning and proximal hyperspectral imaging. Chen et al. [46] proposed a weakly supervised learning approach to identify potato plant diseases by extracting high-dimensional features through a hybrid attention mechanism.
Although potato disease detection technology has advanced significantly, there remain some challenges that impede the accurate and rapid identification of diseases. One such obstacle is the variety of potato diseases, which often present with similar symptoms, making them difficult to differentiate. Additionally, the complexity of diseases, which can result from a range of factors such as genetic and environmental conditions, further exacerbates this issue. Moreover, while 3D convolutional neural networks are commonly used for processing hyperspectral data, they are known to have high hardware requirements, and the accuracy of 1D convolutional neural networks for hyperspectral data is often suboptimal. Furthermore, numerous factors such as light, noise, distortion, and color changes present further challenges to disease detection, underscoring the need for increased algorithmic robustness and repeatability.
To address these issues, this paper proposes a novel network architecture that fuses 1D, 2D, and 3D convolutional neural networks [47] in a multidimensional approach. The network uses dilated (atrous) convolution [48,49,50] for feature extraction, which avoids data loss and increases the receptive field compared with the conventional convolution–pooling layers in CNNs. Convolution in different dimensions takes full advantage of the spectral and spatial information of hyperspectral data, reducing the number of network parameters and improving the model's generalization and classification accuracy. The purposes of this paper are the following:
(1) To address the serious harm that potato diseases cause to human health, crop yield, and the economy, we use deep learning technology to provide a new solution for detecting potato diseases and thereby safeguard product quality and healthy growth. (2) By analyzing existing technologies for potato disease detection, we innovatively propose a multidimensional fusion Atrous-CNN architecture to solve the problems of insufficient accuracy, low disease recognition rate, high hardware resource consumption, and data loss in current detection technologies. Testing the proposed model on multiple datasets confirmed that it has good detection capability and reduces hardware consumption, which to a large degree meets the current needs of potato disease detection.
2. Materials and Methods
2.1. Data Acquisition and Preprocessing
The hyperspectral data were collected at the potato demonstration base of Chahar Right Wing Banner in Hohhot with a handheld Specim IQ hyperspectral camera. The camera's resolution is 512 × 512 pixels, and it collects 204 bands over a spectral range of 400–1000 nm with a spectral resolution of 7 nm. Because hyperspectral cameras are susceptible to environmental interference during photography, potato leaves were picked and photographed in the laboratory with the Specim IQ camera. During shooting, a white reference plate and the leaf were photographed together to eliminate environmental mismatch; the integration time of the camera was set to 5 ms, and the shooting height was 20 cm above the leaf. A total of 126 hyperspectral images of potato disease were obtained, including 49 of leaf blight, 28 of anthracnose, 7 of early blight, and 42 mixed images containing more than one of the three diseases. For data with mixed pixels, the pixels within a region are labeled as the same category based on manual region calibration of the RGB image at acquisition time, so mixed-pixel data carry multiple disease category labels. In the first classification, these labels are not distinguished by disease species, since they all belong to the same diseased category; disease species identification is completed in the second classification task, based on the specific disease labels calibrated for each region combined with the spectral information of the diseased pixels. The potato disease leaves captured with the Specim IQ hyperspectral camera are shown in Figure 1.
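The paper does not give its calibration formula, but photographing a white plate in the same scene typically serves flat-field (white-reference) calibration. A minimal NumPy sketch of that step follows; the function name and the optional dark reference are illustrative assumptions, not details from the paper:

```python
import numpy as np

def calibrate_reflectance(raw_cube, white_region, dark_cube=None):
    """Convert a raw hyperspectral cube (H x W x 204) to relative reflectance
    using the white reference plate captured in the same scene:
        reflectance = (raw - dark) / (white - dark)
    white_region is the sub-cube covering the white plate pixels."""
    raw = raw_cube.astype(np.float64)
    dark = np.zeros_like(raw) if dark_cube is None else dark_cube.astype(np.float64)
    # Average the white-plate pixels per band to obtain one reference spectrum.
    white_ref = white_region.reshape(-1, white_region.shape[-1]).mean(axis=0)
    eps = 1e-8  # guard against division by zero in dead bands
    return (raw - dark) / (white_ref - dark + eps)
```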
2.2. Methods
2.2.1. Method of Label Category Selection
When using the multidimensional Atrous-CNN for feature extraction, the input data size is (7 × 7 × 204): the spatial window is (7 × 7), a size that helps reduce the loss of edge feature data, and the number of spectral bands is 204. The label attribute of each sample is the category of its central pixel, as shown in Figure 2a.
In hyperspectral imaging technology, defining precise attributes at the edges of the data is often difficult, which poses a challenge to disease detection and identification. In order to solve this problem, this paper proposes a mirror extension method. This method mirrors the pixel values at the edges symmetrically and places the edge pixels at the center to extend the information of the data at the edges. The specific operation is shown in Figure 2b.
The mirror extension method is implemented by symmetrically complementing the edge pixels with their neighboring pixel values. Specifically, for a pixel at the edge, the complementary values are taken from neighboring pixels that are close in distance and spectrally similar to the original pixel. The complemented pixel values therefore preserve the features of the original pixels well and increase the amount of information available in the data.
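A minimal NumPy sketch of one plausible implementation of the mirror extension and the subsequent 7 × 7 patch extraction follows; the function names are illustrative, not identifiers from the paper:

```python
import numpy as np

def mirror_extend(cube, margin=3):
    """Symmetrically mirror the spatial borders of a hyperspectral cube
    (H x W x bands) so that every original pixel, including edge pixels,
    can sit at the centre of a 7 x 7 neighbourhood (margin = 7 // 2 = 3)."""
    return np.pad(cube, ((margin, margin), (margin, margin), (0, 0)),
                  mode="symmetric")

def extract_patch(padded, row, col, size=7):
    """Return the (size x size x bands) patch centred on original pixel
    (row, col); the patch label is the class of that centre pixel."""
    return padded[row:row + size, col:col + size, :]
```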
2.2.2. Atrous-CNN
In a conventional CNN, convolution and pooling operations extract data features. However, due to downsampling in the pooling layer, some feature information of the data is lost and cannot be recovered. This problem is especially serious for hyperspectral data, which contain rich information, so pooling causes a loss of valuable information.
This paper proposes a new approach to solve this problem: using an atrous (dilated) convolution layer instead of the conventional convolution–pooling operation. The atrous convolution structure is very simple: it is equivalent to inserting zeros between the weights of a regular convolution kernel. Compared with a regular convolutional layer, an atrous convolutional layer loses no data information in the response layer and substantially increases the receptive field of the convolutional computation.
The formula for calculating atrous convolution is as follows:
$$o[i] = \sum_{j} x[i + r \cdot j] \, w[j] \tag{1}$$

In this equation, $x$ stands for the input vector and $o[i]$ for the value of the output vector $o$ at position $i$. The dilation rate of the atrous convolution is $r$, the convolution kernel is $w$, and $j$ indexes the positions of the kernel. Formula (1) shows that the atrous convolution is equivalent to a conventional convolution whose kernel has $r - 1$ zeros inserted between adjacent weights. When $r$ is 1, the atrous convolution is equivalent to the conventional convolution, indicating that there is no dilation. The atrous convolution receptive field is calculated as follows:
$$RF_{l} = RF_{l-1} + \left(k' - 1\right) \cdot \prod_{i=1}^{l-1} \mathrm{Stride}_i, \qquad k' = k + (k - 1)(d - 1) \tag{2}$$

where $RF_{l}$ is the receptive field of the convolution kernel in the current convolution layer and $RF_{l-1}$ that of the previous layer, $k$ is the size of the convolution kernel, $k'$ represents the actual size of the convolution kernel after dilation, and the number of holes (the dilation rate) is $d$. The product of the steps of all previous layers is represented by $\prod_{i=1}^{l-1} \mathrm{Stride}_i$, where $\mathrm{Stride}_i$ is the step size of layer $i$.
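To make Formulas (1) and (2) concrete, the following sketch evaluates a 1D atrous convolution directly from Formula (1) and accumulates the receptive field of a stack of dilated layers from Formula (2). It is an illustrative check, not code from the paper:

```python
import numpy as np

def atrous_conv1d(x, w, r):
    """Plain NumPy version of Formula (1): o[i] = sum_j x[i + r*j] * w[j]."""
    k = len(w)
    span = r * (k - 1) + 1             # effective (dilated) kernel size k'
    out_len = len(x) - span + 1        # 'valid' output length
    return np.array([sum(x[i + r * j] * w[j] for j in range(k))
                     for i in range(out_len)])

def receptive_field(layers):
    """Formula (2): layers is a list of (kernel k, dilation d, stride) tuples."""
    rf, stride_prod = 1, 1
    for k, d, stride in layers:
        k_eff = k + (k - 1) * (d - 1)   # kernel size after dilation
        rf += (k_eff - 1) * stride_prod # RF_l = RF_{l-1} + (k'-1) * prod(strides)
        stride_prod *= stride
    return rf

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
print(atrous_conv1d(x, w, r=1))   # ordinary convolution (r = 1)
print(atrous_conv1d(x, w, r=2))   # dilated: each tap skips one sample
# Three stacked 3-tap, stride-1 layers: dilations 1,1,1 give RF 7; 1,2,4 give RF 15.
print(receptive_field([(3, 1, 1), (3, 1, 1), (3, 1, 1)]),
      receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)]))
```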
2.2.3. Multidimensional Fusion Atrous-CNN
Figure 3 shows the multidimensional fusion Atrous-CNN structure. In the first step, hyperspectral data of size (7 × 7 × 204) are input into the network model. In the second step, the spatial–spectral features of the hyperspectral data are extracted by the 3D-CNN part, which comprises three 3D convolutional layers and one 3D max pooling layer; the convolution kernel size in the 3D convolutional layers is (8 × 3 × 3 × 3), the pooling window size is (2 × 2 × 4), and the pooling stride is (1, 1, 2). In the third step, the output of the 3D-CNN is reshaped from (7 × 7 × 102 × 8) to (7 × 7 × 816) and used as the input of the 2D-CNN. The 2D-CNN extracts spatial information from the hyperspectral data using 2D atrous convolution, with a kernel size of (8 × 3 × 3) and a dilation rate of (2, 2). In the fourth step, the output of the 2D-CNN part (3 × 3 × 8) is reshaped to (72 × 1) and used as the input of the 1D-CNN part. The 1D-CNN extracts spectral features from the hyperspectral data using 1D atrous convolution, with a kernel size of (16 × 3) and a dilation rate of 2. In the fifth step, the output of the 1D-CNN part is flattened and connected to a Dropout layer to avoid overfitting of the network model. Finally, the Dropout layer is connected to two fully connected (Dense) layers; the activation function of the second Dense layer is Softmax, and this layer serves as the output layer of the whole network. The distribution of the specific network parameters is shown in Table 1.
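As a concrete illustration, the Keras sketch below reproduces the layer shapes and parameter counts of Table 1. The framework is inferred from the Keras-style layer names in Table 1; the padding settings, ReLU activations, and the Dropout rate of 0.2 are assumptions needed to match the reported output shapes, not details stated by the authors:

```python
from tensorflow.keras import layers, models

def build_fusion_atrous_cnn(num_classes=4):
    """Sketch of the multidimensional fusion Atrous-CNN (shapes follow Table 1)."""
    inp = layers.Input(shape=(7, 7, 204, 1))                  # 7x7 patch, 204 bands
    x = layers.Conv3D(8, (3, 3, 3), padding="same", activation="relu")(inp)
    x = layers.Conv3D(8, (3, 3, 3), padding="same", activation="relu")(x)
    x = layers.Conv3D(8, (3, 3, 3), padding="same", activation="relu")(x)
    # Pool mainly along the spectral axis: (7,7,204,8) -> (7,7,102,8)
    x = layers.MaxPooling3D(pool_size=(2, 2, 4), strides=(1, 1, 2),
                            padding="same")(x)
    x = layers.Reshape((7, 7, 816))(x)                        # 102 * 8 = 816
    # 2D atrous convolution, dilation 2: effective 5x5 kernel -> (3,3,8)
    x = layers.Conv2D(8, (3, 3), dilation_rate=(2, 2), activation="relu")(x)
    x = layers.Reshape((72, 1))(x)                            # 3 * 3 * 8 = 72
    # 1D atrous convolution, dilation 2: 72 -> 68 output steps
    x = layers.Conv1D(16, 3, dilation_rate=2, activation="relu")(x)
    x = layers.Flatten()(x)                                   # 68 * 16 = 1088
    x = layers.Dropout(0.2)(x)                                # rate assumed
    x = layers.Dense(50, activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)  # 4 pixel classes
    return models.Model(inp, out)

model = build_fusion_atrous_cnn()
model.summary()  # parameter counts should match Table 1 (224, 1736, ..., 54450, 204)
```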
2.2.4. Leaf Pixel Classification Based on Multidimensional Fusion Atrous-CNN
In conventional hyperspectral image processing, a 1D-CNN can only process the spectral information of hyperspectral data while ignoring its spatial information. Although 3D-CNN-based networks can synthesize the spatial and spectral information of hyperspectral data, their model structure is complex and their hardware consumption high. Figure 4 shows the process of fusing CNNs in three dimensions: 1D-CNN, 2D-CNN, and 3D-CNN. The fused network can effectively utilize the feature information extracted by the CNNs of three different dimensions, achieving higher recognition accuracy while further reducing hardware consumption. In the data fusion process, this paper uses the reshape operation to adjust the dimensionality of the data and fuses the data by connecting CNNs of adjacent dimensions.
As shown in Figure 4, the multidimensional fusion Atrous-CNN makes full use of the spatial and spectral information of the hyperspectral data. In the 3D-CNN part, the spatial–spectral information of the hyperspectral data is extracted using the 3D convolution–pooling operation, with a feature size of (7 × 7 × 102) and 8 feature maps. In the 2D-CNN part, the spatial information of the hyperspectral data is extracted using the 2D atrous convolution operation, with a feature size of (3 × 3) and 8 feature maps. In the 1D-CNN part, the spectral information of the hyperspectral data is extracted using the 1D atrous convolution operation, with a feature size of 68 and 16 feature maps.
Figure 5 shows the structural comparison of the three CNNs. The 3D-CNN uses only 3D convolution (Conv3D) and 3D max pooling (MaxPooling3D) for feature extraction of hyperspectral data. Because Conv3D is computationally heavy and consumes much hardware during model training, the multidimensional fusion CNN and the multidimensional fusion Atrous-CNN use 2D-CNN and 1D-CNN in the intermediate layers to reduce the computational cost. In these intermediate layers, the multidimensional fusion CNN uses the convolution–pooling operation, whereas the multidimensional fusion Atrous-CNN uses atrous convolutions with their specific feature extraction capability. In the last two layers (D1, out) of the whole network, D1 acts as a fully connected layer that integrates and combines the features spread out by the preceding Flatten layer. Finally, the four neurons in the out layer correspond to the four categories to which leaf pixels can belong. Using the softmax activation function, the outputs of the four neurons are converted into probability values between 0 and 1, and the category with the largest probability value is taken as the predicted class.
2.2.5. Disease Classification Method: 1D-CNN
The network structure and parameter distribution of the 1D-CNN used to classify anthracnose, leaf blight, and early blight are given in Table 2. The input is the hyperspectral information (1 × 204) of the diseased regions identified by the multidimensional fusion Atrous-CNN. Three successive convolution–pooling stages extract the spectral curve features of the diseased area, after which a Flatten layer expands the features and connects them to the Dense layers. The three neurons in the output layer represent the model's confidence for each of the three disease classes.
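A Keras sketch consistent with Table 2 is given below. The kernel size of 6 and the padding settings are inferred from the parameter counts and output shapes in the table, so they should be read as assumptions rather than reported settings:

```python
from tensorflow.keras import layers, models

def build_disease_1dcnn(num_classes=3):
    """Sketch of the 1D-CNN disease classifier (shapes follow Table 2)."""
    inp = layers.Input(shape=(204, 1))                  # one diseased-pixel spectrum
    x = layers.Conv1D(32, 6, padding="same", activation="relu")(inp)  # (204, 32)
    x = layers.MaxPooling1D(4)(x)                       # (51, 32)
    x = layers.Conv1D(64, 6, padding="same", activation="relu")(x)    # (51, 64)
    x = layers.MaxPooling1D(2, padding="same")(x)       # (26, 64)
    x = layers.Conv1D(128, 6, padding="same", activation="relu")(x)   # (26, 128)
    x = layers.MaxPooling1D(2)(x)                       # (13, 128)
    x = layers.Flatten()(x)                             # 13 * 128 = 1664
    x = layers.Dense(128, activation="relu")(x)
    # Three outputs: anthracnose, leaf blight, early blight
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inp, out)
```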
This study takes potato leaves as the research object. The overall flow chart is shown in Figure 6. Firstly, its hyperspectral image information is obtained as input features by hyperspectral cameras. A mirror extension method is designed for the attribute definition of edge labels of the data. Regarding extracting the hyperspectral information features, the proposed multidimensional fusion Atrous-CNN utilizes 1D-Atrous-CNN, and 2D-Atrous-CNN instead of the traditional convolution-pooling for feature extraction, thus substantially increasing the perceptual field of convolutional computation while ensuring no loss of data information. The paper then uses multidimensional fusion Atrous-CNN to classify the hyperspectral information of potato leaves, achieving the extraction of disease regions for the subsequent identification of disease species.
3. Analysis of Experimental Results
In the method, we use a dilated convolution layer instead of the conventional convolution–pooling operation to solve the data loss problem in information extraction. We compare standard convolution with dilated convolution, as shown in Figure 7. The experimental comparison shows that using the dilated convolution layer improves the efficiency of data feature extraction and enlarges the receptive field of the convolutional computation while maintaining information integrity.
To better validate the detection performance of the proposed algorithm, the traditional 3D-CNN, the multidimensional fusion CNN, and the multidimensional fusion Atrous-CNN were compared in training experiments. The total data volume is 262,144 samples (512 × 512), with 209,715 samples in the training set (80% of the total) and 52,429 samples in the validation set (20% of the total). The hardware environment is an Intel Xeon E5-2650 v4 processor, an NVIDIA Tesla V100-PCIE-16GB graphics card, and 256 GB of RAM. Figure 8 shows the training process of hyperspectral disease detection on potato leaves using the three network models. The training results show that the loss function of the proposed multidimensional Atrous-CNN model decreases faster and converges better than those of the other two network models. Furthermore, the prediction accuracy of the multidimensional Atrous-CNN model is also significantly higher than that of the other two models. The training performance of this method outperformed the other two models at both 100 and 500 training epochs.
Table 3 shows the comparative training results for classifying potato leaf hyperspectral image data using the three network models: 3D-CNN, multidimensional fusion CNN, and multidimensional fusion Atrous-CNN. According to the training results, at 100 training epochs the training time of the 3D-CNN model is longer than that of the multidimensional CNN structures, and the prediction accuracy of feature extraction using atrous convolution is higher than that of the traditional convolution–pooling operation: on the validation set, the accuracy of the proposed multidimensional fusion Atrous-CNN model is 0.69% higher than that of the multidimensional fusion CNN model, while its training time is significantly shorter than that of the 3D-CNN network. At 500 training epochs, the accuracy of all three network models for potato leaf disease classification on the training set improved with the number of training epochs. Among them, the training set accuracy of the multidimensional fusion Atrous-CNN method reached 99.78% after the 500th epoch, an improvement of 0.6% over the 3D-CNN method and 0.21% over the multidimensional fusion CNN at the same point. On the validation set, the accuracy of this method after 500 epochs is 0.15% higher than that of the 3D-CNN method and 0.45% higher than that of the multidimensional fusion CNN method.
Table 4 shows the results of disease detection using the three network models on the potato hyperspectral data. The data include four types of pixels: healthy leaf pixels, diseased leaf pixels, background pixels, and whiteboard pixels. The results show that the multidimensional fusion Atrous-CNN model achieved the highest prediction accuracy for all four types of pixels, with the recognition accuracy of every pixel type exceeding 99.7%. In recognizing diseased leaf pixels, its accuracy improved by 7.09% over the 3D-CNN method and by 1.7% over the multidimensional fusion CNN method, proving that the multidimensional fusion Atrous-CNN is highly effective at recognizing diseased leaves. For total pixels, the recognition accuracy of the multidimensional fusion Atrous-CNN model improved by 0.51% over the 3D-CNN method and by 0.94% over the multidimensional fusion CNN method.
In order to evaluate the performance of the model independently of the dataset partition, this study uses k-fold cross-validation (k = 5) to split the hyperspectral data of the diseased pixels five times; each split uses 50,508 training samples and 12,627 test samples. The data division is shown in Figure 9. On each of the five splits, we trained a 1D-CNN, an SVM, a gradient boosting model, and a multinomial naive Bayes classifier. The evaluation results are shown in Table 5 and Figure 10. The average test accuracy of the 1D-CNN under k-fold cross-validation is higher than that of the multinomial naive Bayes model by 0.3401, higher than that of the gradient boosting model by 0.0276, and higher than that of the SVM by 0.047.
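The comparison of the three machine learning baselines can be schematized with scikit-learn as below; the 1D-CNN is trained separately (e.g., in Keras) on the same folds. Whether the authors stratified the folds and which hyperparameters they used are not reported, so these choices are illustrative:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def cross_validate_baselines(X, y, k=5):
    """Five-fold comparison of the baseline classifiers on diseased-pixel
    spectra (X: n_samples x 204 reflectance values, y: disease labels 0..2).
    MultinomialNB assumes non-negative features; reflectance values qualify."""
    baselines = {
        "MultinomialNB": MultinomialNB(),
        "GBDT": GradientBoostingClassifier(),
        "SVM": SVC(kernel="rbf"),
    }
    scores = {name: [] for name in baselines}
    folds = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X, y):
        for name, clf in baselines.items():
            clf.fit(X[train_idx], y[train_idx])
            scores[name].append(accuracy_score(y[test_idx],
                                               clf.predict(X[test_idx])))
    return {name: float(np.mean(s)) for name, s in scores.items()}
```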
The proposed multidimensional fusion Atrous-CNN fuses 3D convolution with a 2D Atrous-CNN and a 1D Atrous-CNN. Compared with processing the hyperspectral features entirely through 3D convolution, this not only reduces the number of training parameters of the network but also preserves the model's ability to extract the spatial features of hyperspectral data effectively. Compared with traditional 1D and 2D convolution–pooling feature extraction, the atrous convolution operation loses no data information and significantly enlarges the receptive field of the convolution computation, which ensures the feature extraction capability of the model. In terms of leaf spectral classification performance, the proposed algorithm classifies the hyperspectral data of potato leaves more accurately than the other two deep learning models, and during training the loss function of the multidimensional fusion Atrous-CNN with atrous convolution falls faster and converges better.
The 1D-CNN uses convolutional operations for feature extraction, which can identify deeper feature information in hyperspectral data more effectively than traditional machine learning methods. During training, the difference between the predicted and true labels is used to construct the loss function, the gradient descent method minimizes the loss function, and the optimal model is obtained through continued training. The five tests of the k-fold cross-validation show that the potato disease identification model trained with the 1D-CNN achieves better accuracy than the three machine learning algorithms. This indicates that a deep learning network using convolutional operations is more effective for feature extraction from hyperspectral data and performs better in the spectral classification task than traditional machine learning methods based on polynomial and kernel techniques.
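The training loop described here, a cross-entropy loss minimized by gradient descent, can be sketched in Keras as follows. The optimizer choice, epoch count, and batch size are illustrative, and X_train/y_train denote the spectral samples and labels from the splits above:

```python
# Hedged sketch of the training procedure described above, reusing the
# build_disease_1dcnn() sketch from Section 2.2.5.
model = build_disease_1dcnn(num_classes=3)
model.compile(optimizer="adam",                        # a common gradient-descent variant
              loss="sparse_categorical_crossentropy",  # penalizes predicted-vs-true gap
              metrics=["accuracy"])
history = model.fit(X_train, y_train,                  # X_train: (n, 204, 1) spectra
                    validation_data=(X_test, y_test),
                    epochs=100, batch_size=256)
```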
After the k-fold cross-validation, this study re-divided the dataset. The training set contains 47,352 samples, including 11,782 spectra of anthracnose leaves, 25,402 spectra of leaf blight leaves, and 10,168 spectra of early blight leaves. The test set contains 15,784 samples, including 3957 spectra of anthracnose leaves, 8595 spectra of leaf blight leaves, and 3232 spectra of early blight leaves. Figure 11 shows the confusion matrix of the prediction results for the three diseases using the 1D-CNN network; the marked positions are the numbers of disease samples correctly identified from the spectral information of the diseased leaves. Table 6 shows the classification accuracy and recall of the three diseases calculated from the confusion matrix. On the training set, the accuracy and recall of all three diseases exceed 0.99. On the test set, the recognition accuracy of all three diseases exceeds 0.98, with the accuracy for anthracnose reaching 0.9987; the recall on the test set exceeds 0.97 for all three diseases and 0.99 for anthracnose and leaf blight. In summary, using the 1D-CNN network with hyperspectral image technology to identify potato plant diseases is feasible.
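The per-class metrics in Table 6 follow directly from the confusion matrix; a small sketch of that computation is below. The example matrix is hypothetical, chosen only to illustrate the shapes, not the values in Figure 11:

```python
import numpy as np

def per_class_precision_recall(cm):
    """Derive per-class precision and recall from a confusion matrix whose
    rows are true classes and columns are predicted classes (as in Figure 11)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)  # correct predictions / all predicted as class
    recall = tp / cm.sum(axis=1)     # correct predictions / all true members of class
    return precision, recall

# Hypothetical 3 x 3 matrix (anthracnose, leaf blight, early blight):
cm = [[3952,    3,    2],
      [  10, 8545,   40],
      [   5,   88, 3139]]
prec, rec = per_class_precision_recall(cm)
print(np.round(prec, 4), np.round(rec, 4))
```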
Figure 12 shows the detection results on potato leaves with the three diseases, anthracnose, leaf blight, and early blight, using the multidimensional fusion Atrous-CNN. The results show that this method effectively extracts the feature information of the hyperspectral data and achieves accurate detection of potato leaf diseases.
In the two classification processes described above, the first classifies four categories of pixels (healthy leaf, diseased leaf, background, and whiteboard) and the second classifies the three diseases. The second classification builds on the first: the diseased leaf pixels from the first classification result are taken as a new object of study, and a secondary classification is performed with the 1D-CNN on the hyperspectral data of those diseased leaf pixels. Both classifications use the hyperspectral data of the leaves. However, because the four pixel categories exhibit regional connectivity and the spectral information of neighboring pixels must be taken into account, the network structure of the first classification is enriched by considering spatial as well as spectral data. The second classification is not influenced by surrounding pixels, and the disease category is predicted only from the spectral information of the diseased leaf pixels.
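Putting the two stages together, the overall inference pipeline can be sketched as follows; the function and model names are illustrative placeholders for the two trained networks, not identifiers from the paper:

```python
import numpy as np

DISEASED = 1  # class index of "diseased leaf" pixels in the first-stage output

def two_stage_diagnosis(cube, pixel_model, disease_model, patch=7):
    """Two-stage pipeline sketch: stage 1 labels every pixel with the fusion
    Atrous-CNN on 7x7 patches; stage 2 re-classifies the spectra of pixels
    labeled diseased with the 1D-CNN."""
    h, w, bands = cube.shape
    m = patch // 2
    padded = np.pad(cube, ((m, m), (m, m), (0, 0)), mode="symmetric")
    patches = np.array([padded[r:r + patch, c:c + patch, :]
                        for r in range(h) for c in range(w)])
    # Stage 1: per-pixel class map (healthy / diseased / background / whiteboard)
    pixel_labels = (pixel_model.predict(patches[..., np.newaxis])
                    .argmax(-1).reshape(h, w))
    # Stage 2: disease species for each diseased pixel's spectrum
    rows, cols = np.where(pixel_labels == DISEASED)
    spectra = cube[rows, cols, :][..., np.newaxis]        # (n, 204, 1)
    disease_labels = disease_model.predict(spectra).argmax(-1)
    return pixel_labels, list(zip(rows, cols, disease_labels))
```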
4. Conclusions
In this paper, we propose a multidimensional fusion Atrous-CNN network structure and use it to achieve disease detection and identification in potato hyperspectral images. The technique integrates the spatial and spectral information of hyperspectral data for analysis, effectively reducing the network's computational cost compared with the traditional 3D-CNN. Because the network fuses convolutions of multiple dimensions and uses atrous convolution to enlarge the receptive field of the convolution kernel, it reduces the loss of hyperspectral data information and makes the extracted spectral features more expressive, which in turn improves the classification performance on hyperspectral data. In this paper, the Atrous-CNN is applied to potato leaf disease detection. The experimental results show that the proposed method achieves better classification results than the single 3D-CNN and the traditional convolution–pooling feature extraction, and that it is an effective network structure for classification and feature extraction of hyperspectral data. Finally, the 1D-CNN network is used to classify and identify the three diseases, anthracnose, leaf blight, and early blight, with a recognition accuracy of up to 0.9987. Therefore, this study can serve as a heuristic method for researchers designing crop disease detection and identification models and provide new solutions for the field.
The model proposed in this paper effectively solves common problems in current agricultural disease image detection and has broad application prospects in precision agriculture and agricultural efficiency. Future work will expand this research to cover more complex agricultural scenarios.
Conceptualization, W.G.; methodology, Z.X.; software, W.G.; validation, T.B. and W.G.; formal analysis, Z.X.; resources, Z.X.; writing—original draft preparation, W.G.; writing—review and editing, W.G. and T.B.; supervision, Z.X. and W.G. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
Datasets can be accessed upon request to the corresponding author.
The authors would like to thank all reviewers for their insightful comments and constructive suggestions, which helped polish this paper to a high quality.
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Figure 4. Leaf pixel classification based on the multidimensional fusion Atrous-CNN.
Figure 11. Confusion matrix of the three disease identification results using 1D-CNN.
Table 1. Multidimensional fusion Atrous-CNN structure.
Layer | Type | Output Shape | Param | Connected to |
---|---|---|---|---|
input | InputLayer | (None, 7, 7, 204, 1) | 0 | |
Conv3_1 | Conv3D | (None, 7, 7, 204, 8) | 224 | input |
Conv3_2 | Conv3D | (None, 7, 7, 204, 8) | 1736 | Conv3_1 |
Conv3_3 | Conv3D | (None, 7, 7, 204, 8) | 1736 | Conv3_2 |
Pool | MaxPooling3D | (None, 7, 7, 102, 8) | 0 | Conv3_3 |
reshape1 | Reshape | (None, 7, 7, 816) | 0 | Pool |
Conv2 | Conv2D | (None, 3, 3, 8) | 58,760 | reshape1 |
reshape2 | Reshape | (None, 72, 1) | 0 | Conv2 |
Conv1 | Conv1D | (None, 68, 16) | 64 | reshape2 |
flatten | Flatten | (None, 1088) | 0 | Conv1 |
Dropout | Dropout | (None, 1088) | 0 | flatten |
D1 | Dense | (None, 50) | 54,450 | Dropout |
out | Dense | (None, 4) | 204 | D1 |
Table 2. 1D-CNN network structure.
Layer | Type | Output Shape | Param | Connected to |
---|---|---|---|---|
input | InputLayer | (None, 204, 1) | 0 | |
Conv1_1 | Conv1D | (None, 204, 32) | 224 | input |
Pool1 | MaxPooling1D | (None, 51, 32) | 0 | Conv1_1 |
Conv2_1 | Conv1D | (None, 51, 64) | 12,352 | Pool1 |
Pool2 | MaxPooling1D | (None, 26, 64) | 0 | Conv2_1 |
Conv3_1 | Conv1D | (None, 26, 128) | 49,280 | Pool2 |
Pool3 | MaxPooling1D | (None, 13, 128) | 0 | Conv3_1 |
flatten | Flatten | (None, 1664) | 0 | Pool3 |
D1 | Dense | (None, 128) | 213,120 | flatten |
out | Dense | (None, 3) | 387 | D1 |
Table 3. Training results of three network models.
Datasets | Assessment Metrics | 3D-CNN | Multidimensional Fusion CNN | Multidimensional Fusion Atrous-CNN
---|---|---|---|---
 | Time-100 | 2:15:50 | 2:02:11 | 2:07:57
Train | Loss-100 | 0.0259 | 0.0201 | 0.0141
Train | Precision-100 | 98.92% | 99.16% | 99.41%
Val | Loss-100 | 0.0231 | 0.0336 | 0.0233
Val | Precision-100 | 99.07% | 98.44% | 99.13%
 | Time-500 | 11:42:40 | 10:52:13 | 10:59:43
Train | Loss-500 | 0.0195 | 0.0106 | 0.0054
Train | Precision-500 | 99.18% | 99.57% | 99.78%
Val | Loss-500 | 0.0214 | 0.0254 | 0.0226
Val | Precision-500 | 99.16% | 98.86% | 99.31%
Table 4. Test results of three network models.
Category Labels | 3D-CNN Correct Pixels | 3D-CNN Precision | Multidimensional Fusion CNN Correct Pixels | Multidimensional Fusion CNN Precision | Multidimensional Fusion Atrous-CNN Correct Pixels | Multidimensional Fusion Atrous-CNN Precision
---|---|---|---|---|---|---
Healthy leaf pixels (16,970) | 16,894 | 99.55% | 16,499 | 97.22% | 16,934 | 99.79%
Diseased leaf pixels (3173) | 2940 | 92.66% | 3111 | 98.05% | 3165 | 99.75%
Background pixels (29,773) | 29,756 | 99.94% | 29,752 | 99.93% | 29,758 | 99.95%
Whiteboard pixels (2513) | 2510 | 99.88% | 2509 | 99.84% | 2511 | 99.92%
Total (52,429) | 52,100 | 99.37% | 51,871 | 98.94% | 52,368 | 99.88%
Table 5. Comparison of evaluation results.
Dataset | K-Fold Cross-Validation | 1D-CNN | Multinomial Naive Bayes Classifier | GBDT | SVM |
---|---|---|---|---|---|
Train set | The first time | 0.9979 | 0.6582 | 0.9707 | 0.9508 |
The second time | 0.9987 | 0.6583 | 0.9706 | 0.9509 | |
The third time | 0.9989 | 0.6592 | 0.9708 | 0.9517 | |
The fourth time | 0.9978 | 0.6581 | 0.9706 | 0.9522 | |
The fifth time | 0.9982 | 0.6572 | 0.9706 | 0.9510 | |
Average | 0.9983 | 0.6582 | 0.9707 | 0.9513 | |
Test set | The first time | 0.9967 | 0.6577 | 0.9682 | 0.9527 |
The second time | 0.9980 | 0.6516 | 0.9735 | 0.9528 | |
The third time | 0.9990 | 0.6664 | 0.9718 | 0.9482 | |
The fourth time | 0.9976 | 0.6526 | 0.9711 | 0.9470 | |
The fifth time | 0.9997 | 0.6627 | 0.9688 | 0.9554 | |
Average | 0.9982 | 0.6582 | 0.9707 | 0.9512 |
Table 6. Accuracy and recall of prediction results for different disease categories.
Datasets | Accuracy and Recall | Disease Category | ||
---|---|---|---|---|
Anthracnose | Leaf Blight | Early Blight | |
Train | Accuracy | 1 | 0.9969 | 0.9979 |
Recall | 1 | 0.9992 | 0.9923 | |
Test | Accuracy | 0.9987 | 0.9895 | 0.9842 |
Recall | 0.9997 | 0.9942 | 0.971 |
References
1. Zhang, H.; Fen, X.; Yu, W.; Hu, H.H.; Dai, X.F. Progress of potato staple food research and industry development in China. J. Integr. Agric.; 2017; 16, pp. 2924-2932. [DOI: https://dx.doi.org/10.1016/S2095-3119(17)61736-2]
2. Bruckner, M.; Wood, R.; Moran, D.; Kuschnig, N.; Wieland, H.; Maus, V.; Börner, J. FABIO—The construction of the food and agriculture biomass input–output model. Environ. Sci. Technol.; 2019; 53, pp. 11302-11312. [DOI: https://dx.doi.org/10.1021/acs.est.9b03554] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31479245]
3. Charkowski, A.; Sharma, K.; Parker, M.L.; Secor, G.A.; Elphinstone, J. Bacterial diseases of potato. The Potato Crop: Its Agricultural, Nutritional and Social Contribution to Humankind; Springer Nature: Berlin/Heidelberg, Germany, 2020; pp. 351-388.
4. Waleron, M.; Misztak, A.; Jońca, J.; Waleron, K. First report of Pectobacterium polaris causing soft rot of potato in Poland. Plant Dis.; 2019; 103, 144. [DOI: https://dx.doi.org/10.1094/PDIS-05-18-0861-PDN]
5. Bergsma-Vlami, M.; Saddler, G.; Hélias, V.; Tsror, L.; Yedida, I.; Pirhonen, M.; Degefu, Y.; Tuomisto, J.; Lojkowska, E.; Li, S. et al. Assessment of Dickeya and Pectobacterium spp. on Vegetables and Ornamentals (Soft Rot); Zenodo: Honolulu, HI, USA, 2020.
6. Hadizadeh, I.; Peivastegan, B.; Hannukkala, A.; Van der Wolf, J.; Nissinen, R.; Pirhonen, M. Biological control of potato soft rot caused by Dickeya solani and the survival of bacterial antagonists under cold storage conditions. Plant Pathol.; 2019; 68, pp. 297-311. [DOI: https://dx.doi.org/10.1111/ppa.12956]
7. Stark, J.C.; Thornton, M.; Nolte, P. Potato Production Systems; Springer Nature: Berlin/Heidelberg, Germany, 2020.
8. Shukla, A.; Ratan, V. Management of Early Blight of Potato by Using Different Bioagents as Tuber Dressing and its Effect on Germination and Growth. Int. J. Curr. Microbiol. Appl. Sci.; 2019; 8, pp. 1965-1970. [DOI: https://dx.doi.org/10.20546/ijcmas.2019.806.233]
9. Landschoot, S.; Vandecasteele, M.; De Baets, B.; Höfte, M.; Audenaert, K.; Haesaert, G. Identification of A. arborescens, A. grandis, and A. protenta as new members of the European Alternaria population on potato. Fungal Biol.; 2017; 121, pp. 172-188. [DOI: https://dx.doi.org/10.1016/j.funbio.2016.11.005]
10. Abuley, I.K.; Hansen, J.G. An epidemiological analysis of the dilemma of plant age and late blight (Phytophthora infestans) susceptibility in potatoes. Eur. J. Plant Pathol.; 2021; 161, pp. 645-663. [DOI: https://dx.doi.org/10.1007/s10658-021-02350-4]
11. Degefu, Y. Co-occurrence of latent Dickeya and Pectobacterium species in potato seed tuber samples from northern Finland: Co-colonization of latent Dickeya and Pectobacterium species in potato seed lots. Agric. Food Sci.; 2021; 30, pp. 1-7. [DOI: https://dx.doi.org/10.23986/afsci.101446]
12. Meno, L.; Escuredo, O.; Rodríguez-Flores, M.S.; Seijo, M.C. Looking for a sustainable potato crop. Field assessment of early blight management. Agric. For. Meteorol.; 2021; 308, 108617. [DOI: https://dx.doi.org/10.1016/j.agrformet.2021.108617]
13. Peters, R.; Sturz, A.; Carter, M.; Sanderson, J. Influence of crop rotation and conservation tillage practices on the severity of soil-borne potato diseases in temperate humid agriculture. Can. J. Soil Sci.; 2004; 84, pp. 397-402. [DOI: https://dx.doi.org/10.4141/S03-060]
14. Adolf, B.; Andrade-Piedra, J.; Bittara Molina, F.; Przetakiewicz, J.; Hausladen, H.; Kromann, P.; Lees, A.; Lindqvist-Kreuze, H.; Perez, W.; Secor, G.A. Fungal, oomycete, and plasmodiophorid diseases of potato. The Potato Crop: Its Agricultural, Nutritional and Social Contribution to Humankind; Springer Nature: Berlin/Heidelberg, Germany, 2020; pp. 307-350.
15. Kolychikhina, M.; Beloshapkina, O.; Phiri, C. Change in potato productivity under the impact of viral diseases. IOP Conf. Ser. Earth Environ. Sci.; 2021; 663, 012035. [DOI: https://dx.doi.org/10.1088/1755-1315/663/1/012035]
16. Garhwal, A.S.; Pullanagari, R.R.; Li, M.; Reis, M.M.; Archer, R. Hyperspectral imaging for identification of Zebra Chip disease in potatoes. Biosyst. Eng.; 2020; 197, pp. 306-317. [DOI: https://dx.doi.org/10.1016/j.biosystemseng.2020.07.005]
17. Iftikhar, S.; Shahid, A.A.; Halim, S.A.; Wolters, P.J.; Vleeshouwers, V.G.; Khan, A.; Al-Harrasi, A.; Ahmad, S. Discovering novel Alternaria solani succinate dehydrogenase inhibitors by in silico modeling and virtual screening strategies to combat early blight. Front. Chem.; 2017; 5, 100. [DOI: https://dx.doi.org/10.3389/fchem.2017.00100] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29204422]
18. Chen, L.; Yin, X. Image recognition of typical potato diseases and insect pests using deep learning. Fresenius Environ. Bull.; 2021; 30, pp. 9956-9965.
19. Gold, K.M.; Townsend, P.A.; Herrmann, I.; Gevens, A.J. Investigating potato late blight physiological differences across potato cultivars with spectroscopy and machine learning. Plant Sci.; 2020; 295, 110316. [DOI: https://dx.doi.org/10.1016/j.plantsci.2019.110316]
20. Zheng, C.; Abd-Elrahman, A.; Whitaker, V. Remote sensing and machine learning in crop phenotyping and management, with an emphasis on applications in strawberry farming. Remote Sens.; 2021; 13, 531. [DOI: https://dx.doi.org/10.3390/rs13030531]
21. Singh, A.; Kaur, H. Potato plant leaves disease detection and classification using machine learning methodologies. IOP Conf. Ser. Mater. Sci. Eng.; 2021; 1022, 012121. [DOI: https://dx.doi.org/10.1088/1757-899X/1022/1/012121]
22. Iqbal, M.A.; Talukder, K.H. Detection of potato disease using image segmentation and machine learning. Proceedings of the 2020 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET); Chennai, India, 4–6 August 2020; pp. 43-47.
23. Teke, M.; Deveci, H.S.; Haliloğlu, O.; Gürbüz, S.Z.; Sakarya, U. A short survey of hyperspectral remote sensing applications in agriculture. Proceedings of the 2013 6th International Conference on Recent Advances in Space Technologies (RAST); Istanbul, Turkey, 12–14 June 2013; pp. 171-176.
24. Agilandeeswari, L.; Prabukumar, M.; Radhesyam, V.; Phaneendra, K.L.B.; Farhan, A. Crop classification for agricultural applications in hyperspectral remote sensing images. Appl. Sci.; 2022; 12, 1670. [DOI: https://dx.doi.org/10.3390/app12031670]
25. Sulaiman, N.; Che’Ya, N.N.; Mohd Roslim, M.H.; Juraimi, A.S.; Mohd Noor, N.; Fazlil Ilahi, W.F. The application of Hyperspectral Remote Sensing Imagery (HRSI) for weed detection analysis in rice fields: A review. Appl. Sci.; 2022; 12, 2570. [DOI: https://dx.doi.org/10.3390/app12052570]
26. Zhang, F.; Li, X.; Qiu, S.; Feng, J.; Wang, D.; Wu, X.; Cheng, Q. Hyperspectral imaging combined with convolutional neural network for outdoor detection of potato diseases. Proceedings of the 2021 6th International Symposium on Computer and Information Processing Technology (ISCIPT); Changsha, China, 11–13 June 2021; pp. 846-850.
27. Martinez-Nolasco, C.; Padilla-Medina, J.A.; Nolasco, J.J.M.; Guevara-Gonzalez, R.G.; Barranco-Gutiérrez, A.I.; Diaz-Carmona, J.J. Non-Invasive Monitoring of the Thermal and Morphometric Characteristics of Lettuce Grown in an Aeroponic System through Multispectral Image System. Appl. Sci.; 2022; 12, 6540. [DOI: https://dx.doi.org/10.3390/app12136540]
28. Leng, J.; Li, T.; Bai, G.; Dong, Q.; Dong, H. Cube-CNN-SVM: A novel hyperspectral image classification method. Proceedings of the 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI); San Jose, CA, USA, 6–8 November 2016; pp. 1027-1034.
29. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens.; 2017; 9, 67. [DOI: https://dx.doi.org/10.3390/rs9010067]
30. Yang, J.; Zhao, Y.Q.; Chan, J.C.W.; Xiao, L. A multi-scale wavelet 3D-CNN for hyperspectral image super-resolution. Remote Sens.; 2019; 11, 1557. [DOI: https://dx.doi.org/10.3390/rs11131557]
31. Firat, H.; Hanbay, D. Classification of hyperspectral images using 3d cnn based resnet50. Proceedings of the 2021 29th Signal Processing and Communications Applications Conference (SIU); Istanbul, Turkey, 9–11 June 2021; pp. 1-4.
32. Sabokrou, M.; Fayyaz, M.; Fathy, M.; Klette, R. Deep-cascade: Cascading 3d deep neural networks for fast anomaly detection and localization in crowded scenes. IEEE Trans. Image Process.; 2017; 26, pp. 1992-2004. [DOI: https://dx.doi.org/10.1109/TIP.2017.2670780]
33. Yu, C.; Han, R.; Song, M.; Liu, C.; Chang, C.I. A simplified 2D-3D CNN architecture for hyperspectral image classification based on spatial–spectral fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2020; 13, pp. 2485-2501. [DOI: https://dx.doi.org/10.1109/JSTARS.2020.2983224]
34. Bechar, A.; Vigneault, C. Agricultural robots for field operations: Concepts and components. Biosyst. Eng.; 2016; 149, pp. 94-111. [DOI: https://dx.doi.org/10.1016/j.biosystemseng.2016.06.014]
35. Polder, G.; Blok, P.M.; Villiers, H.; Wolf, J.; Kamp, J. Potato Virus Y Detection in Seed Potatoes Using Deep Learning on Hyperspectral Images. Front. Plant Sci.; 2019; 10, 209. [DOI: https://dx.doi.org/10.3389/fpls.2019.00209] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30881366]
36. Thomas, S.; Kuska, M.T.; Bohnenkamp, D.; Brugger, A.; Alisaac, E.; Wahabzada, M.; Behmann, J.; Mahlein, A.K. Benefits of hyperspectral imaging for plant disease detection and plant protection: A technical perspective. J. Plant Dis. Prot. New Ser.; 2018; 125, pp. 5-20. [DOI: https://dx.doi.org/10.1007/s41348-017-0124-6]
37. Atherton, D.; Watson, D.G.; Zhang, M.; Qin, Z.; Liu, X. Hyperspectral Spectroscopy for Detection of Early Blight (Alternaria solani) Disease in Potato (Solanum tuberosum) Plants at Two Different Growth Stages. Proceedings of the 2015 ASABE Annual International Meeting; New Orleans, LA, USA, 26–29 July 2015.
38. Atherton, D.; Choudhary, R.; Watson, D. Hyperspectral Remote Sensing for Advanced Detection of Early Blight (Alternaria solani) Disease in Potato (Solanum tuberosum) Plants. Proceedings of the 2017 ASABE Annual International Meeting Spokane; Washington, DC, USA, 16–19 July 2017.
39. Ray, S.S.; Jain, N.; Arora, R.K.; Chavan, S.; Panigrahy, S. Utility of Hyperspectral Data for Potato Late Blight Disease Detection. J. Indian Soc. Remote Sens.; 2011; 39, pp. 161-169. [DOI: https://dx.doi.org/10.1007/s12524-011-0094-2]
40. Hu, Y.H.; Ping, X.W.; Xu, M.Z.; Shan, W.X.; He, Y. Detection of Late Blight Disease on Potato Leaves Using Hyperspectral Imaging Technique. Spectrosc. Spec. Anal.; 2016; 36, pp. 515-519.
41. Griffel, L.M.; Delparte, D.; Edwards, J. Using Support Vector Machines classification to differentiate spectral signatures of potato plants infected with Potato Virus Y. Comput. Electron. Agric.; 2018; 153, pp. 318-324. [DOI: https://dx.doi.org/10.1016/j.compag.2018.08.027]
42. Kang, F.; Li, J.; Wang, C.; Wang, F. A Lightweight Neural Network-Based Method for Identifying Early-Blight and Late-Blight Leaves of Potato. Appl. Sci.; 2023; 13, 1487. [DOI: https://dx.doi.org/10.3390/app13031487]
43. Shi, Y.; Han, L.; Kleerekoper, A.; Chang, S.; Hu, T. A Novel CropdocNet for Automated Potato Late Blight Disease Detection from the Unmanned Aerial Vehicle-based Hyperspectral Imagery. arXiv; 2021; arXiv:2107.13277. [DOI: https://dx.doi.org/10.3390/rs14020396]
44. Gao, J.; Westergaard, J.C.; Sundmark, E.; Bagge, M.; Alexandersson, E. Automatic late blight lesion recognition and severity quantification based on field imagery of diverse potato genotypes by deep learning. Knowl.-Based Syst.; 2021; 214, 106723. [DOI: https://dx.doi.org/10.1016/j.knosys.2020.106723]
45. Qi, C.; Sandroni, M.; Westergaard, J.C.; Sundmark, E.; Bagge, M.; Alexandersson, E.; Gao, J. In-field early disease recognition of potato late blight based on deep learning and proximal hyperspectral imaging. arXiv; 2021; arXiv:2111.12155. [DOI: https://dx.doi.org/10.2139/ssrn.4037959]
46. Chen, J.; Deng, X.; Wen, Y.; Chen, W.; Zeb, A.; Zhang, D. Weakly-supervised learning method for the recognition of potato leaf diseases. Artificial Intelligence Review; Springer Nature: Berlin/Heidelberg, Germany, 2022; pp. 1-18.
47. Chen, Y. Convolutional Neural Network for Sentence Classification. Master’s Thesis; University of Waterloo: Waterloo, ON, Canada, 2015.
48. Huang, Y.; Wang, Q.; Jia, W.; Lu, Y.; Li, Y.; He, X. See more than once: Kernel-sharing atrous convolution for semantic segmentation. Neurocomputing; 2021; 443, pp. 26-34. [DOI: https://dx.doi.org/10.1016/j.neucom.2021.02.091]
49. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv; 2017; arXiv:1706.05587.
50. Qiao, S.; Chen, L.C.; Yuille, A. Detectors: Detecting objects with recursive feature pyramid and switchable atrous convolution. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Nashville, TN, USA, 20–25 June 2021; pp. 10213-10224.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
As one of the world's most crucial crops, the potato is an essential source of nutrition for human activities. However, several diseases pose a severe threat to the yield and quality of potatoes. Timely and accurate detection and identification of potato diseases are therefore of great importance. Hyperspectral imaging has emerged as an essential tool that provides rich spectral and spatial distribution information and has been widely used in potato disease detection and identification. Nevertheless, the prediction accuracy is often low when hyperspectral data are processed with a one-dimensional convolutional neural network (1D-CNN), and conventional three-dimensional convolutional neural networks (3D-CNN) often require high hardware consumption when processing hyperspectral data. In this paper, we propose an Atrous-CNN network structure that fuses multiple dimensions to address these problems. The proposed structure combines the spectral information extracted by a 1D-CNN, the spatial information extracted by a 2D-CNN, and the spatial–spectral information extracted by a 3D-CNN. To enlarge the receptive field of the convolution kernel and reduce the loss of hyperspectral data, atrous convolution is used in the 1D-CNN and 2D-CNN to extract data features. We tested the proposed structure on three real-world potato diseases and achieved a recognition accuracy of up to 0.9987. The algorithm presented in this paper effectively extracts hyperspectral feature information using CNNs of three different dimensions, leading to higher recognition accuracy and reduced hardware consumption. Therefore, it is feasible to use the 1D-CNN network and hyperspectral image technology for potato plant disease identification.