1. Introduction
Tea farming is an ever-growing industrial sector with increasing production demand, as tea is the second most consumed beverage worldwide after water. India is the second largest producer of tea and produced around 1250 million kg in the year 2020 [1]. Tea foliar diseases are of great concern as they directly affect the harvest, and fungal diseases in particular have a huge impact on the quality and quantity of the produce. In particular, grey blight caused by Pestalotiopsis theae is one of the most widely reported diseases across all major tea-growing countries of the world [2]. Mechanical damage to plants incurred by the use of farming equipment initiates infection and disease development. The fungus attacks the maintenance leaves of the tea plant, which provide nourishment to the tender foliage, indirectly resulting in huge crop loss [3]. The grey blight symptoms appear in the middle part of the leaf as brown concentric spots, which later turn grey with brown margins and spread throughout the whole leaf. Detection and diagnosis of symptoms are crucial to controlling the spread of the disease and sustaining production. Tea cultivation regions are usually large and include mountainous terrain that is difficult to investigate on a routine basis. In tea plantations, conventional methods of disease detection have become ineffective as they rely on intensive manpower and highly specific instruments [4]. Moreover, incorrect diagnosis of the disease leads to inappropriate use of fungicides, adding to production costs and environmental pollution. These challenges in diagnosing grey blight infection on tea leaves using conventional techniques motivated the development of an automatic diagnosis technique.
Computer vision and machine learning techniques have been employed recently in a variety of crops to accurately diagnose diseases and pest attacks based on characteristic symptoms [5–7]. This approach relies on the extraction of features from the leaf images and their identification and classification using an artificial neural network (ANN) [8]. Deep learning, an advanced machine learning technique, which uses deep convolutional neural networks (DCNN) for crop disease identification, is gaining increased application due to its automatic feature extraction ability, accuracy, and robustness in detection [9–11]. For tea disease detection and classification, a few machine learning-based approaches have been employed with considerable performance [5, 12–15].
The major contributions of this research are as follows:
(i) The grey blight disease detection model for tea leaves was proposed using an improved deep convolutional neural network
(ii) Novel linear bottleneck layers and inverted residual connections were used to design the improved deep convolutional neural network
(iii) The tea grey blight disease dataset was created using 1320 original leaf images of tea crops, including grey blight diseased and nongrey blight diseased leaves
(iv) The dataset size was extended to 5280 images of tea leaves using standard image augmentation techniques such as principal component analysis (PCA) color augmentation, random rotations, random shifts, random flips, resizing, and rescaling
(v) Hyperparameters of the proposed deep convolutional neural network were optimized using the Bayes grid search technique
(vi) The proposed DCNN was trained on 5016 images of healthy, grey blight infected, and other diseased tea leaves
(vii) The performance of the proposed DCNN was estimated on test data of 264 tea leaf images using standard performance metrics such as accuracy, precision, recall, F measure, and misclassification rate
(viii) The performance of the proposed DCNN was superior to the recent plant leaf disease detection models and transfer learning techniques for tea grey blight disease detection
The forthcoming sections of the research article are organized as follows. Section 2 discusses the existing state-of-the-art techniques for leaf disease detection and highlights the significance of the research. Section 3 elaborates on the tea grey blight disease detection dataset preparation and proposed DCNN model development. In Section 4, the performance of the proposed DCNN model on tea grey blight disease detection is reviewed and compared with the performance of the advanced plant leaf disease detection models and existing transfer learning techniques. Finally, concluding remarks and future directions of the research are discussed in Section 5.
2. Related Works
Grey blight, a fungal disease caused by Pestalotiopsis-like species, is a widespread disease affecting tea crops in many tea-growing countries, including India, resulting in huge losses in tea production. The disease typically affects tea leaves from June to September in India. Initially, small brownish spots on the upper surface of the leaves enlarge slowly [16]. These spots may be of various sizes and shapes with an irregular outline. Later, the spot looks dark brown with a greyish appearance at the center and is surrounded by narrow concentric zonation at the leaf margin. The host range of the grey blight pathogen includes guava, strawberry, oil palm, kiwi fruit, mango, pine, and avocado. The authors in [17] found that this disease is caused by Pestalotiopsis, Neopestalotiopsis, and Pseudopestalotiopsis using multilocus DNA sequence-based identification.
This disease has caused around 17–46% crop loss in India and 10–20% yield loss in Japan [3, 18]. Grey blight has reduced tea quality and production by up to 50% in the major tea-growing regions of China and Taiwan [19, 20]. The disease is spreading extensively in the tea gardens of North Bengal in India and in other countries such as Korea, China, Kenya, Japan, and Sri Lanka. Advanced artificial intelligence techniques such as machine learning and deep learning have played a significant role in disease detection on the leaves of various plants, and DCNNs are the most successful plant disease detection techniques using leaf images [21, 26]. Table 1 compares the detection approaches based on machine learning techniques proposed in different articles.
Table 1
Comparison of existing tea leaf disease detection techniques.
Article | Year | Diseases | Number of classes | Number of images | Methodology | Accuracy (%) |
[24] | 2018 | Brown blight, blister blight, and algal leaf spot | 4 | 1223 | Custom DCNN | 81.08 |
[25] | 2019 | Anthracnose | 2 | 100 | Iterative self-organizing data analysis technique (ISODATA) | 98 |
[26] | 2019 | Leaf blight, bud blight, and red scab | 4 | 36 | Custom DCNN | 92.5 |
[27] | 2020 | Brown blight, blister blight, and leaf spot | 4 | 1822 | Faster region-based convolutional neural network (Faster RCNN) | 89.4 |
[28] | 2021 | Algal leaf spot, grey blight, white spot, brown blight, red scab, bud blight, and grey blight | 8 | 860 | Custom DCNN | 94.45 |
[29] | 2021 | Red rust, red spider, thrips, helopeltis, and sunlight scorching | 6 | 1000 | Principal component analysis (PCA) and multiclass support vector machine (SVM) | 83 |
[30] | 2021 | Leaf blight | 2 | 970 | Retinex algorithm and Faster RCNN | 84.45 |
[31] | 2021 | Brown blight, blister blight, and leaf spot | 4 | 4295 | Cascade RCNN (CRCNN) | 76.6 |
[32] | 2022 | Leaf spot, rhizome rot, powdery mildew, and leaf blotch | 5 | 630 | Hybrid filter and support vector machine | 92.84 |
[33] | 2022 | Red leaf spot, algal leaf spot, bird’s eyespot, grey blight, white spot, anthracnose, and brown blight | 8 | 885 | Improved RetinaNet | 93.83 |
[34] | 2022 | Blister blight | 2 | 60000 | Deep hashing with integrated autoencoders (DHIA) | 98.5 |
[35] | 2022 | White scab, leaf blight, red scab, and sooty mould | 5 | 634 | Custom DCNN and generative adversarial network (GAN) | 93.24 |
The extensive literature survey shows the significance of DCNN models in tea leaf disease detection. It also identified the following challenges faced by the existing techniques for tea grey blight disease detection. The first challenge concerns the visual symptoms: the symptoms of brown blight, white blight, and bud blight are similar to those of grey blight in tea leaves [13, 27–29], which leads disease detection models to misclassify these diseases. Second, only a small number of studies have considered diagnosing grey blight disease in tea crops [30, 31], even though grey blight is one of the most common yield-restricting diseases of tea crops in India. Finally, the existing techniques have not achieved significant performance in grey blight disease detection; at best, a tea leaf disease detection model achieved a classification accuracy of 94.45% [30]. These observations show the significance of proposing a novel approach to diagnosing grey blight disease with better performance than the existing works. The approach should also distinguish grey blight symptoms from those of brown blight, white blight, and bud blight. The development and training process of the proposed grey blight disease detection model is discussed in Section 3.
3. Materials and Methods
This section describes the proposed DCNN model and dataset for grey blight disease detection. Section 3.1 explains the grey blight disease dataset collection and preparation. The construction and training process of the proposed DCNN model is explained in Section 3.2.
3.1. Data Collection and Preparation
In the present study, tea gardens located in North Bengal, India, were visited during 2020-2021 to examine the disease pattern of grey blight. Almost all of the 27 tea gardens visited were infected with grey blight disease, with varying symptoms. Images of the symptoms were captured using a Canon digital single-lens reflex camera. A total of 1320 images of healthy, grey blight diseased, and other diseased leaves were captured to prepare the tea grey blight disease dataset. Figure 1 shows sample leaf images from the captured data.
[figure(s) omitted; refer to PDF]
Data augmentation techniques such as principal component analysis (PCA) color augmentation, random rotations, random shifts, random flips, resizing, and rescaling were used to create 3960 additional images for the tea grey blight disease dataset. PCA is an unsupervised machine learning technique generally used for dimensionality reduction; recently, PCA-based techniques have also been used for augmentation in various image classification applications [32]. The PCA color-augmented versions of sample images from the tea grey blight disease dataset are shown in Figure 2.
[figure(s) omitted; refer to PDF]
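The article does not give implementation details for the PCA color augmentation step. The sketch below follows the common AlexNet-style formulation under that assumption; the noise scale sigma and the [0, 1] pixel range are assumptions, not values taken from the paper.

```python
import numpy as np

def pca_color_augment(image: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """AlexNet-style PCA color augmentation (a sketch, not the authors'
    exact procedure). `image` is an H x W x 3 array with values in [0, 1]."""
    pixels = image.reshape(-1, 3)
    # 3 x 3 covariance of the RGB channels across all pixels.
    cov = np.cov(pixels - pixels.mean(axis=0), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # principal components of color
    alphas = np.random.normal(0.0, sigma, 3)    # random per-component weights
    shift = eigvecs @ (alphas * eigvals)        # color shift along the PCs
    return np.clip(image + shift, 0.0, 1.0)     # same shift for every pixel
```

The shift changes the overall color balance while leaving edges and lesion shapes intact, which is why this style of augmentation suits leaf disease datasets.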
The augmented images were used to increase the amount of data and to balance the number of images in each class of the tea grey blight dataset. The augmented data were added only to the training and validation datasets. The dataset was split into training, validation, and testing subsets as illustrated in Table 2 (a loading sketch follows the table).
Table 2
Data split for training, validation, and testing.
Dataset | Number of images |
Training dataset | 4752 |
Validation dataset | 264 |
Test dataset | 264 |
Total images | 5280 |
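As a minimal sketch, the split in Table 2 could be loaded with tf.keras as below. The directory layout and folder names are hypothetical; the 224 × 224 input size, batch size of 32, and rescaling follow the text.

```python
import tensorflow as tf

# Assumed layout: dataset/{train,val,test}/{healthy,grey_blight,other_disease}/
IMG_SIZE = (224, 224)  # input size used by the proposed DCNN
BATCH = 32             # batch size selected by the Bayes grid search

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=BATCH,
    label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, batch_size=BATCH,
    label_mode="categorical")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/test", image_size=IMG_SIZE, batch_size=BATCH,
    label_mode="categorical", shuffle=False)

# Rescale pixel values to [0, 1], matching the rescaling step above.
rescale = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (rescale(x), y))
val_ds = val_ds.map(lambda x, y: (rescale(x), y))
test_ds = test_ds.map(lambda x, y: (rescale(x), y))
```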
The training and validation datasets were used for the training and validation of the proposed DCNN and the standard transfer learning techniques. Each class in the training data consists of 1584 tea leaf images. The test data were used to evaluate the performance of the proposed DCNN and the existing transfer learning techniques; the test dataset contains only originally collected data. The subsequent subsection explains the layered architecture and training process of the proposed DCNN model for the tea grey blight detection task.
3.2. Classification Model Design and Training
The proposed DCNN model consists of a sequence of 13 layers. Its design was inspired by the architectures of MobileNet and VGG19Net [33, 34]: it uses the inverted residual connections and bottleneck layers from MobileNet, and the convolutional layer pairs and downsampling process from VGG19Net. Figure 3 shows the layered architecture of the proposed DCNN model.
[figure(s) omitted; refer to PDF]
The first layer in the proposed DCNN model, the input layer, resizes the input image to 224 × 224 × 3 pixels. The resized image is forwarded to a pair of two-dimensional convolutional (Conv2D) layers with 128 filters, a 3 × 3 kernel, a 1 × 1 stride, and a ReLU activation function. A max-pooling layer was introduced as the fourth layer of the proposed DCNN model to downsample the convolutional layer output; it uses a 2 × 2 kernel with 1 × 1 strides. The downsampled data were forwarded to a sequence of three bottleneck blocks. Figure 4 illustrates the internal layers of the bottleneck block.
[figure(s) omitted; refer to PDF]
Each bottleneck block consists of four internal layers: a convolutional (Conv) layer, a depthwise convolutional (Dwise Conv) layer, a linear convolutional layer, and an adder (Add) layer. The Conv layer performs the convolution operation to extract feature information from the output of the previous layer. The Dwise Conv layer applies a single filter per channel to the Conv layer output and therefore requires far less computation than a traditional Conv layer. The linear Conv layer, introduced after the Dwise Conv layer in the bottleneck block, implements the convolution operation with a linear activation function. The Add layer combines the output of the linear Conv layer with the input of the current bottleneck block. All the Conv layers in the bottleneck block use a 3 × 3 kernel and a 1 × 1 stride, with the ReLU activation function (except the linear Conv layer, which uses a linear activation).
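A minimal Keras sketch of such a bottleneck block is given below. The kernel sizes, strides, and activations follow the description above; the channel expansion factor is an assumption, since the text does not state the internal filter counts.

```python
from tensorflow.keras import layers

def bottleneck_block(x, expansion=4):
    """Inverted residual bottleneck block (a sketch of the block
    described above; `expansion` is an assumed value)."""
    in_channels = x.shape[-1]
    shortcut = x
    # Standard convolution: extracts features and expands channels.
    y = layers.Conv2D(in_channels * expansion, 3, strides=1,
                      padding="same", activation="relu")(x)
    # Depthwise convolution: one filter per channel, far cheaper
    # than a standard convolution over all channels.
    y = layers.DepthwiseConv2D(3, strides=1, padding="same",
                               activation="relu")(y)
    # Linear convolution: projects back to the input width with a
    # linear activation, preserving the low-dimensional features.
    y = layers.Conv2D(in_channels, 3, strides=1, padding="same",
                      activation="linear")(y)
    # Inverted residual connection: add the block input back in.
    return layers.Add()([shortcut, y])
```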
The output of bottleneck block 3 was then forwarded to a second pair of Conv layers, which perform the convolution operation with 64 filters, a 3 × 3 kernel, and a 1 × 1 stride. A max-pooling layer with a 2 × 2 pool and a 1 × 1 stride was introduced after this pair of Conv layers to perform downsampling. The downsampled data were forwarded to a sequence of three fully connected (dense) layers with 1024, 512, and 3 units, respectively. The final dense layer classifies the input data into three classes using the softmax activation function. The Bayes grid search technique was used to optimize the hyperparameter values of the proposed DCNN model; it identified an optimized batch size of 32, categorical cross-entropy as the loss function, Adam as the optimizer, and a learning rate of 0.001. The proposed DCNN was trained on the grey blight dataset with the optimized hyperparameters for up to 1000 epochs. The training progress of the proposed DCNN model is shown in Figure 5.
[figure(s) omitted; refer to PDF]
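Putting the pieces together, a sketch of the full model using the bottleneck_block function above is shown below. The filter counts, kernel sizes, strides, and reported hyperparameters follow the text; the Flatten step before the dense layers is an assumption, as the paper does not state how the feature maps are vectorized.

```python
import tensorflow as tf
from tensorflow.keras import Input, Model, layers

def build_dcnn(num_classes=3):
    inp = Input(shape=(224, 224, 3))                      # 1: input layer
    x = layers.Conv2D(128, 3, strides=1, padding="same",
                      activation="relu")(inp)             # 2-3: Conv2D pair
    x = layers.Conv2D(128, 3, strides=1, padding="same",
                      activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2, strides=1)(x)    # 4: max pooling
    for _ in range(3):                                    # 5-7: bottleneck blocks
        x = bottleneck_block(x)
    x = layers.Conv2D(64, 3, strides=1, padding="same",
                      activation="relu")(x)               # 8-9: Conv2D pair
    x = layers.Conv2D(64, 3, strides=1, padding="same",
                      activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2, strides=1)(x)    # 10: max pooling
    x = layers.Flatten()(x)                               # assumed before dense
    x = layers.Dense(1024, activation="relu")(x)          # 11-13: dense layers
    x = layers.Dense(512, activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inp, out)

model = build_dcnn()
# Hyperparameters reported from the Bayes grid search.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds, epochs=1000)
model.save("tea_grey_blight_dcnn.h5")   # illustrative filename
```

Note that the 1 × 1 pooling strides follow the text as written; with them the spatial size shrinks only slightly, so a practical reimplementation may prefer stride-2 pooling to keep the dense layers tractable.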
The proposed DCNN model achieved a training accuracy of 99.53% and a training loss of 0.042 on the final training epoch. The model performance was also validated using the validation dataset. Figure 6 illustrates the epoch-wise validation performance of the proposed DCNN model on the grey blight disease dataset.
[figure(s) omitted; refer to PDF]
The proposed DCNN model achieved a validation accuracy of 99.27% and a validation loss of 0.096 on the final training epoch. The trained DCNN model architecture and weights were stored for the testing and deployment processes. Section 4 discusses the test performance of the proposed DCNN model on the test dataset and compares it with recent transfer learning techniques.
4. Results and Discussion
This section compares the performance of the proposed DCNN model with recent transfer learning techniques for grey blight disease detection: AlexNet, DenseNet201, InceptionV3Net, MobileNetV2, NASNet Large, ResNet152, VGG19Net, and XceptionNet. A total of 264 tea leaf images were used for testing the performance of the proposed and existing models. Figure 7 shows the code-generated confusion matrix of the proposed DCNN on the test dataset.
[figure(s) omitted; refer to PDF]
The test outcomes of the proposed DCNN as true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are shown in Table 3. The code-generated confusion matrices and scores of the existing transfer learning techniques are included in the Supplementary Materials.
Table 3
Confusion matrix score of DCNN.
Class | True positive | True negative | False positive | False negative |
Healthy | 87 | 173 | 3 | 1 |
Grey blight | 87 | 176 | 0 | 1 |
Other disease | 86 | 175 | 1 | 2 |
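The per-class TP, TN, FP, and FN values follow mechanically from the 3 × 3 confusion matrix. The sketch below derives them with NumPy; the off-diagonal entries shown are one arrangement consistent with the Table 3 counts, since the full matrix appears only in Figure 7.

```python
import numpy as np

# Rows = true class, columns = predicted class, in the order
# [healthy, grey blight, other disease]. Off-diagonal values are one
# arrangement consistent with Table 3, not the published matrix.
cm = np.array([[87, 0, 1],
               [1, 87, 0],
               [2, 0, 86]])

tp = np.diag(cm)                 # correctly classified per class
fp = cm.sum(axis=0) - tp         # predicted as the class, but wrongly
fn = cm.sum(axis=1) - tp         # the class, predicted as something else
tn = cm.sum() - (tp + fp + fn)   # everything else

for name, scores in zip(["Healthy", "Grey blight", "Other disease"],
                        zip(tp, tn, fp, fn)):
    print(name, "TP/TN/FP/FN =", scores)
# Healthy TP/TN/FP/FN = (87, 173, 3, 1)  -> matches Table 3
```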
Classification accuracy, precision, recall, F measure, misclassification rate, and receiver operating characteristic (ROC) curve were used as metrics for estimating the performance of the grey blight leaf disease detection models in tea leaves. The TP, TN, FP, and FN were used to calculate the performance metric scores. Equations (1–5) show the standard formulas to calculate the accuracy, precision, recall, F measure, and misclassification rate [35]:
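With TP, TN, FP, and FN as defined above, these are the standard definitions:

```latex
\begin{align}
\text{Accuracy} &= \frac{TP + TN}{TP + TN + FP + FN} \tag{1}\\
\text{Precision} &= \frac{TP}{TP + FP} \tag{2}\\
\text{Recall} &= \frac{TP}{TP + FN} \tag{3}\\
F\ \text{measure} &= \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{4}\\
\text{Misclassification rate} &= \frac{FP + FN}{TP + TN + FP + FN} \tag{5}
\end{align}
```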
Table 4 illustrates the class-wise performance metrics score and weighted average performance score of the proposed DCNN model on test data.
Table 4
Class-wise performance of DCNN.
Class | Accuracy (%) | Precision (%) | Recall (%) | F measure (%) | Misclassification rate (%) |
Healthy | 98.48 | 96.67 | 98.86 | 97.75 | 1.52 |
Grey blight | 99.62 | 100 | 98.86 | 99.43 | 0.38 |
Other diseases | 98.86 | 98.85 | 97.73 | 98.29 | 1.14 |
Average | 98.99 | 98.51 | 98.48 | 98.49 | 1.01 |
The average performance metric scores of the existing transfer learning techniques are given in the Supplementary Materials. These scores were compared with those of the proposed model. First, the average classification accuracy of the proposed DCNN model was compared with that of the transfer learning techniques, and the result is illustrated in Figure 8.
[figure(s) omitted; refer to PDF]
The classification accuracy comparison shows that the proposed DCNN model achieved better accuracy than the AlexNet, DenseNet201, InceptionV3Net, MobileNetV2, NASNet Large, ResNet152, VGG19Net, and XceptionNet models. The proposed DCNN model achieved a classification accuracy of 98.99% on the test data, which is 4.27% higher than that of the second-best model, DenseNet201. The proposed DCNN model was also trained on the original and augmented datasets separately, and the test accuracies of the two resulting models are compared in Figure 9. The comparison shows that the augmented dataset increased the performance of the proposed DCNN model by 16.27% over the model trained on the original dataset alone.
[figure(s) omitted; refer to PDF]
Similarly, the average precision of the proposed DCNN model is compared with that of the transfer learning techniques in Figure 10. The proposed DCNN model achieved an average precision of 98.51% on the grey blight leaf disease dataset, 6.43% higher than the DenseNet201 model and much higher than the other transfer learning techniques.
[figure(s) omitted; refer to PDF]
In addition, the proposed DCNN model achieved a recall score of 98.48% on the grey blight test dataset. The comparison of the recall score of the proposed DCNN and existing techniques is shown in Figure 11. The comparison result illustrates that the proposed DCNN model achieved a better recall score than the AlexNet, DenseNet201, InceptionV3Net, MobileNetV2, NASNet Large, ResNet152, VGG19Net, and XceptionNet models.
[figure(s) omitted; refer to PDF]
Furthermore, the F measures of the proposed DCNN and existing models on the grey blight dataset are illustrated in Figure 12. The proposed DCNN model achieved an F measure of 98.49% on the test data, better than the transfer learning techniques such as AlexNet, DenseNet201, InceptionV3Net, MobileNetV2, NASNet Large, ResNet152, VGG19Net, and XceptionNet, and 6.44% higher than the DenseNet201 model.
[figure(s) omitted; refer to PDF]
The misclassification rate is another important performance metric; it gives the percentage of samples incorrectly classified by a model. Figure 13 shows the misclassification rates of the proposed DCNN and existing models on the test data. The proposed DCNN model reached a misclassification rate of 1.01% on the grey blight dataset, much lower than that of the other techniques.
[figure(s) omitted; refer to PDF]
The receiver operating characteristic (ROC) curve represents the performance of a classification model across all classification thresholds [36]. The ROC curves of the proposed and transfer learning techniques for the individual classes in the dataset are shown in Figure 14. The area under the ROC curve of the proposed DCNN on the grey blight disease class was 97%, and the comparison graph shows that it was higher than that of the transfer learning techniques.
[figure(s) omitted; refer to PDF]
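A sketch of how such one-vs-rest ROC curves can be produced with scikit-learn follows; y_true and y_score are placeholders for the one-hot test labels and the model's softmax outputs (e.g., y_score = model.predict(test_ds)).

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc_per_class(y_true, y_score, class_names):
    """One-vs-rest ROC curve and AUC for each class.

    y_true:  (n_samples, n_classes) one-hot ground-truth labels
    y_score: (n_samples, n_classes) predicted class probabilities
    """
    for c, name in enumerate(class_names):
        fpr, tpr, _ = roc_curve(y_true[:, c], y_score[:, c])
        plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")
    plt.plot([0, 1], [0, 1], "k--", label="chance")  # diagonal reference
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()

# plot_roc_per_class(y_true, y_score,
#                    ["Healthy", "Grey blight", "Other disease"])
```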
The comparison results show that the classification accuracy, precision, recall, F measure, and ROC of the proposed DCNN model were superior to those of recent transfer learning techniques such as AlexNet, DenseNet201, InceptionV3Net, MobileNetV2, NASNet Large, ResNet152, VGG19Net, and XceptionNet.
Similarly, the classification performance of the proposed DCNN model was compared with the existing state-of-the-art tea leaf disease detection models such as Improved_Deep_CNN [37], AX-RetinaNet [31], and MergeModel [38]. The existing models were trained and tested on the tea grey blight disease dataset. Figure 15 shows the performance comparison of the proposed and existing models on grey blight disease detection in tea leaves.
[figure(s) omitted; refer to PDF]
The comparison result illustrates that the proposed DCNN model performed better than the existing tea leaf disease detection models on the grey blight disease detection task. The confusion matrices and class-wise performance of the existing techniques are given in the Supplementary Materials. The subsequent section discusses the conclusions and future directions of the research on grey blight disease detection in tea crops.
5. Conclusions
Grey blight is one of the most yield-affecting diseases of tea crops worldwide. Tea plantation regions generally cover large areas, often in mountainous terrain, which makes manual diagnosis of the disease across an entire plantation difficult. This article proposed a novel deep convolutional neural network (DCNN) for the automatic diagnosis of grey blight disease in tea crops. Tea leaf data were collected from the North Bengal region of India for the training and testing of the DCNN model. Data augmentation techniques, namely principal component analysis (PCA) color augmentation, random rotations, random shifts, random flips, resizing, and rescaling, were used to increase the number of samples in the training dataset. The dataset consists of three classes: grey blight, healthy, and other diseases. A total of 4752 images were used for training and 264 images for validation of the proposed DCNN model. The model was trained and validated on the dataset for 1000 epochs and achieved an accuracy of 99.27% on the validation data. After training, the performance of the DCNN model was tested using a test set of 264 images. The test performance of the proposed model was compared with AlexNet, DenseNet201, InceptionV3Net, MobileNetV2, NASNet Large, ResNet152, VGG19Net, and XceptionNet, as well as with existing state-of-the-art tea leaf disease detection models. The comparison shows that the classification performance of the proposed DCNN was superior to the transfer learning techniques and the existing tea leaf disease detection models for grey blight detection in tea crops. The DCNN model achieved a classification accuracy, precision, recall, and F measure of 98.99%, 98.51%, 98.48%, and 98.49%, respectively, with a misclassification rate of only 1.01%, lower than that of the other techniques. In the future, the proposed model will be deployed on an unmanned aerial vehicle for diagnosing grey blight disease over larger areas and mountainous terrain. Diagnosing other yield-limiting diseases of tea crops and further enhancing the classification performance of the proposed model are also future directions of this research.
Acknowledgments
The authors thankfully acknowledge Dr. A Babu, the Director of the Tea Research Association, Tocklai Tea Research Institute, Assam, India, for his continuous support and encouragement.
References
[1] S. Sen, M. Rai, D. Das, S. Chandra, K. Acharya, "Blister blight a threatened problem in tea industry: a review," Journal of King Saud University Science, vol. 32, no. 8, pp. 3265-3272, DOI: 10.1016/j.jksus.2020.09.008, 2020.
[2] N. Muraleedharan, Z. M. Chen, "Pests and diseases of tea and their management," Journal of Plantation Crops, vol. 25, pp. 15-43, 1997.
[3] S. D. Joshi, R. Sanjay, U. I. Baby, A. K. A. Mandal, "Molecular characterization of Pestalotiopsis spp. associated with tea (Camellia sinensis) in southern India using RAPD and ISSR markers," Indian Journal of Biotechnology, vol. 8, pp. 377-383, 2009.
[4] J. Chen, J. Jia, "Automatic recognition of tea diseases based on deep learning," Advances in Forest Management under Global Change, DOI: 10.5772/intechopen.91953, 2020.
[5] S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk, D. Stefanovic, "Deep neural networks based recognition of plant diseases by leaf image classification," Computational Intelligence and Neuroscience, vol. 2016, DOI: 10.1155/2016/3289801, 2016.
[6] J. A. Pandian, V. D. Kumar, O. Geman, M. Hnatiuc, M. Arif, K. Kanchanadevi, "Plant disease detection using deep convolutional neural network," Applied Sciences, vol. 12, no. 14, article 6982, DOI: 10.3390/app12146982, 2022.
[7] J. A. Pandian, K. Kanchanadevi, V. D. Kumar, E. Jasinska, R. Gono, Z. Leonowicz, M. Jasinski, "A five convolutional layer deep convolutional neural network for plant leaf disease detection," Electronics, vol. 11, no. 8, DOI: 10.3390/electronics11081266, 2022.
[8] B. Chandra Karmokar, M. Samawat Ullah, M. Kibria Siddiquee, K. Rokibul Alam, "Tea leaf diseases recognition using neural network ensemble," International Journal of Computer Applications, vol. 114, no. 17, pp. 27-30, DOI: 10.5120/20071-1993, 2015.
[9] R. S. Latha, G. R. Sreekanth, R. C. Suganthe, R. Rajadevi, "Automatic detection of tea leaf diseases using deep convolution neural network," Proceedings of the 2021 International Conference on Computer Communication and Informatics (ICCCI), DOI: 10.1109/ICCCI50826.2021.9402225, 2021.
[10] J. Arun Pandian, K. Kanchanadevi, "An improved deep convolutional neural network for detecting plant leaf diseases," Concurrency and Computation: Practice and Experience, vol. 34, no. 28, DOI: 10.1002/cpe.7357, 2022.
[11] A. Pandian, N. R. Rajalakshmi, G. Arulkumaran, "An improved deep residual convolutional neural network for plant leaf disease detection," Computational Intelligence and Neuroscience, vol. 2022, DOI: 10.1155/2022/5102290, 2022.
[12] J. Chen, Q. He, "Tea disease spot recognition based on image feature points extraction and matching," Global NEST Journal, vol. 22, no. 4, pp. 492-501, DOI: 10.30955/gnj.003375, 2020.
[13] S.-H. Lee, S.-R. Lin, S.-F. Chen, "Identification of tea foliar diseases and pest damage under practical field conditions using a convolutional neural network," Plant Pathology, vol. 69, no. 9, pp. 1731-1739, DOI: 10.1111/ppa.13251, 2020.
[14] J. Chen, Q. Liu, L. Gao, "Visual tea leaf disease recognition using a convolutional neural network model," Symmetry, vol. 11, no. 3, DOI: 10.3390/sym11030343, 2019.
[15] X. Zou, Q. Ren, H. Cao, Y. Qian, S. Zhang, "Identification of tea diseases based on spectral reflectance and machine learning," Journal of Information Processing Systems, vol. 16, no. 2, pp. 435-446, DOI: 10.3745/JIPS.02.0133, 2020.
[16] Y. J. Chen, L. Zeng, Q. Meng, H. R. Tong, "Occurrence of Pestalotiopsis lushanensis causing grey blight disease on Camellia sinensis in China," Plant Disease, vol. 102, no. 12, DOI: 10.1094/pdis-04-18-0640-pdn, 2018.
[17] S. S. N. Maharachchikumbura, K. D. Hyde, J. Z. Groenewald, J. Xu, P. W. Crous, "Pestalotiopsis revisited," Studies in Mycology, vol. 79, no. 1, pp. 121-186, DOI: 10.1016/j.simyco.2014.09.005, 2014.
[18] T. Horikawa, "Yield loss of new tea shoots due to tea gray blight caused by Pestalotia longiseta Spegazzini," Bulletin of the Shizuoka Tea Experiment Station, vol. 12, 1986.
[19] F. Liu, L. Hou, M. Raza, L. Cai, "Pestalotiopsis and allied genera from Camellia, with description of 11 new species from China," Scientific Reports, vol. 7, no. 1, DOI: 10.1038/s41598-017-00972-5, 2017.
[20] I. Tsai, C. L. Chung, S. R. Lin, T. H. Hung, T. L. Shen, C. Y. Hu, W. N. Hozzein, H. A. Ariyawansa, "Cryptic diversity, molecular systematics, and pathogenicity of genus Pestalotiopsis and allied genera causing gray blight disease of tea in Taiwan, with a description of a new Pseudopestalotiopsis species," Plant Disease, vol. 105, no. 2, pp. 425-443, DOI: 10.1094/pdis-05-20-1134-re, 2021.
[21] P. Deepalakshmi, T. Prudhvi Krishna, S. Siri Chandana, K. Lavanya, P. N. Srinivasu, "Plant leaf disease detection using CNN algorithm," International Journal of Information System Modeling and Design, vol. 12, no. 1, DOI: 10.4018/ijismd.2021010101, 2021.
[22] V. S. Dhaka, S. V. Meena, G. Rani, D. Sinwar, K. Kavita, M. F. Ijaz, M. Wozniak, "A survey of deep convolutional neural networks applied for prediction of plant leaf diseases," Sensors, vol. 21, no. 14, DOI: 10.3390/s21144749, 2021.
[23] L. Yuan, P. Yan, W. Han, Y. Huang, B. Wang, J. Zhang, H. Zhang, Z. Bao, "Detection of anthracnose in tea plants based on hyperspectral imaging," Computers and Electronics in Agriculture, vol. 167, DOI: 10.1016/j.compag.2019.105039, 2019.
[24] S. Mukhopadhyay, M. Paul, R. Pal, D. De, "Tea leaf disease detection using multi-objective image segmentation," Multimedia Tools and Applications, vol. 80, no. 1, pp. 753-771, DOI: 10.1007/s11042-020-09567-1, 2021.
[25] G. Hu, H. Wang, Y. Zhang, M. Wan, "Detection and severity analysis of tea leaf blight based on deep learning," Computers & Electrical Engineering, vol. 90, DOI: 10.1016/j.compeleceng.2021.107023, 2021.
[26] S. Prabu, B. R. T. Bapu, S. Sridhar, V. Nagaraju, "Tea plant leaf disease identification using hybrid filter and support vector machine classifier technique," Recent Advances in Internet of Things and Machine Learning: Real-World Applications, pp. 117-128, 2022.
[27] S.-H. Lee, C.-C. Wu, S.-F. Chen, "Development of image recognition and classification algorithm for tea leaf diseases using convolutional neural network," ASABE Annual International Meeting, vol. 1, 2018.
[28] X. M. Chen, C. C. Lin, S. R. Lin, S.-F. Chen, "Application of region-based convolution neural network on tea diseases and harming insects identification," American Society of Agricultural and Biological Engineers (ASABE) Annual International Meeting, vol. 4, pp. 2193-2199, 2021.
[29] S. Krishnan Jayapal, S. Poruran, "Enhanced disease identification model for tea plant using deep learning," Intelligent Automation & Soft Computing, vol. 35, no. 1, pp. 1261-1275, DOI: 10.32604/iasc.2023.026564, 2023.
[30] R. S. Latha, G. R. Sreekanth, R. C. Suganthe, R. Rajadevi, "Automatic detection of tea leaf diseases using deep convolution neural network," Proceedings of the 2021 International Conference on Computer Communication and Informatics (ICCCI), 2021.
[31] W. Bao, T. Fan, G. Hu, D. Liang, H. Li, "Detection and identification of tea leaf diseases based on AX-RetinaNet," Scientific Reports, vol. 12, no. 1, DOI: 10.1038/s41598-022-06181-z, 2022.
[32] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, T.-H. Ma, "Patch-based principal component analysis for face recognition," Computational Intelligence and Neuroscience, vol. 2017, DOI: 10.1155/2017/5317850, 2017.
[33] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L.-C. Chen, "MobileNetV2: inverted residuals and linear bottlenecks," Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510-4520, 2018.
[34] K. Simonyan, A. Zisserman, "Very deep convolutional networks for large-scale image recognition," Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), 2015.
[35] G. Geetharamani, J. Arun Pandian, "Identification of plant leaf diseases using a nine-layer deep convolutional neural network," Computers & Electrical Engineering, vol. 76, pp. 323-338, DOI: 10.1016/j.compeleceng.2019.04.011, 2019.
[36] A. Vulli, P. N. Srinivasu, M. S. K. Sashank, J. Shafi, J. Choi, M. F. Ijaz, "Fine-tuned DenseNet-169 for breast cancer metastasis prediction using FastAI and 1-cycle policy," Sensors, vol. 22, no. 8, DOI: 10.3390/s22082988, 2022.
[37] G. Hu, X. Yang, Y. Zhang, M. Wan, "Identification of tea leaf diseases by using an improved deep convolutional neural network," Sustainable Computing: Informatics and Systems, vol. 24, DOI: 10.1016/j.suscom.2019.100353, 2019.
[38] G. Hu, M. Fang, "Using a multi-convolutional neural network to automatically identify small-sample tea leaf diseases," Sustainable Computing: Informatics and Systems, vol. 35, DOI: 10.1016/j.suscom.2022.100696, 2022.
Copyright © 2023 J. Arun Pandian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
We proposed a novel deep convolutional neural network (DCNN) using inverted residuals and linear bottleneck layers for diagnosing grey blight disease on tea leaves. The proposed DCNN consists of three bottleneck blocks, two pairs of convolutional (Conv) layers, and three dense layers. The bottleneck blocks contain depthwise, standard, and linear convolution layers. A single-lens reflex digital camera was used to collect 1320 images of tea leaves from the North Bengal region of India to prepare the tea grey blight disease dataset. The nongrey blight tea leaf images in the dataset were categorized into two subclasses: healthy and other diseased leaves. Image transformation techniques such as principal component analysis (PCA) color augmentation, random rotations, random shifts, random flips, resizing, and rescaling were used to generate augmented images of tea leaves, enlarging the dataset from 1320 to 5280 images. The proposed DCNN model was trained and validated on 5016 images of healthy, grey blight infected, and other diseased tea leaves. The classification performance of the proposed and existing state-of-the-art techniques was tested using 264 tea leaf images. The classification accuracy, precision, recall, F measure, and misclassification rate of the proposed DCNN are 98.99%, 98.51%, 98.48%, 98.49%, and 1.01%, respectively, on the test data. The test results show that the proposed DCNN model performed better than the existing techniques for tea grey blight disease detection.
Affiliations
1 School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
2 Department of Bio-Technology, Vel Tech Rangarajan Dr.Sagunthala R&D Institute of Science and Technology, Chennai, India
3 Department of Computer Science & Engineering, Vel Tech Rangarajan Dr.Sagunthala R&D Institute of Science and Technology, Chennai, India
4 Department of Mycology & Microbiology, Tea Research Association, North Bengal Regional R & D Center, Nagrakata-735225, Jalpaiguri, West Bengal, India
5 Department of Computer Science and Engineering, American International University, Dhaka, Bangladesh