1. Introduction
In Rwanda, agriculture accounts for a third of the gross domestic product (GDP) and provides most jobs (approximately 80%) [1]. It is also a significant source of export value, particularly through tea and coffee production, which accounts for more than 20% of Rwanda’s overall exports by value across all sectors: more than $100 million per year [2].
Coffee is a $60 million industry in Rwanda that is primarily supplied by small-holder growers across the country’s several agroecological zones. The estimated 350,000 farmers whose livelihoods depend on growing coffee, along with the wider supply chain, face jeopardy [3]. The government has therefore made the future development of this export cash crop a top priority. Among the coffee varieties grown in Rwanda, coffee arabica is the one that has shown promising resistance to climate change.
Small-scale farmers are primarily responsible for cultivating coffee, using farming methods based on fragmented land and numerous small plots spread across hilly areas. Farmers typically own around two to six plots, depending on the number of coffee trees in each plot. Because these plots are scattered and lie far from the farmers’ homes, plant and land management activities are carried out less frequently. In addition, intercropping other crops with coffee, together with the separation of small farms, contributes to the spread of coffee leaf diseases. To rearrange land usage patterns, the Ministry of Agriculture and Animal Resources is executing a land consolidation policy. Beyond land management policies, smallholder farmers working individually are encouraged to organize into cooperatives. This helps them obtain support from government agencies, such as training and other inputs that improve the quality of coffee production [3].
It has been reported that coffee is one of the crops at risk from climate change and the spread of diseases and pest infestations [4]. These conditions arise from a variety of fungal species and other causes. The disease-causing agents, present on the leaves or other parts of the tree, are highly transmissible and can spread rapidly if not promptly addressed. According to one study, the destructive consequences of plant infections and infestations currently affect approximately 10% of the global plant economy [5].
Coffee farmers in Rwanda, like those in other regions, face continuous threats from various pests and diseases [6]. While some of these problems are minor and have a limited impact on crop yield and quality, others, such as coffee berry disease, coffee leaf rust, and coffee wilt disease (tracheomycosis), pose significant dangers. These serious diseases can not only affect individual farmers, but also have a major economic impact on countries or regions heavily reliant on coffee for foreign exchange earnings [7]. For instance, coffee wilt disease has been present in Africa since the 1920s, but since the 1990s, there have been widespread and recurring outbreaks. This results in substantial losses in countries such as Uganda, where over 14 million coffee trees have been destroyed, as well as in the Democratic Republic of Congo [8,9]. Once this disease takes hold on a farm, it becomes extremely challenging to control. Since coffee is a perennial crop, certain pests and diseases can survive and multiply throughout the growing season, continuously affecting the coffee plants, although their populations and impact may vary over time [10]. Other pests and diseases may only attack coffee during periods when conditions are favorable. Regardless, the damage caused by these pests and diseases can be significant, affecting both crop yield and quality [11].
Some pests and diseases, such as the white coffee stem borer, coffee wilt disease, parasitic nematodes, and root mealy bugs, could kill coffee plants outright. On the other hand, pests, such as the coffee berry borer, green scales, leaf rust, and brown eye spot, may not directly kill the plants but can severely hinder their growth by causing defoliation, ultimately impacting the quality of the coffee berries [12].
The process of diagnosing plant diseases is complex and entails tasks such as analyzing symptoms, recognizing patterns, and conducting various tests on leaves. These procedures require significant time, resources, and skills to complete [13]. In many instances, an incorrect diagnosis can result in plants developing immunity or reduced susceptibility to treatment. The intricacy of plant disease diagnosis has led to a decrease in both the quantity and quality of crop yields among farmers [14].
This drawn-out process frequently results in widespread infection with significant losses [15]. According to scientists, coffee, one of the most popular drinks in the world, might go extinct without conservation, monitoring, and seed preservation measures; global warming, deforestation, disease, and pests all contribute to the decline [16]. By implementing effective crop protection systems, early monitoring and accurate diagnosis of crop diseases can be achieved, which, in turn, can help prevent losses in production quality.
Recognizing the various types of coffee plant diseases is of utmost significance and is deemed a critical concern. Timely detection of coffee plant diseases can lead to improved decision-making in agricultural production management. Infected coffee plants typically exhibit noticeable marks or spots on their stems, fruits, leaves, or flowers. Importantly, each infection and pest infestation leaves distinct patterns on the leaf that can be utilized for diagnosing abnormalities. The identification of plant diseases necessitates expertise and human resources. Moreover, the process of manually examining and identifying the type of plant infection is subjective and time-consuming. Additionally, the disease identified by farmers or experts can at times be misleading [17]. As a result, an inappropriate pesticide or treatment might be applied during the evaluation of plant diseases, ultimately leading to a decline in crop quality and potentially causing environmental pollution.
The application of computer vision and artificial intelligence (AI) technologies has proven instrumental in combating plant diseases [18,19,20]. Multiple methods are available to address the problem of detecting plant infections with the help of these technologies, as the initial signs of infection manifest as various spots and patterns on leaves [21]. The introduction of machine learning and deep learning techniques has led to significant advancements in plant disease recognition, revolutionizing research in this field. These techniques have enabled automatic classification and feature extraction, allowing the representation of original image characteristics. Moreover, the availability of datasets, GPU machines, and software supporting complex deep learning architectures with reduced complexity has made the transition from traditional methods to deep learning platforms feasible. Convolutional neural networks (CNNs) have gained particularly widespread attention due to their remarkable capabilities in recognition and classification. CNNs excel at extracting intricate low-level features from images, making them a preferred choice for replacing traditional methods in automated plant disease recognition and yielding improved outcomes [22].
The research problem stems from the numerous efforts of government agencies and farmers to detect coffee diseases using manual methods. In addition, considerable money is spent training farmers in coffee disease identification. However, these manual methods often yield incorrect findings [23]. To remedy the diseases identified, farmers may then apply the wrong pesticides, which do not treat the problem but contribute to environmental degradation.
This study aimed to develop and train five deep learning models on the collected dataset of coffee arabica leaves and determine the model yielding the best results by leveraging pre-trained models and knowledge-transfer approaches. The objective was to identify the most effective transfer learning technique for achieving accurate classification and optimal recognition accuracy in a multi-class coffee leaf disease context. The main contributions of this study are (1) to assess, collect, and classify a coffee leaves dataset in the Rwandan context; (2) to apply different data preprocessing techniques to the labeled dataset; and (3) to determine the best transfer learning technique for achieving the most accurate classification and optimal recognition of multi-class plant diseases.
The remaining sections of the paper are structured as follows. Section 2 details the related works of this research. Section 3 outlines the materials and methods employed in this study. The findings and results are presented in Section 4. Section 5 delves into the discussion of the various experiments conducted. Finally, in Section 6, the research concludes by summarizing the key points and outlining potential future directions for research.
2. Related Works
Several methods have been suggested by researchers to achieve the precise detection and classification of plant infections. Some of these methods employ conventional image processing techniques that involve manual feature extraction and segmentation [24]. Among them, the use of K-means clustering for leaf image segmentation, extracting infected regions and later performing classification using a multi-class support vector machine, has been investigated [25]. A probabilistic neural network with statistical features was applied to cucumber plant infection [26]. Preprocessing of images, from red, green, and blue (RGB) to grayscale conversion, histogram equalization (HE), K-means clustering, and contour tracing, has been computed, with the results used for classification via support vector machines (SVM), K-NN, and convolutional neural networks (CNN); the experiments were carried out on tomato leaf infection detection [27] and on grapes [28]. The automatic detection of leaf damage on coffee leaves has been conducted using image segmentation with Fuzzy C-means clustering applied to the V channel of the YUV color space image [29]. The automatic identification and classification of plant diseases and pests, as well as severity assessment, specifically focusing on coffee leaves in Brazil, has also been investigated. That work targeted two specific issues: leaf rust caused by Hemileia vastatrix and leaf miner damage caused by Leucoptera coffeella. Various image processing techniques were employed, including image segmentation using the K-means algorithm, the Otsu method, and the iterative threshold method, performed in the YCgCr color space. Texture and color attributes were calculated for feature extraction. For classification, an artificial neural network trained with backpropagation and an extreme learning machine were utilized. The images were captured using an ASUS Zenfone 2 smartphone (ZE551ML) with a resolution of 10 megapixels (4096 × 2304 pixels). The database used in the study consisted of 690 images [30].
Moreover, the existing models heavily depend on manual feature engineering techniques, classification methods, and spot segmentation. However, with the advent of artificial intelligence in the field of computer vision, researchers have increasingly utilized machine learning [31] and deep learning [32] models to improve recognition accuracy significantly.
A CNN-based predictive model for classification and image processing in paddy plants is proposed [33]. Similarly, the utilization of a CNN for disease detection in paddy fields using convolutional neural networks with four to six layers to classify various plant species is elaborated on [34]. The application of CNN with a transfer learning approach to classify, recognize, and segment different plant diseases is tested [35]. Although CNNs have been extensively used with promising results, there is a lack of diversity in the datasets employed [36]. To achieve the best outcomes, training deep learning models with larger and more diverse datasets is crucial. While previous studies have demonstrated significant achievements, there is still room for improvement in terms of dataset diversity, particularly in capturing realistic images from actual agricultural fields with diverse backgrounds.
Deep-learning models based on CNNs have gained popularity in image-based research due to their effectiveness in learning intricate low-level features from images. However, training deep CNN layers can be computationally intensive and challenging. To address these issues, researchers have proposed transfer learning-based models [37,38,39]. These models leverage pre-trained networks, such as VGG-16, ResNet, DenseNet, and Inception [40], which have been well-established and widely used in the field. Transfer learning allows for the models to leverage the knowledge gained from pre-training on large datasets, enabling faster and more efficient training on specific image classification tasks.
The automatic and accurate estimation of disease severity, which addresses concerns related to food security, disease management, and yield-loss prediction, was investigated in [41]. The authors applied deep learning techniques to analyze images of apple black rot, caused by the fungus Botryosphaeria obtusa, from the Plant Village dataset. The study compared the performance of different deep learning models, including VGG16, VGG19, Inception-v3, and ResNet50. The results demonstrated that the deep VGG16 model, trained with transfer learning, achieved the highest accuracy of 90.5% on the hold-out test set.
The classification of cotton leaves based on leaf hairiness (pubescence) used a four-part deep learning model named HairNet. HairNet demonstrated impressive performance, achieving 89% accuracy per image and 95% accuracy per leaf. Furthermore, the model successfully classified the cotton based on leaf hairiness, achieving an accuracy range of 86–99% [42]. A deep learning approach was also developed to automate the classification of diseases in banana leaves. The researchers utilized the LeNet architecture, a CNN, trained on a dataset of 3700 images. The implementation utilized Deeplearning4j, an open-source deep-learning library that supports GPUs. The experiment was applied to detect two well-known banana diseases, namely Sigatoka and Speckle [35].
The application of emerging technologies, such as image processing, machine learning, and deep learning, in the agriculture sector is transforming the industry, leading to increased productivity, sustainability, and profitability while reducing environmental impact. Many authors have investigated different algorithms for different or specific plant types in search of common solutions; however, the resulting solutions remain problem-specific. It has been observed that most modeling has been attempted on the Plant Village dataset [43] to check the performance of the selected models.
Table 1 showcases different methods used for plant leaf classification, along with the corresponding accuracy percentages achieved on different types of leaves. The “Proposed model” refers to DenseNet, which obtained an accuracy of 99.57% on coffee leaf classification.
3. Materials and Methods
For proper plant disease management, early detection of diseases in coffee leaves is required to support farmers. This section provides a complete description of the methodology used to collect coffee leaves and the methods used to experiment with the modeling techniques. The leaf collection process and several transfer-learning algorithms are elaborated on to investigate the model that best responds to the research scope. The architecture and training process of each model, together with the experimental setup on the dataset used, are also discussed.
Rwanda has many high mountains and steep-sloped hills, with much of the farmland suffering from moderate to severe soil erosion, and the appearance of coffee diseases and pests is linked to climate variability [50]. Among the different types of coffee plants, such as arabica and robusta [51], this study focuses on the most popular variety grown in Rwanda, arabica [52]. We surveyed and visited 10 coffee washing stations located in five districts: Ngoma, Rulindo, Gicumbi, Rutsiro, and Huye. The selected districts were chosen to represent the climate variations across all 27 districts of Rwanda [50]. In each district, we sampled 30 farmers, giving a total sample size of 150. The purpose of the visits was to cooperate with agronomists familiar with coffee pests and diseases to support coffee leaf labeling activities, and to engage farmers to assess whether they have the capacity to identify different coffee leaf diseases. The visits took place during the harvesting season in March 2021 and during the summer season in June and July 2021. The dataset images were collected from four distinct provinces located in the Eastern (a sunny region with low altitude and no hills), Northern (a cold region with high altitude), Southern (a cold region with moderate altitude), and Western (cold highlands with high altitude) parts of the country. A quantitative and qualitative methodology was adopted to investigate the distribution of disease occurrence in Rwanda, as shown in Figure 1.
According to our respondents, coffee leaf rust is the most dangerous disease ravaging coffee in Rwanda. As shown in Figure 1, the disease occurs mostly in June, July, and August.
The process of data collection was followed by the experiment of coffee disease detection using deep learning techniques. Figure 2 details the architectural flow of the implementation.
The suggested pipeline for detecting coffee leaf diseases begins with preparing the dataset and concludes with making predictions using different models and a comparative analysis. To accomplish this, the Python 3.10 programming language with the TensorFlow 2.9.1, NumPy 1.19.2, and Matplotlib 3.5.2 libraries was employed for dataset preparation and development environment setup. These tools have proven to be useful for data preprocessing and modeling purposes [53,54]. The experiment used CNN deep learning models, namely the InceptionV3, ResNet50, VGG16, Xception, and DenseNet models. The infrastructure was an HP Z240 workstation equipped with two Intel Xeon Gold 6226R processors (64 cores in total) and an NVIDIA Tesla V100S GPU with 32 GB of memory, which significantly accelerated the training of the deep neural networks. In the subsequent sections, each stage of the proposed coffee plant leaf disease detection pipeline will be thoroughly discussed.
3.1. Dataset
The researchers collected a dataset of 37,939 images in RGB format. The dataset contained four classes of coffee images: rust, red spider mite, miner, and healthy. The dataset’s classes were organized as directories, each corresponding to a certain disease.
Figure 3 shows the details of the sample dataset classes used in the experiment. Because disease severity varies, a given class may contain different images of similar infections at different stages. This allows the model to track and classify the correct or approximate name of a disease at any stage.
Before supplying the images from the dataset to the CNN architectures, we preprocessed them to make sure the input parameters matched the requirements of the CNN model. Each input image was resized to 224 × 224 pixels after preprocessing. To guarantee that all the data were described under the same distribution, normalization (i.e., dividing each pixel value by 255.0) was then applied, which improved training convergence and stability [55].
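The resizing and normalization steps described above can be sketched as follows. This is an illustrative NumPy version that uses nearest-neighbour index selection in place of a library resize; it is not the exact implementation used in the study:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize an RGB image to size x size (nearest-neighbour, for
    illustration) and normalize pixel values to the [0, 1] range."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size   # source row for each target row
    cols = np.arange(size) * w // size   # source column for each target column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```

Dividing by 255.0 maps 8-bit pixel intensities onto a common [0, 1] scale, which is the normalization the text refers to.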
3.2. Used Deep Learning Models
In the following section, this study details all the different models and tools used. The modeling of the coffee leaf images was conducted using different deep-learning techniques, such as InceptionV3, Resnet50, VGG16, Xception, and DenseNet as shown in Figure 2.
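As a sketch of how such pre-trained backbones are typically wired up for transfer learning in Keras, a frozen backbone can be combined with a small classification head. The head, optimizer, and hyperparameters below are illustrative assumptions, not the study's exact configuration:

```python
import tensorflow as tf

def build_transfer_model(backbone: str = "DenseNet121",
                         num_classes: int = 4,
                         input_shape=(224, 224, 3),
                         weights="imagenet") -> tf.keras.Model:
    """Frozen pre-trained backbone plus a small softmax head."""
    base = getattr(tf.keras.applications, backbone)(
        include_top=False, weights=weights, input_shape=input_shape)
    base.trainable = False  # keep the pre-trained features; train only the head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same builder pattern works for VGG16, ResNet50, and Xception, and, with a 299 × 299 × 3 input shape, for InceptionV3.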
3.2.1. InceptionV3
InceptionV3, developed by Google Research, belongs to the Inception model series, and serves as a deep convolutional neural network structure. Its primary purpose is to facilitate image recognition and classification assignments [56,57,58].
Its architecture is known for its deep structure and the use of Inception modules. These modules consist of parallel convolutional layers with different filter sizes, allowing the network to capture features at multiple scales. By incorporating these parallel branches, the model can effectively handle both local and global features in an image [59]. One of the key innovations in InceptionV3 is the use of 1 × 1 convolutions, which serve as bottleneck layers. These 1 × 1 convolutions help reduce the number of input channels and computational complexity, making the network more efficient.
The Inception V3 model consists of a total of 42 layers, surpassing the layer count of its predecessors, Inception V1 and V2. Nonetheless, the efficiency of this model is remarkable [60]. It can be fine-tuned on specific datasets or used as a feature extractor in transfer learning scenarios, where the pre-trained weights are utilized to extract meaningful features from images and train a smaller classifier on top of them. With its powerful deep learning architecture that excels in image recognition and classification tasks, this model was selected in this study to investigate its performance.
3.2.2. ResNet50
This research also used ResNet-50 (Residual Network-50), introduced by Microsoft Research [61]. It is a variant of the ResNet family of models, which are renowned for their ability to train very deep neural networks by mitigating the vanishing gradient problem. ResNet-50 is known for its residual connections, which enable the network to learn residual mappings instead of directly learning the desired underlying mapping. The residual connections pass information from earlier layers directly to later layers, helping to alleviate the degradation problem caused by increasing network depth.
The ResNet-50 architecture consists of 50 layers, including convolutional layers, pooling layers, fully connected layers, and shortcut connections. It follows a modular structure, where residual blocks with varying numbers of convolutional layers are stacked together [62]. Each residual block includes a set of convolutional layers, followed by batch normalization and activation functions, with the addition of the original input to the block. This ensures that the gradient flows through the skip connections and facilitates the learning of residual mappings.
The model was applied to plant disease detection [63,64] by extracting contextual dependencies within images, focusing on essential features of disease identification. The method was chosen to take advantage of its learning of residual mappings and feed the model with the coffee image classes and their features. The pre-training enables the model to learn generic visual features that can be transferred to different image-related tasks.
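The residual idea above can be illustrated with a toy scalar "layer": because the block computes F(x) + x, the identity path carries the signal even when the learned transform contributes nothing. This is a hypothetical sketch, not the actual ResNet-50 code:

```python
def plain_block(x: float, w: float) -> float:
    """A stand-in for a learned transform F(x)."""
    return w * x

def residual_block(x: float, w: float) -> float:
    """y = F(x) + x: the skip connection adds the input back, so the
    derivative dy/dx = w + 1 always keeps an identity path of 1."""
    return plain_block(x, w) + x
```

With w = 0 the plain block outputs 0, while the residual block still returns x unchanged, which is why deep stacks of such blocks resist the degradation problem.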
3.2.3. VGG16
The Visual Geometry Group 16 (VGG16) is a convolutional neural network architecture developed by the Visual Geometry Group at the University of Oxford. It is known for its simplicity and effectiveness in the image classification tasks model [65].
The VGG16 architecture consists of 16 layers, including 13 convolutional layers and 3 fully connected layers. It follows a sequential structure, where convolutional layers are stacked together with max pooling layers to progressively extract features from input images. The convolutional layers use small 3 × 3 filters, which help capture local patterns and details in the images [66]. The architecture maintains a consistent configuration throughout the network, with the number of filters increasing as the spatial dimensions decrease. This uniformity simplifies the implementation and enables the straightforward transfer of learned weights to different tasks [67].
The pre-training model of VGG16 enables the model to learn general visual representations, fine-tuned or used as feature extractors for specific tasks. Its deep structure and small receptive field have been considered in this research context to capture hierarchical features in coffee leaf images and avail all possible found classes.
3.2.4. Xception
Xception, short for Extreme Inception, is a deep convolutional neural network architecture introduced by François Chollet, the creator of Keras [68]. The model is based on the Inception architecture but incorporates key modifications to improve its performance and efficiency. Its architecture builds on the depth-wise separable convolutions introduced in Inception modules. In depth-wise separable convolutions, the spatial convolution and channel-wise convolution are decoupled, reducing the number of parameters and the computational complexity.
The architecture of Xception introduces the notion of an extreme version of Inception, in which the traditional convolutional layer is replaced by a depth-wise separable convolution. This extreme version of the Inception module captures spatial and channel-wise information more effectively. Xception has been pre-trained on large-scale image classification datasets, such as ImageNet, and has demonstrated impressive performance in various computer vision tasks [69].
It is used as a feature extractor or fine-tuned on specific datasets, enabling it to generalize well to various image-related tasks.
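The parameter saving of depth-wise separable convolutions can be checked with simple arithmetic (bias terms omitted for clarity; the channel counts below are illustrative, not taken from Xception itself):

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameters of a standard k x k convolution."""
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Depth-wise k x k convolution (one filter per input channel)
    followed by a 1 x 1 point-wise convolution."""
    return k * k * c_in + c_in * c_out
```

For a 3 × 3 convolution mapping 128 to 256 channels, the standard version needs 294,912 weights while the separable version needs 33,920, roughly an 8.7-fold reduction.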
3.2.5. DenseNet
Dense Convolutional Network is a deep convolutional neural network architecture known for its dense connectivity pattern and efficient parameter sharing [70]. This sharing facilitates feature reuse and gradient flow throughout the network. It uses the concept of dense blocks, where each layer is connected to every other layer in a feed-forward manner. DenseNet takes this concept further by concatenating feature maps from all previous layers. This dense connectivity pattern enables direct connections between layers at different depths, facilitating the flow of information and gradients through the network [71].
The DenseNet architecture consists of dense blocks followed by transition layers. A dense block is a series of convolutional layers, where each layer’s input is concatenated with the feature maps of all preceding layers. Transition layers are used to down-sample feature maps and reduce spatial dimensions. This architecture enables the model to capture both local and global features effectively.
The operational mechanism of a dense block, as shown in Figure 4, supports the subsequent layers by applying batch normalization (BN), ReLU activation, convolution, and pooling to modify the outcome. DenseNet has achieved state-of-the-art results on various image classification benchmarks. In the coffee leaves context, the DenseNet model has been used to classify each leaf based on the list of trained dataset classes.
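The effect of dense connectivity on channel counts can be sketched as follows: with growth rate k, each layer of a dense block receives the input channels plus the k channels produced by every preceding layer. The growth rate and input width below are hypothetical, chosen only for illustration:

```python
def dense_block_channels(c_in: int, growth_rate: int, num_layers: int) -> list:
    """Channels seen at the input of each layer of a dense block,
    ending with the block's final concatenated output width."""
    return [c_in + i * growth_rate for i in range(num_layers + 1)]
```

For example, a 4-layer block with 64 input channels and growth rate 32 yields [64, 96, 128, 160, 192]: every layer can reuse all earlier feature maps.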
3.3. Performance Measurement
The experimental setup was conducted using the methodology, methods, and infrastructure discussed in the sections above. To measure the performance of the transfer learning techniques, different metrics were considered. Accuracy, the precision–recall metric, and the receiver operating characteristic (ROC) curve with the area under the curve (AUC) were used to evaluate classification performance. The performance of each classifier is measured using these evaluation metrics to select the best-performing models for further use.
3.3.1. Precision-Recall Curve
The confusion matrix is a useful tool for assessing performance by comparing actual and predicted values. It provides insights into sensitivity, which represents the true positive rate and indicates the ability to correctly identify healthy and diseased leaves. Precision–recall curves are used in binary classification to study the output of a classifier.
To extend the precision–recall curve and average precision to multi-class or multi-label classification, it was necessary to binarize the output. One curve could be drawn per label, but one could also draw a precision–recall curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging).
$$\text{Precision} = \frac{TP}{TP + FP} \tag{1}$$
$$\text{Recall} = \frac{TP}{TP + FN} \tag{2}$$
The performance evaluation of plant disease classification involved analyzing the output, which could be binary or multiclass. Precision, also referred to as the positive predictive value, is defined in Equation (1). Recall, also known as the probability of detection, is calculated by dividing the number of correctly classified positive outcomes by the total number of actual positive outcomes (Equation (2)).
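Equations (1) and (2), combined with the micro-averaging described above (treating each element of the label-indicator matrix as one binary decision), can be sketched with NumPy; this is an illustrative helper, not the evaluation code of the study:

```python
import numpy as np

def micro_precision_recall(y_true: np.ndarray, y_pred: np.ndarray):
    """Micro-averaged precision and recall over binarized
    label-indicator matrices (rows: samples, columns: classes)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)   # Equation (1)
    recall = tp / (tp + fn)      # Equation (2)
    return float(precision), float(recall)
```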
3.3.2. Receiver Operating Characteristic (ROC) Curve
The ROC curve is mainly used as a deterministic indicator in classification and computational modeling problems. ROC curves feature the true positive rate (TPR) on the Y axis and the false positive rate (FPR) on the X axis. This means that the top left corner of the plot is the “ideal” point: an FPR of zero and a TPR of one. This is not very realistic, but it does mean that a larger area under the curve (AUC) is usually better. The “steepness” of ROC curves is also important, since it is ideal to maximize the TPR while minimizing the FPR. ROC curves are typically used in binary classification, where the TPR and FPR can be defined unambiguously.
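A minimal sketch of how such operating points and a trapezoidal area under them are computed (illustrative only; the study relied on scikit-learn's implementations):

```python
def roc_point(tp: int, fp: int, fn: int, tn: int):
    """One ROC operating point as an (FPR, TPR) pair."""
    tpr = tp / (tp + fn)   # true positive rate (Y axis)
    fpr = fp / (fp + tn)   # false positive rate (X axis)
    return fpr, tpr

def auc_trapezoid(points):
    """Trapezoidal area under operating points, sorted by FPR."""
    pts = sorted(points)
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
```

A perfect classifier contributes the point (0, 1), and the area under the resulting curve is 1.0, matching the "ideal" corner described above.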
Average precision (AP) summarizes the precision–recall curve as the weighted mean of the precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight:
$$AP = \sum_{n} (R_n - R_{n-1})\,P_n \tag{3}$$
where $P_n$ and $R_n$ are the precision and recall at the $n$th threshold. A pair $(R_n, P_n)$ is referred to as an operating point. AP and the trapezoidal area under the operating points, calculated using the sklearn.metrics.auc function of the scikit-learn Python package, are two ways to summarize a precision–recall curve that can lead to different results.
3.3.3. Matthews Correlation Coefficient (MCC)
As an alternative approach that is not influenced by the problem of imbalanced datasets, the Matthews correlation coefficient is a technique based on the contingency matrix. This method calculates the Pearson product-moment correlation coefficient [72] between predicted and actual values. It is expressed in Equation (4), where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \tag{4}$$
(Worst value: −1; best value: +1)
MCC stands out as the sole binary classification measure that yields a high score only when the binary predictor correctly predicts most of both the positive and the negative data instances [73]. It assumes values within the range of −1 to +1. The extreme values of −1 and +1 signify completely incorrect classification and flawless classification, respectively. Meanwhile, MCC = 0 is the anticipated outcome for a classifier akin to tossing a coin.
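Equation (4) translates directly into code; the zero-denominator convention below is a common choice, not something specified by the study:

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient (Equation (4)); returns 0.0
    when the denominator vanishes, a common convention."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

A flawless classifier gives +1, a completely wrong one gives −1, and a balanced coin-toss gives 0, matching the interpretation above.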
3.3.4. F1 Scores
Among the parametric family of F-measures, named after the parameter value β = 1, the F1 score holds the distinction of being the most frequently employed metric. It is determined as the harmonic mean of precision and recall (see Equations (1) and (2)) and is expressed in Equation (5):
$$F_1 = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} = \frac{2\,TP}{2\,TP + FP + FN} \tag{5}$$
(Worst value: 0; best value: +1)
The F1 score spans the interval [0, 1], with the lowest value achieved when TP (true positives) equals 0, signifying the misclassification of all positive samples. Conversely, the highest value emerges when FN (false negatives) and FP (false positives) both equal 0, indicating flawless classification. There are two key distinctions that set apart the F1 score from MCC and accuracy: firstly, F1 remains unaffected by TN (true negatives), and secondly, it does not exhibit symmetry when classes are swapped.
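Equation (5) in its count form makes the boundary cases just described easy to check; the zero-denominator guard is a common convention, not part of the original definition:

```python
def f1(tp: int, fp: int, fn: int) -> float:
    """F1 = 2TP / (2TP + FP + FN), the harmonic mean of precision
    and recall; note that TN does not appear anywhere."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```

With TP = 0 the score is 0 (all positives misclassified), and with FP = FN = 0 it reaches 1 (flawless classification), as stated above.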
4. Results
In this study, each experiment involved evaluating the training accuracy and testing accuracy. The losses incurred during the testing and training phases were computed for every model. The collected coffee leaves dataset was utilized to train the DCNN with transfer learning models. The selected pre-trained models are ResNet-50, Inception V3, VGG-16, Xception, and DenseNet.
4.1. Description of Dataset
To conduct our experimental analysis, the dataset was partitioned into three subsets: training, validation, and testing samples. A total of 37,939 images of the coffee plant leaf disease classes were available and were split in a ratio of 80:10:10: 30,053 samples for training, 3793 for validation, and 4093 for testing. All three sets encompassed the four classes representing coffee plant leaf diseases used in this research context.
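An 80:10:10 partition of this kind can be sketched as follows (an illustrative helper, not the authors' code; exact subset sizes depend on rounding, which is why the reported counts of 30,053/3793/4093 differ slightly from a strict 80:10:10 computation):

```python
import random

def split_indices(n, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle sample indices and partition them into train/validation/test."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(37939)
print(len(train), len(val), len(test))  # roughly an 80:10:10 partition
```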
4.2. Preprocessing and Data Augmentation
The dataset consisted of four classes for one crop species (coffee arabica). For our experimental purposes, we used the color images from the collected dataset, as these aligned well with the transfer learning models. The images were first downscaled to a standardized 256 × 256 pixels and then resized to match the input sizes required by the different pre-trained networks: 224 × 224 × 3 (height, width, and channel depth) for VGG-16, DenseNet-121, Xception, and ResNet-50, and 299 × 299 × 3 for Inception V3.
The dataset contained approximately 37,939 images depicting various coffee leaf diseases, and these images accurately represented real-life pictures captured by farmers using different image acquisition techniques, such as high-definition cameras and smartphones, as well as images downloaded from the internet. Despite the substantial size of the dataset, there remained a risk of overfitting. To address it, regularization techniques were employed, including data augmentation after preprocessing.
To enlarge and diversify the training data through augmentation, this study applied several transformations to the preprocessed images, including clockwise and anticlockwise rotation, horizontal and vertical flipping, zooming, and rescaling of the original images. This technique not only prevented overfitting and reduced model loss, but also enhanced the model's robustness, resulting in improved accuracy when tested with real-life coffee plant images.
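The flip and rotation transformations can be illustrated on a toy pixel grid (an illustrative sketch; in practice such augmentations are typically applied through a framework's image pipeline):

```python
def hflip(img):
    """Horizontal flip: reverse each row of the pixel grid."""
    return [row[::-1] for row in img]

def vflip(img):
    """Vertical flip: reverse the order of rows."""
    return img[::-1]

def rotate90(img):
    """Clockwise 90-degree rotation."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
augmented = [hflip(img), vflip(img), rotate90(img)]
print(augmented)
```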
4.3. Network Architecture Model
The selection of pre-trained network models was based on their suitability for the task of plant disease classification. Detailed information about the architecture of each model can be found in Table 2. These models employ different filter sizes to extract specific features from the feature maps. The filters play a crucial role in the process of feature extraction. Each filter, when convolved with the input, extracts distinct features, and the specific features extracted from the feature maps depend on the values assigned to the filters. This research experiment utilized the original pre-trained network models, incorporating the specific combinations of convolution layers and filter sizes employed in each model.
Table 2 provides various parameters for different network models, including InceptionV3, Xception, ResNet50, VGG16, and DenseNet. The parameters include the total number of layers, max pool layers, dense layers, dropout layers, flatten layers, filter size, stride, and trainable parameters. These parameters are essential in understanding the architecture and complexity of each model.
In our experiment, each model was standardized with a learning rate of 0.01 and, where applicable, two dropout layers (see Table 2), and each had four output classes for classification.
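A hedged sketch of such a standardized setup, assuming the tf.keras API: a DenseNet121 backbone topped with a four-class softmax head and a learning rate of 0.01. The 0.2 dropout rate is our assumption, and weights=None is used here only to keep the sketch self-contained (the study would load pre-trained ImageNet weights):

```python
import tensorflow as tf

# Pre-trained-style backbone without its classification top.
base = tf.keras.applications.DenseNet121(
    weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for feature extraction

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),  # assumed rate, not specified in the text
    tf.keras.layers.Dense(4, activation="softmax"),  # four disease classes
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="categorical_crossentropy", metrics=["accuracy"])
print(model.output_shape)
```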
The coffee leaves dataset was divided into training, testing, and validation samples. For training the Inception V3, VGG16, ResNet50, Xception, and DenseNet models, 80% of the coffee leaf samples were utilized. Each model underwent ten epochs, and it was observed that all models started to converge with high accuracy after four epochs. The recognition accuracy of the InceptionV3 model is illustrated in Figure 5a, reaching a training accuracy of 99.34%. Figure 5b depicts the log loss of the InceptionV3 model.
The second model considered in this research experiment was ResNet50, trained on the same dataset. After standardizing the hyperparameters, the model was trained on 80% of the dataset, with 10% of the samples allocated for testing and the remaining 10% for validation. From Figure 6a, it can be observed that the recognition accuracy is around 96% in the first three epochs before stabilizing at 98.70%. This performance is lower than that of InceptionV3 shown in Figure 5. The training and validation losses of the ResNet50 model were around 0.0565 and 0.0577, respectively.
Figure 7 demonstrates the behavior of the Xception model on the dataset after adjusting the hyperparameters. The training and validation accuracy reached 99.40% and 98.84%, respectively, with some instability during the first four epochs. Its training and validation losses were 0.0140 and 0.0337, respectively. This performance surpasses that of ResNet50, shown in Figure 6.
The VGG16 model was the fourth model evaluated on the same dataset. After standardizing the hyperparameters, the model was trained with 80% of the dataset, with 10% of the samples allocated for testing and the remaining 10% for validation. As shown in Figure 8a, the model achieved a recognition accuracy of approximately 98% in the initial four epochs, gradually increasing to 98.81%. This performance is lower than that of the Xception model depicted in Figure 7. Furthermore, the training and validation losses of the VGG16 model were approximately 0.0291 and 0.0668, respectively.
Figure 9 demonstrates the behavior of the DenseNet model on the dataset after adjusting the hyperparameters. The training and validation accuracy reached 99.57% and 99.09%, respectively, with some instability during the first four epochs. Its training and validation losses were 0.0135 and 0.0225, respectively. This performance surpasses that of all the other models.
Figure 10 depicts the behavior of all five models on the collected coffee leaf disease dataset using Receiver Operating Characteristic (ROC) curves, a standard tool for assessing classification models. The curves plot the true positive rate (TPR) on the Y-axis against the false positive rate (FPR) on the X-axis.
It illustrates how the true positive rate (the percentage of correctly classified lesion images) and false positive rate (the percentage of incorrectly classified non-lesion images) change as the classifier’s threshold for distinguishing between lesions and non-lesions is adjusted while evaluating test set images.
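The threshold sweep behind a ROC curve can be sketched in a few lines (an illustrative toy example, not the study's evaluation code):

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs as the decision threshold sweeps over the scores."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Scores from a toy binary classifier (1 = lesion, 0 = non-lesion).
scores = [0.9, 0.8, 0.6, 0.4, 0.2]
labels = [1, 1, 0, 1, 0]
print(roc_points(scores, labels))
```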
Figure 11 illustrates the performance of the five employed models on the gathered coffee leaf disease dataset using precision–recall curves. These curves serve as a measure of a classifier's effectiveness, especially in situations with significant class imbalance, depicting the balance between precision, which gauges the relevance of results, and recall, which measures the completeness of the classifier's predictions.
Figure 12 depicts the performance comparison of the five employed models on the gathered coffee leaf disease dataset using the F1 score and MCC metrics. The graph shows the efficiency of the DenseNet model, with an F1 score and MCC of 0.98 and 0.94, respectively. The second-best model is VGG16, with an F1 score and MCC of 0.90 and 0.89, respectively. The worst-performing model on this dataset is Xception, with an F1 score and MCC of 0.48 and 0.40, respectively.
Table 3 provides a comparison of different network models based on their training and validation performance.
Regarding statistical examination, the ANOVA (Analysis of Variance) test has been executed, and the outcomes are exhibited in Table 4.
The outcomes shown in Table 4 reveal a noteworthy distinction between the selected deep learning algorithm and the other methods. This is evident from the ANOVA results: the “Treatment” row (corresponding to the differences between columns) shows a substantial F-value of 233.33 and an extremely low p-value (p < 0.0001). The residual sum of squares is merely 0.002, indicating limited variability within the methods, which suggests that the variation observed in the outcome measure is primarily attributable to the choice of technique. The total sum of squares across all groups amounted to 0.031. While the ANOVA outcomes point to the superior performance of the selected algorithm on the outcome measure compared to the other methods, it is important to acknowledge that this is a preliminary observation.
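The F-value in Table 4 is the ratio of the between-group ("Treatment") mean square to the within-group ("Residual") mean square; a minimal sketch on made-up per-model accuracy samples (illustrative only, not the study's data):

```python
def anova_f(groups):
    """One-way ANOVA: F = between-group MS / within-group MS."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)  # "Treatment" row
    ms_within = ss_within / (n - k)    # "Residual" row
    return ms_between / ms_within

# Toy per-epoch accuracy samples for three hypothetical models.
groups = [[0.99, 0.98, 0.99], [0.97, 0.96, 0.97], [0.91, 0.90, 0.92]]
print(round(anova_f(groups), 2))
```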
The findings do not provide insights into the magnitude or direction of the effect, nor do they elucidate the specific differences between DenseNet and alternative methods. To ascertain if two samples are extracted from a common population, one can employ a non-parametric method known as the Wilcoxon signed-rank test.
The outcomes of this examination are exhibited in Table 5. Within this table, the assessment aimed to compare the efficacy of the presented models on the dataset.
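The Wilcoxon signed-rank statistic reported in Table 5 is the sum of signed ranks W over the paired score differences; a minimal illustrative implementation on hypothetical paired scores (ours, not the authors' code):

```python
def wilcoxon_w(x, y):
    """Sum of signed ranks W for paired samples (zero differences dropped)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        # Group ties on |difference| and assign them the average rank.
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    return sum(r if d > 0 else -r for d, r in zip(diffs, ranks))

# Paired per-fold scores of two hypothetical models (integers to avoid
# floating-point noise in the tie comparison).
a = [5, 6, 7, 8]
b = [3, 6, 5, 9]
print(wilcoxon_w(a, b))
```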
In our comprehensive assessment of the five deep learning models for image classification, we conducted an in-depth analysis to discern their unique capabilities on top of different optimization methods. The results, presented in Table 5, reveal subtle distinctions among these models. Notably, statistical tests, including the Wilcoxon Signed-Rank Test, indicate statistically significant differences in their median performance scores. However, it is crucial to emphasize that these differences, while statistically significant, are practically negligible. Each of the five models, namely InceptionV3, ResNet, DenseNet, VGG16, and Xception, consistently delivered competitive results, reflecting the maturity and robustness of contemporary deep learning architectures. Our study highlights nuanced performance differences while emphasizing the pivotal balance between statistical significance and practical utility, ultimately leading us to select DenseNet as the optimal choice for our image classification task. Nevertheless, it is essential to acknowledge the overall excellence demonstrated by each model, showcasing the prowess of contemporary deep-learning techniques.
5. Discussion
In the farming industry, and for coffee plantations in particular, given the worldwide importance of coffee consumption and the losses caused by coffee diseases and pests, timely disease detection is crucial for achieving high yields. To improve productivity, the latest technologies are needed for the early diagnosis of coffee diseases from leaves. The literature survey suggested that deep learning models contribute efficiently to image classification, while transfer learning-based models effectively reduce training computation complexity by addressing the need for extensive datasets. Therefore, this study applies five pre-trained models to a Rwandan coffee leaf disease dataset to measure their performance and to inform the design of portable hand-held devices for farmers.
The performances of the models, namely Inception V3, Xception, VGG-16, ResNet-50, and DenseNet, were evaluated with different metrics to identify the most suitable model for the accurate classification of coffee plant leaf diseases. Evaluation metrics such as ROC and precision–recall curves were measured.
Figure 10 illustrates a graphical representation of the pre-trained network models based on the ROC evaluation metric. VGG16 and DenseNet present good performance compared to the other models across all disease classes. The AUC for all the diseases discussed here appears to lie in the range of 0.5 to 1, indicating that the models can correctly classify the coffee rust, miner, healthy, and red spider mite classes surveyed to be abundant in Rwanda. To tackle the problem of vanishing gradients, skip connections and regularization methods, such as batch normalization, were utilized. The use of deeper models presented several difficulties, such as overfitting, covariate shift, and longer training times; to surmount these obstacles, we conducted experiments to fine-tune the hyperparameters.
The assessment of model performance was also measured using the AP metric, as shown in Equation (3). Figure 11 shows the results of the different pre-trained models on the dataset used in the experiment. The illustration demonstrates that DenseNet and VGG16 have better AP for the used classes than InceptionV3, Xception, and ResNet50. DenseNet demonstrates AP values of 51%, 40%, 0%, and 3% across the four classes (healthy, miner, rust, and red spider mite), while VGG16 demonstrates AP values of 52%, 45%, 1%, and 2% for the same classes. VGG16 managed to capture some detections of red spider mites compared to the others; the low scores for this class are attributed to the lack of sufficient images. Overall, the evaluation outcomes revealed that DenseNet and VGG16 performed better than the InceptionV3, Xception, and ResNet50 models.
Table 1 presents different research references, the year of publication, the methods used, accuracy percentages, and the corresponding plant names for leaf classification. The “proposed model” labeled as DenseNet achieved the highest accuracy of 99.57% in classifying coffee leaves. Table 3 shows the comparison of different models and their score accuracies. The training accuracy and loss represent how well the models performed on the training data while the validation accuracy and loss show their performance on previously unseen validation data. Among the models, DenseNet achieved the highest training accuracy (99.57%) and validation accuracy (99.09%), indicating its excellent ability to learn and generalize from the data. On the other hand, ResNet50 had the lowest validation accuracy (97.80%) and the highest validation loss (0.0577), suggesting it might slightly struggle to generalize to new data compared to the other models. To emphasize the model evaluation criteria, we performed statistical tests with ANOVA and Wilcoxon, as shown in Table 4 and Table 5, to check the variability of models on our dataset. It reaffirms our decision to choose the ‘DenseNet’ model based on a comprehensive evaluation of various factors, including not only ANOVA or Wilcoxon tests, but also median discrepancies and other metrics discussed.
6. Conclusions and Future Directions
In this study, we investigated the coffee farming industry in Rwanda, focusing on various identified coffee leaf diseases. Our research involved a successful analysis of different transfer learning models, specifically chosen to accurately classify the four distinct classes of coffee plant leaf diseases. We standardized and evaluated cutting-edge deep learning models using transfer learning techniques, considering classification accuracy, precision, recall, and the AP score as the evaluation metrics. After analyzing several pre-trained architectures, including InceptionV3, Xception, and ResNet50, we found that DenseNet and VGG16 performed exceptionally well. Based on our findings, we proposed a model training pipeline that was followed throughout the experiment.
DenseNet model training was found to be more straightforward, primarily attributed to its smaller number of trainable parameters and lower computational complexity. This quality makes DenseNet particularly well-suited for coffee plant leaf disease identification, especially when incorporating new coffee leaf diseases that were not part of the initial training data, as it reduces the overall training complexity. The experimented model’s quality has been tested using statistical tests, such as Wilcoxon and ANOVA. The proposed model demonstrated exceptional performance, achieving an impressive classification accuracy of 99.57%, along with high values for AUC and AP metrics.
In our future endeavors, we aim to tackle challenges associated with real-time data collection. We plan to develop a multi-object deep learning model capable of detecting coffee plant leaf diseases not just from individual leaves but also from clusters of leaves. Moreover, we are currently working on the implementation of a mobile application that will leverage the trained model obtained from this study. This application will provide valuable assistance to farmers and the agricultural sector by enabling the real-time identification of leaf diseases in Rwanda based on the samples taken.
Conceptualization, E.H. and G.B.; methodology, E.H., E.M., G.B., S.M.M. and P.R.; software, E.H., G.B. and J.N.; validation, E.H., J.N. and G.B.; formal analysis, E.H., S.M.M. and O.J.S.; investigation, E.H., M.C.A.K., J.M., E.M., J.A.U.U., L.C.C. and T.M.; resources, E.H.; data curation, J.N.; writing—original draft preparation, E.H.; writing—review and editing, E.H., G.B., J.C.U. and M.C.A.K.; visualization, J.N.; supervision, G.B. and P.R.; project administration, E.H.; funding acquisition, O.J.S. and P.R. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
When requested, the authors will make available all data used in this study.
The authors acknowledge the Rwanda Agricultural Board (RAB) for facilitating access to the farming cooperatives operating in Rwanda and the coffee washing stations.
The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 3. Few sampled coffee leaf image datasets. (a) Rust infection; (b) Red spider mite infection at different stages; (c) Healthy leaf; (d) Miner infections.
Figure 5. InceptionV3 model performance analysis using the collected dataset. (a) Model training and validation accuracy; (b) Model training and validation loss.
Figure 6. ResNet50 model performance analysis using the collected dataset. (a) Model training and validation accuracy; (b) Model training and validation loss.
Figure 7. Xception model performance analysis using the collected dataset. (a) Model training and validation accuracy; (b) Model training and validation loss.
Figure 8. VGG16 model performance analysis using the collected dataset. (a) Model training and validation accuracy; (b) Model training and validation loss.
Figure 9. DenseNet model performance analysis using the collected dataset. (a) Model training and validation accuracy; (b) Model training and validation loss.
Figure 10. Receiver Operating Characteristic (ROC) Curves. (a) Details the behaviors of the InceptionV3 model; (b) Details the behaviors of the ResNet model; (c) Details the behaviors of the Xception model; (d) Details the behaviors of the VGG16 model; (e) Demonstrates the behaviors of the DenseNet model.
Figure 11. Precision–Recall curves of the tested models. (a) Details the behaviors of the InceptionV3 model; (b) Details the behaviors of the ResNet model; (c) Details the behaviors of the Xception model; (d) Details the behaviors of the VGG16 model; (e) Demonstrates the behaviors of the DenseNet model.
Comparison of our resulting model with existing deep learning models.
Ref. No and Year | Method | Accuracy (%) | Plant Name |
---|---|---|---|
[ ] | Proposed FCNN & SCNN Hybrid | 92.01 | Crop Leaf |
[ ] | Principal Component Analysis | 95.10 | Plant Leaf |
[ ] | Hybrid PCA & Optimization Algorithm | 90.20 | Olive Leaf |
[ ] | ResNet50 | 99.00 | Okra Leaf |
[ ] | Deep CNN | 98.00 | Coffee Leaf |
[ ] | Deep Transfer EfficientNet | 98.70 | Grape Leaf |
Proposed model | DenseNet | 99.57 | Coffee Leaf |
Pre-trained network architecture models’ parameters.
Parameters | InceptionV3 | Xception | ResNet50 | DenseNet | VGG16 |
---|---|---|---|---|---|
Total layers | 314 | 135 | 178 | 430 | 22 |
Max pool layers | 4 | 4 | 1 | 1 | 5 |
Dense layers | 2 | 2 | 2 | 2 | 2 |
Drop-out layers | - | - | 2 | - | 2 |
Flatten layers | - | - | 1 | - | 1 |
Filter size | 1 × 1, 3 × 3, 5 × 5 | 3 × 3 | 3 × 3 | 3 × 3, 1 × 1 | 3 × 3 |
Stride | 2 × 2 | 2 × 2 | 2 × 2 | 2 × 2 | 1 |
Trainable parameters | 23,905,060 | 22,963,756 | 25,689,988 | 8,091,204 | 15,244,100 |
Summary of network models comparison of performance analysis from the coffee leaf dataset.
Network Models | Training Accuracy (%) | Training Loss | Validation Accuracy (%) | Validation Loss |
---|---|---|---|---|
InceptionV3 | 99.34 | 0.0167 | 99.01 | 0.0306 |
ResNet50 | 98.70 | 0.0565 | 97.80 | 0.0577 |
Xception | 99.40 | 0.0140 | 98.84 | 0.0337 |
VGG16 | 98.81 | 0.0291 | 97.53 | 0.0668 |
DenseNet | 99.57 | 0.0135 | 99.09 | 0.0225 |
The results of the analysis of variance test.
ANOVA Table | SS | DF | MS | F-Value | p-Value |
---|---|---|---|---|---|
Treatment (between columns) | 0.029 | 4 | 0.007 | 233.3333 | p < 0.0001 |
Residual (within columns) | 0.002 | 75 | 0.0003 | ||
Total | 0.031 | 79 |
The results of the Wilcoxon signed-rank test.
 | DTO + DT | PSO + DT | GWO + DT | GA + DT |
---|---|---|---|---|
Theoretical median | 5.75 × 10−8 | 5.75 × 10−8 | 5.75 × 10−8 | 5.75 × 10−8 |
Actual median | 3.57 × 10−5 | 3.57 × 10−5 | 3.57 × 10−5 | 3.57 × 10−5 |
Number of values | 37,964 | 37,964 | 37,964 | 37,964 |
Wilcoxon Signed-Rank Test | 0 | 0 | 0 | 0 |
Sum of signed ranks (W) | 37,891 | 37,891 | 37,891 | 37,891 |
Sum of positive ranks | 1,682,355 | 1,682,355 | 1,682,355 | 1,682,355 |
Sum of negative ranks | −1,644,464 | −1,644,464 | −1,644,464 | −1,644,464 |
p-value (two-tailed) | 0 | 0 | 0 | 0 |
Exact or estimate? | Exact | Exact | Exact | Exact |
Significant (alpha = 0.05)? | Yes | Yes | Yes | Yes |
How big is the discrepancy? | 3.56 × 10−5 | 5.06 × 10−8 | 9.14 × 10−8 | 4.83 × 10−8 |
References
1. World Bank. Agricultural Development in Rwanda. Available online: https://www.worldbank.org/en/results/2013/01/23/agricultural-development-in-rwanda#:~:text=Agriculture%20is%20crucial%20for%20Rwanda’s,of%20the%20country’s%20food%20needs (accessed on 19 June 2023).
2. The Republic of Rwanda, Ministry of Trade, and Industry. Revised National Export Strategy. Available online: https://rwandatrade.rw/media/2015%20MINICOM%20National%20Export%20Strategy%20II%20(NES%20II).pdf (accessed on 19 June 2023).
3. Nzeyimana, I.; Hartemink, A.E.; de Graaff, J. Coffee Farming and Soil Management in Rwanda. Outlook Agric.; 2013; 42, pp. 47-52. [DOI: https://dx.doi.org/10.5367/oa.2013.0118]
4. Nurihun, B.A. The Relationship between Climate, Disease and Coffee Yield: Optimizing Management for Smallholder Farmers. Ph.D. Thesis; Ecology and Evolution at Stockholm University: Stockholm, Sweden, 2023; Available online: https://su.diva-portal.org/smash/get/diva2:1749585/FULLTEXT01.pdf (accessed on 19 June 2023).
5. Balodi, R.; Bisht, S.; Ghatak, A.; Rao, K.H. Plant Disease Diagnosis: Technological Advancements and Challenges. Indian Phytopathol.; 2017; 70, pp. 275-281. [DOI: https://dx.doi.org/10.24838/ip.2017.v70.i3.72487]
6. World Bank Group. Agriculture Global Practice Note, Rwanda Agricultural Sector Risk Assessment. December 2015; Available online: https://documents1.worldbank.org/curated/en/514891468197095483/pdf/102075-BRI-P148140-Box394821B-PUBLIC-Rwanda-policy-note-web.pdf (accessed on 21 June 2023).
7. Kifle, B.; Demelash, T. Climatic Variables and Impact of Coffee Berry Diseases (Colletotrichum Kahawae) in Ethiopian Coffee Production. J. Biol. Agric. Healthc.; 2015; 5, 7.
8. Flood, J.; Cabi, U. Coffee wilt disease. Burleigh Dodds Ser. Agric. Sci.; 2021; 96, pp. 319-342. [DOI: https://dx.doi.org/10.19103/AS.2021.0096.25]
9. Phiri, N.; Baker, P.S. CAB International. The Status of Coffee Wilt Disease in Africa. 2009; Available online: https://assets.publishing.service.gov.uk/media/57a08b4040f0b652dd000bb4/Coffee_CH02.pdf (accessed on 21 June 2023).
10. The Abundance of Pests and Diseases in Arabica Coffee Production Systems in Uganda—Ecological Mechanisms and Spatial Analysis in the Face of Climate Change. 2017; Available online: https://agritrop.cirad.fr/584976/1/PhD_Thesis_TL_2017.pdf (accessed on 21 June 2023).
11. Bigirimana, J.; Uzayisenga, B.; Gut, L.J. Population distribution and density of Antestiopsis thunbergia (Hemiptera: Pentatomidae) in the coffee growing regions of Rwanda in relation to climatic variables. Crop. Prot.; 2019; 122, pp. 136-141. [DOI: https://dx.doi.org/10.1016/j.cropro.2019.04.029]
12. Wikifarmer. Coffee Major Pest and Diseases and Control Measures. Available online: https://wikifarmer.com/coffee-major-pest-and-diseases-and-control-measures/ (accessed on 21 June 2023).
13. Riley, M.; Williamson, M.; Maloy, O. Plant Disease Diagnosis. Plant Health Instr.; 2002; 10, pp. 193-210. [DOI: https://dx.doi.org/10.1094/PHI-I-2002-1021-01]
14. Miller, S.A.; Beed, F.D.; Harmon, C.L. Plant Disease Diagnostic Capabilities and Networks. Annu. Rev. Phytopathol.; 2009; 47, pp. 15-38. [DOI: https://dx.doi.org/10.1146/annurev-phyto-080508-081743]
15. Badel, J.L.; Zambolim, L. Coffee bacterial diseases: A plethora of scientific opportunities. Plant Pathol.; 2018; 68, pp. 411-425. [DOI: https://dx.doi.org/10.1111/ppa.12966]
16. Food and Agriculture Organization of the United Nations. Climate Change and Food Security: Risks and Responses. 2015; Available online: https://www.fao.org/3/i5188e/I5188E.pdf (accessed on 19 June 2023).
17. Dawod, R.G.; Dobre, C. Upper and Lower Leaf Side Detection with Machine Learning Methods. Sensors; 2022; 22, 2696. [DOI: https://dx.doi.org/10.3390/s22072696]
18. Vu, D.L.; Nguyen, T.K.; Nguyen, T.V.; Nguyen, T.N.; Massacci, F.; Phung, P.H. HIT4Mal: Hybrid image transformation for malware classification. Trans. Emerg. Telecommun. Technol.; 2019; 31, e3789. [DOI: https://dx.doi.org/10.1002/ett.3789]
19. Shaikh, R.P.; Dhole, S.A. Citrus Leaf Unhealthy Region Detection by Using Image Processing Technique. Proceedings of the IEEE International Conference on Electronics, Communication and Aerospace Technology; Coimbatore, India, 20–22 April 2017; pp. 420-423.
20. Yu, K.; Lin, L.; Alazab, M.; Tan, L.; Gu, B. Deep Learning-Based Traffic Safety Solution for a Mixture of Autonomous and Manual Vehicles in a 5G-Enabled Intelligent Transportation System. IEEE Trans. Intell. Transp. Syst.; 2020; 22, pp. 4337-4347. [DOI: https://dx.doi.org/10.1109/TITS.2020.3042504]
21. Khan, M.A.; Akram, T.; Sharif, M.; Javed, K.; Raza, M.; Saba, T. An automated system for cucumber leaf diseased spot detection and classification using improved saliency method and deep features selection. Multimed. Tools Appl.; 2020; 79, pp. 18627-18656. Available online: https://link.springer.com/article/10.1007/s11042-020-08726-8 (accessed on 24 June 2023). [DOI: https://dx.doi.org/10.1007/s11042-020-08726-8]
22. Karthik, R.; Hariharan, M.; Anand, S.; Mathikshara, P.; Johnson, A.; Menaka, R. Attention embedded residual CNN for disease detection in tomato leaves. Appl. Soft Comput.; 2019; 86, 105933. [DOI: https://dx.doi.org/10.1016/j.asoc.2019.105933]
23. Giraddi, S.; Desai, S.; Deshpande, A. Deep Learning for Agricultural Plant Disease Detection. Lecture Notes in Electrical Engineering; Kumar, A.; Paprzycki, M.; Gunjan, V. ICDSMLA 2019 Springer: Singapore, 2020; Volume 601, [DOI: https://dx.doi.org/10.1007/978-981-15-1420-3_93]
24. Scientist, D.; Bengaluru, T.M.; Nadu, T. Rice Plant Disease Identification Using Artificial Intelligence. Int. J. Electr. Eng. Technol.; 2020; 11, pp. 392-402.
25. Dubey, S.R.; Jalal, A.S. Adapted Approach for Fruit Disease Identification using Images. Image Processing: Concepts, Methodologies, Tools, and Applications; IGI Global: Hershey, PA, USA, 2013; pp. 1395-1409. [DOI: https://dx.doi.org/10.4018/978-1-4666-3994-2.ch069]
26. Yun, S.; Xianfeng, W.; Shanwen, Z.; Chuanlei, Z. PNN-based crop disease recognition with leaf image features and meteorological data. Int. J. Agric. Biol. Eng.; 2015; 8, pp. 60-68.
27. Harakannanavar, S.S.; Rudagi, J.M.; Puranikmath, V.I.; Siddiqua, A.; Pramodhini, R. Plant leaf disease detection using computer vision and machine learning algorithms. Glob. Transit. Proc.; 2022; 3, pp. 305-310. [DOI: https://dx.doi.org/10.1016/j.gltp.2022.03.016]
28. Li, G.; Ma, Z.; Wang, H. Image Recognition of Grape Downy Mildew and Grape. Proceedings of the International Conference on Computer and Computing Technologies in Agriculture; Beijing, China, 29–31 October 2011; pp. 151-162.
29. Hitimana, E.; Gwun, O. Automatic Estimation of Live Coffee Leaf Infection Based on Image Processing Techniques. Comput. Sci. Inf. Technol.; 2014; 19, pp. 255-266. [DOI: https://dx.doi.org/10.5121/csit.2014.4221]
30. Manso, G.L.; Knidel, H.; Krohling, R.A.; Ventura, J.A. A smartphone application to detection and classification of coffee leaf miner and coffee leaf rust. arXiv; 2019; arXiv: 1904.00742
31. Rauf, H.T.; Saleem, B.A.; Lali, M.I.U.; Khan, M.A.; Sharif, M.; Bukhari, S.A.C. A citrus fruits and leaves dataset for the detection and classification of citrus diseases through machine learning. Data Brief; 2019; 26, 104340. [DOI: https://dx.doi.org/10.1016/j.dib.2019.104340]
32. Sujatha, R.; Chatterjee, J.M.; Jhanjhi, N.; Brohi, S.N. Performance of deep learning vs machine learning in plant leaf disease detection. Microprocess. Microsyst.; 2021; 80, 103615. [DOI: https://dx.doi.org/10.1016/j.micpro.2020.103615]
33. Barbedo, J.G.A. Factors influencing the use of deep learning for plant disease recognition. Biosyst. Eng.; 2018; 172, pp. 84-91. [DOI: https://dx.doi.org/10.1016/j.biosystemseng.2018.05.013]
34. Vardhini, P.H.; Asritha, S.; Devi, Y.S. Efficient Disease Detection of Paddy Crop using CNN. Proceedings of the 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE); Bengaluru, India, 9–10 October 2020; pp. 116-119.
35. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci.; 2016; 7, 1419. [DOI: https://dx.doi.org/10.3389/fpls.2016.01419]
36. Panigrahi, K.P.; Sahoo, A.K.; Das, H. A CNN Approach for Corn Leaves Disease Detection to Support Digital Agricultural System. Proceedings of the 4th International Conference on Trends in Electronics and Information; Tirunelveli, India, 15–17 June 2020; pp. 678-683.
37. Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A Survey on Deep Transfer Learning. Proceedings of the 27th International Conference on Artificial Neural Networks; Rhodes, Greece, 4–7 October 2018; pp. 270-279. [DOI: https://dx.doi.org/10.1007/978-3-030-01424-7_27]
38. Andrew, J.; Fiona, R.; Caleb, A.H. Comparative Study of Various Deep Convolutional Neural Networks in the Early Prediction of Cancer. Proceedings of the 2019 International Conference on Intelligent Computing and Control Systems (ICCS); Madurai, India, 15–17 May 2019; pp. 884-890. [DOI: https://dx.doi.org/10.1109/iccs45141.2019.9065445]
39. Onesimu, J.A.; Karthikeyan, J. An Efficient Privacy-preserving Deep Learning Scheme for Medical Image Analysis. J. Inf. Technol. Manag.; 2020; 12, pp. 50-67. [DOI: https://dx.doi.org/10.22059/JITM.2020.79191]
40. Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric.; 2019; 161, pp. 272-279. [DOI: https://dx.doi.org/10.1016/j.compag.2018.03.032]
41. Devaraj, P.; Arakeri, M.P.; Kumar, B.P.V. Early detection of leaf diseases in Beans crop using Image Processing and Mobile Computing techniques. Adv. Comput. Sci. Technol.; 2017; 10, pp. 2927-2945.
42. Qin, F.; Liu, D.; Sun, B.; Ruan, L.; Ma, Z.; Wang, H. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology. PLoS ONE; 2016; 11, e0168274. [DOI: https://dx.doi.org/10.1371/journal.pone.0168274]
43. Geetharamani, G.; Arun Pandian, J. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng.; 2019; 76, pp. 323-338. [DOI: https://dx.doi.org/10.1016/j.compeleceng.2019.04.011]
44. Azimi, S.; Kaur, T.; Gandhi, T.K. A deep learning approach to measure stress levels in plants due to Nitrogen deficiency. Measurement; 2020; 173, 108650. [DOI: https://dx.doi.org/10.1016/j.measurement.2020.108650]
45. Gadekallu, T.R.; Rajput, D.S.; Reddy, M.P.K.; Lakshmana, K.; Bhattacharya, S.; Singh, S.; Jolfaei, A.; Alazab, M. A novel PCA–whale optimization-based deep neural network model for classification of tomato plant diseases using GPU. J. Real-Time Image Process.; 2020; 18, pp. 1383-1396. [DOI: https://dx.doi.org/10.1007/s11554-020-00987-8]
46. Sinha, A.; Shekhawat, R.S. Olive Spot Disease Detection and Classification using Analysis of Leaf Image Textures. Procedia Comput. Sci.; 2020; 167, pp. 2328-2336. [DOI: https://dx.doi.org/10.1016/j.procs.2020.03.285]
47. Raikar, M.M.; Meena, S.M.; Kuchanur, C.; Girraddi, S.; Benagi, P. Classification and Grading of Okra-ladies finger using Deep Learning. Procedia Comput. Sci.; 2020; 171, pp. 2380-2389. [DOI: https://dx.doi.org/10.1016/j.procs.2020.04.258]
48. Joshi, R.C.; Kaushik, M.; Dutta, M.K.; Srivastava, A.; Choudhary, N. VirLeafNet: Automatic analysis and viral disease diagnosis using deep learning in Vigna mungo plant. Ecol. Inform.; 2020; 61, 101197. [DOI: https://dx.doi.org/10.1016/j.ecoinf.2020.101197]
49. Kaur, P.; Harnal, S.; Tiwari, R.; Upadhyay, S.; Bhatia, S.; Mashat, A.; Alabdali, A.M. Recognition of Leaf Disease Using Hybrid Convolutional Neural Network by Applying Feature Reduction. Sensors; 2022; 22, 575. [DOI: https://dx.doi.org/10.3390/s22020575] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35062534]
50. Nzeyimana, I. Optimizing Arabica Coffee Production Systems in Rwanda; 2018; Available online: https://www.researchgate.net/publication/325615794_Optimizing_Arabica_coffee_production_systems_in_Rwanda (accessed on 24 June 2023).
51. Parraga-Alava, J.; Cusme, K.; Loor, A.; Santander, E. RoCoLe: A robusta coffee leaf images dataset for evaluation of machine learning based methods in plant diseases recognition. Data Brief; 2019; 25, 104414. [DOI: https://dx.doi.org/10.1016/j.dib.2019.104414]
52. Coffee Farming in Rwanda: Savoring Success. Contribution to Newsletter 02/2015 of the SDC Agriculture and Food Security Network. Available online: https://www.shareweb.ch/site/Agriculture-and-Food-Security/news/Documents/2015_02_coffee_rwanda_fromm.pdf (accessed on 24 June 2023).
53. Hitimana, E.; Bajpai, G.; Musabe, R.; Sibomana, L.; Kayalvizhi, J. Implementation of IoT Framework with Data Analysis Using Deep Learning Methods for Occupancy Prediction in a Building. Future Internet; 2021; 13, 67. [DOI: https://dx.doi.org/10.3390/fi13030067]
54. Kuradusenge, M.; Hitimana, E.; Hanyurwimfura, D.; Rukundo, P.; Mtonga, K.; Mukasine, A.; Uwitonze, C.; Ngabonziza, J.; Uwamahoro, A. Crop Yield Prediction Using Machine Learning Models: Case of Irish Potato and Maize. Agriculture; 2023; 13, 225. [DOI: https://dx.doi.org/10.3390/agriculture13010225]
55. Koo, K.-M.; Cha, E.-Y. Image recognition performance enhancements using image normalization. Hum. Cent. Comput. Inf. Sci.; 2017; 7, 33. [DOI: https://dx.doi.org/10.1186/s13673-017-0114-5]
56. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. Proceedings of the AAAI Conference on Artificial Intelligence; San Francisco, CA, USA, 4–9 February 2017; Volume 31.
57. Joshi, K.; Tripathi, V.; Bose, C.; Bhardwaj, C. Robust Sports Image Classification Using InceptionV3 and Neural Networks. Procedia Comput. Sci.; 2020; 167, pp. 2374-2381. [DOI: https://dx.doi.org/10.1016/j.procs.2020.03.290]
58. Ramcharan, A.; Baranowski, K.; McCloskey, P.; Ahmed, B.; Legg, J.; Hughes, D.P. Deep learning for image-based cassava disease detection. Front. Plant Sci.; 2017; 8, 1852. [DOI: https://dx.doi.org/10.3389/fpls.2017.01852]
59. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016; pp. 2818-2826. Available online: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf (accessed on 24 June 2023).
60. Inception V3 Model Architecture. Available online: https://iq.opengenus.org/inception-v3-model-architecture/ (accessed on 19 July 2023).
61. Deep Learning. Deep Residual Networks (ResNet, ResNet50)—2023 Guide. Available online: https://viso.ai/deep-learning/resnet-residual-neural-network/ (accessed on 26 June 2023).
62. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778. [DOI: https://dx.doi.org/10.1109/CVPR.2016.90]
63. Al-Gaashani, M.S.; Samee, N.A.; Alnashwan, R.; Khayyat, M.; Muthanna, M.S.A. Using a Resnet50 with a Kernel Attention Mechanism for Rice Disease Diagnosis. Life; 2023; 13, 1277. [DOI: https://dx.doi.org/10.3390/life13061277]
64. Celano, G.G.A. A ResNet-50-Based Convolutional Neural Network Model for Language ID Identification from Speech Recordings. Proceedings of the Third Workshop on Computational Typology and Multilingual NLP (SIGTYP 2021); Online, June 2021; pp. 136-144. Available online: https://aclanthology.org/2021.sigtyp-1.13.pdf (accessed on 26 June 2023).
65. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv; 2014; arXiv:1409.1556. [DOI: https://dx.doi.org/10.48550/arxiv.1409.1556]
66. Tammina, S. Transfer learning using VGG-16 with Deep Convolutional Neural Network for Classifying Images. Int. J. Sci. Res. Publ.; 2019; 9, 9420. [DOI: https://dx.doi.org/10.29322/IJSRP.9.10.2019.p9420]
67. Ter-Sarkisov, A. Network of Steel: Neural Font Style Transfer from Heavy Metal to Corporate Logos. Comput. Sci.; 2020; 1, pp. 621-629. [DOI: https://dx.doi.org/10.5220/0009343906210629]
68. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. Available online: https://openaccess.thecvf.com/content_cvpr_2017/papers/Chollet_Xception_Deep_Learning_CVPR_2017_paper.pdf (accessed on 26 June 2023).
69. Sutaji, D.; Yıldız, O. LEMOXINET: Lite ensemble MobileNetV2 and Xception models to predict plant disease. Ecol. Inform.; 2022; 70, 101698. [DOI: https://dx.doi.org/10.1016/j.ecoinf.2022.101698]
70. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA, 21–26 July 2017; pp. 4700-4708.
71. Zhou, T.; Ye, X.; Lu, H.; Zheng, X.; Qiu, S.; Liu, Y. Dense Convolutional Network and Its Application in Medical Image Analysis. Microsc. Image Anal. Histopathol.; 2022; 2022, 2384830. [DOI: https://dx.doi.org/10.1155/2022/2384830] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35509707]
72. Powers, D.M.W. Evaluation from precision, recall, and F-measure to ROC, informedness, markedness & correlation. J. Mach. Learn. Technol.; 2011; 2, pp. 37-63.
73. Chicco, D. Ten quick tips for machine learning in computational biology. BioData Min.; 2017; 10, pp. 1-17. [DOI: https://dx.doi.org/10.1186/s13040-017-0155-3]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Coffee is a vital and valuable agricultural commodity in Rwanda, and it plays a pivotal role in generating foreign exchange for many developing nations. However, the coffee plant is vulnerable to pests and diseases that weaken production. Farmers, working with experts, currently rely on manual inspection to detect diseases, a process prone to human error. With rapid improvements in deep learning methods, it is now possible to detect and recognize plant diseases automatically and thereby support crop yield improvement. It is therefore essential to develop an efficient method for intelligently detecting, identifying, and predicting coffee leaf diseases. This study builds a Rwandan coffee plant dataset in which coffee rust, leaf miner, and red spider mites were identified as the most prevalent afflictions given the country's geography. From the collected dataset of 37,939 coffee leaf images, preprocessing and modeling were carried out with five deep learning models: InceptionV3, ResNet50, Xception, VGG16, and DenseNet. The data were split into training, validation, and testing sets at a ratio of 80%, 10%, and 10%, respectively, with a maximum of 10 epochs. A comparative analysis of the models' performances was conducted to select the best candidate for future portable use. The experiments showed DenseNet to be the best model, with an accuracy of 99.57%. The efficiency of the proposed method is validated through an unbiased evaluation against existing approaches using several metrics.
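As a rough illustration of the data partitioning the abstract describes, the 80/10/10 split over the 37,939 collected images could be sketched as follows. The function name, fixed seed, and index-based partitioning are illustrative assumptions for this sketch, not details taken from the paper:

```python
import random

def split_indices(n_images, train=0.80, val=0.10, seed=42):
    """Partition image indices into train/validation/test subsets
    using an 80/10/10 ratio; the test set takes the remainder."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_train = int(n_images * train)
    n_val = int(n_images * val)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train_set, val_set, test_set = split_indices(37939)
print(len(train_set), len(val_set), len(test_set))  # → 30351 3793 3795
```

Giving the test set the remainder (rather than a second `int` truncation) guarantees that every image lands in exactly one subset.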
Details
1 Department of Computer and Software Engineering, University of Rwanda, Kigali P.O. Box 3900, Rwanda;
2 Department of Biology, University of Rwanda, Kigali P.O. Box 3900, Rwanda;
3 Department of Computer Science, University of Rwanda, Kigali P.O. Box 2285, Rwanda;
4 African Center of Excellence in Data Science, University of Rwanda, Kigali P.O. Box 4285, Rwanda;
5 Rwanda Agriculture Board, Kicukiro District, Rubilizi, Kigali P.O. Box 5016, Rwanda;
6 Directorate of Grants and Partnership, Kampala International University, Ggaba Road, Kansanga, Kampala P.O. Box 20000, Uganda;
7 Bank of Kigali Plc, Kigali P.O. Box 175, Rwanda;