This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Tomato is one of the main vegetables consumed by humans for its antioxidant content, vitamins (A, B1, B2, B6, C, and E), and minerals such as potassium, magnesium, manganese, zinc, copper, sodium, iron, and calcium [1]. This fruit provides health benefits in the prevention of chronic diseases such as cancer, osteoporosis, and cataracts. One of the main indicators of the internal composition of the tomato is its degree of maturity. This characteristic is very important for determining the logistics of harvest, transport, commercialization, and consumption. In this respect, the United States Department of Agriculture (USDA) establishes six stages of maturity: Green, Breaker, Turning, Pink, Light Red, and Red [2]; these are shown in Figure 1.
[figure omitted; refer to PDF]
In the literature, research on artificial vision reports methodologies to estimate the maturity stages of the tomato using color as the main characteristic. Tomato maturity estimation models have been proposed based on different color space models. For example, the
On the other hand, the RGB color model has also been used to identify tomato maturity. Reference [7] proposed a methodology to identify red tomatoes for automatic cutting by a robot; it used RGB images analyzed through the relationship between the red-blue (R-B) and red-green (R-G) components, which allowed the formulation of the inequalities:
In 2018, [9] developed a tomato maturity classification system that used two types of tomatoes: with defects and without defects. For the fruit classification, a backpropagation artificial neural network (BPNN) implemented in Matlab© was used. This system identified the maturity degrees red, orange, turning, and green. The architecture of the neural network had thirteen inputs, associated with six color features and seven shape features, twenty neurons in the hidden layer, and one in the output. Reference [10] proposed a method using a BPNN to detect the maturity levels (green, orange, and red) of tomatoes of the Roma and Pera varieties.
The color characteristics were extracted from five concentric circles on the fruit, and the average hue values of each subregion were used to predict the maturity level of the samples; these values were the inputs of the BPNN. The average precision in detecting the three maturity levels with this method was 99.31%, with a standard deviation of 1.2%. Reference [11] implemented a classification system based on convolutional neural networks (CNN). The proposed classification architecture was composed of three stages: the first stage handled three-channel color images of 200 pixels in height and width; the second used five CNN layers that extracted the main characteristics. The convolution kernels are of sizes
Currently, with Computer Vision Systems (CVS) and Fuzzy Logic (FL), maturity classification applications for tomatoes, guavas, apples, mangoes, and watermelons have been developed [12]. FL is an artificial intelligence technique that models human reasoning from the linguistic knowledge of an expert to solve a problem; hence, the logical processing of the variables is qualitative, based on quantitative membership functions [13]. References [14, 4] argue that the classification of the maturity of the elements of study is composed of two systems: color identification and labeling. For color representation, they used image histograms based on the RGB, HSI, and CIELab color space models; for the automatic labeling of the fruits, they designed a fuzzy system built on a knowledge base transferred by an expert. On the other hand, the proposal made by [15] estimated the maturity level of apples using the RGB color space; their methodology used four images of different views of the sample. They proposed four maturity classes, defined through a fuzzy system as mature, low mature, near to mature, or too mature. The inputs of the fuzzy system were the average values of each color map of the segmented images. Reference [13] developed an image classification system for apple, sweet lime, banana, guava, and orange, implemented in Matlab©. The characteristics extracted from each fruit image were the area and the major and minor axes of each sample; these were used as inputs of the fuzzy system for classification. Another similar study was reported by [16], which implemented a fuzzy system to classify guavas into the maturity stages raw, ripe, and overripe. The classification was based on apparent color, considering three inputs: hue, saturation, and luminosity.
Following this trend, this paper reports the behavior of tomato maturity based on color in the RGB model, the model with which commercial digital cameras commonly work because they are mostly built with an optical Bayer filter over the photosensors. A fuzzy system was used in the classification stage. The main contribution of this work is the comparison of color models for describing tomato maturity stages. In addition, a Raspberry Pi was used for the capture and the estimation of the output variables.
2. Materials and Methods
2.1. Sample Preparation
In the proposed method, sixty tomato samples (acquired at a local market) were used and classified into six maturity stages (Green, Breaker, Turning, Pink, Light Red, and Red). The classification was based on the criteria of the United States Department of Agriculture (USDA, 1997). The samples were divided into two groups, the training and validation sets, as shown in Table 1.
Table 1
Tomato sample division into training and test sets.
Maturity stage | Training | Test
---|---|---
Green (G) | 3 | 2
Breaker (B) | 3 | 2
Turning (T) | 2 | 4
Pink (P) | 11 | 5
Light red (LR) | 8 | 5
Red (R) | 13 | 2
2.2. Artificial Vision System
Artificial vision systems (AVS) are intended to emulate the functionality of human vision to describe elements in captured images. Some advantages of AVS compared with other approaches are reduced cost, improved accuracy, increased precision, and reliable estimation [14]. Figure 2 shows the AVS, which is composed of three sections: (a) image capture, (b) the lighting subsystem, and (c) the processing subsystem. The first obtains spatial information and fruit characteristics, the second maintains the experimental conditions, and the third performs operations such as histogram equalization, edge highlighting, segmentation, component labeling, and tomato maturity estimation [17–19].
[figure omitted; refer to PDF]
The images were acquired with the AVS, which was installed in a black box of dimensions
The proposed system is shown in Figure 3; in the first stage, the RGB images of the samples were acquired. After that, images were segmented to create a vector with averages of the red, green, and blue components, which worked as an input to the fuzzy system.
[figure omitted; refer to PDF]
2.3. Image Acquisition
Four images were acquired of each fruit, one per view of the tomato, for a total of 240 images corresponding to the 60 fruits. Figure 4 shows the four views of a sample in the green maturity stage. The captured images have a resolution of (
[figures omitted; refer to PDF]
2.4. Image Segmentation
Figures 5(a)–5(c) show the segmentation process, implemented in Python 3.7 with OpenCV. The first step captured the images and assigned each a maturity level. The second step binarized them in HSV space using the ranges 100 ≤ H ≤ 156, 90 ≤ S ≤ 255, and 0 ≤ V ≤ 255, with each channel scaled between 0 and 255. The third step segmented each tomato image and labeled the regions with a connected-components algorithm. The fourth step discarded segments smaller than 500 pixels; finally, the resulting masks were used to extract the region of interest of each sample.
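The thresholding, labeling, and area-filtering steps above can be sketched as follows. This is a minimal illustration in plain NumPy rather than the authors' OpenCV code: the flood-fill labeling stands in for OpenCV's connected-components routine, and 4-connectivity is an assumption.

```python
import numpy as np

def hsv_mask(hsv, h=(100, 156), s=(90, 255), v=(0, 255)):
    """Binarize an HSV image with the per-channel ranges used in the paper."""
    H, S, V = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((H >= h[0]) & (H <= h[1]) &
            (S >= s[0]) & (S <= s[1]) &
            (V >= v[0]) & (V <= v[1]))

def label_components(mask):
    """4-connected component labeling via iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                n += 1
                stack = [(i, j)]
                labels[i, j] = n
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n
                            stack.append((ny, nx))
    return labels, n

def keep_large(labels, n, min_area=500):
    """Zero out segments below min_area pixels (500 px in the paper)."""
    out = labels.copy()
    for k in range(1, n + 1):
        if (labels == k).sum() < min_area:
            out[out == k] = 0
    return out
```

In practice the surviving labels would be turned into per-tomato masks that select the pixels of interest in the original RGB image.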
[figures omitted; refer to PDF]
2.5. Attribute Selection
The attributes were selected following the methodology proposed by [15]. The mean channel values of the segmented images were used, considering that in the early maturity stages the studied tomatoes had a high green content and very low red content, with the inverse behavior as the fruit reached full maturity [14]. The mean behavior of the segments was mapped using the image channels of the 40 training samples, in the RGB color model, CIELab 1976, and the Minolta
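Computing the mean-channel descriptor described above reduces to averaging the R, G, and B values over the masked tomato pixels. The sketch below assumes an RGB array and a boolean segmentation mask; the simple averaging of the four views into one descriptor per fruit is also an assumption about how the per-view values are combined.

```python
import numpy as np

def mean_rgb(image, mask):
    """Average R, G, B over the segmented tomato pixels only.

    image: (H, W, 3) uint8 RGB array; mask: (H, W) boolean segmentation.
    Returns the 3-element descriptor vector fed to the fuzzy system.
    """
    pixels = image[mask]           # (N, 3) array of fruit pixels
    return pixels.mean(axis=0)     # [mean R, mean G, mean B]

def fruit_descriptor(views_and_masks):
    """Combine the four views of one fruit into a single mean-RGB vector."""
    return np.mean([mean_rgb(img, m) for img, m in views_and_masks], axis=0)
```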
[figure omitted; refer to PDF]
[figure omitted; refer to PDF][figure omitted; refer to PDF]
2.6. Fuzzification
In this stage, the main purpose of fuzzification is to translate the input values into linguistic variables [22]. In the proposed system, a vector formed by the average values of the RGB components is used as the input variable. The input fuzzification was done using triangular membership functions, as shown in Figure 9. These functions were selected for their easy hardware implementation.
[figure omitted; refer to PDF]
It is well known that the first three maturity stages require greater sensitivity to identify changes than the remaining ones. Therefore, in this paper, the membership functions for the green input variable consisted of four sections, while three membership functions were proposed for the blue and red inputs, resulting in six maturity states. Finally, the range of each input and of the output stage was determined by selecting the linguistic states for each variable, i.e., high, medium, and low.
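A triangular membership function of the kind shown in Figure 9 can be written in a few lines. The breakpoints below are purely hypothetical placeholders; the actual supports of the four green-channel functions come from the training-set ranges of Table 3 and are not reproduced in the text.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to peak at b, falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical breakpoints for the four green-channel functions
# (LG, MLG, MHG, HG); the paper's real values come from Table 3 / Figure 9.
GREEN_MF = {
    "LG":  (12.0, 14.0, 17.0),
    "MLG": (14.0, 17.0, 20.0),
    "MHG": (17.0, 20.0, 22.0),
    "HG":  (20.0, 22.0, 24.0),
}

def fuzzify_green(g):
    """Map one green-channel mean to its four membership degrees."""
    return {name: tri(g, *abc) for name, abc in GREEN_MF.items()}
```

For the red and blue inputs, the same `tri` function would be reused with three sections each instead of four.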
2.7. Fuzzy System Implementation
The fuzzy system was implemented with the Matlab ANFIS editor (anfisedit), and the images were captured with a Raspberry Pi camera. The training data set was formed by the means of the RGB channels of each image, with the output labeled for each sample.
Four variants of the fuzzy system were designed to classify the maturity stage of the tomato. In all of them, several parameters were kept fixed: the system inputs, the number of training epochs, and the type of membership functions. Table 2 shows the architecture of each fuzzy system and the error obtained after training; the designs with the lowest errors were Models 3 and 4. The triangular membership function was selected because of its easy implementation.
Table 2
Fuzzy system training results.
Fuzzy system | Inputs | Number of membership functions | Type of membership functions | Epochs | Error
---|---|---|---|---|---
Model 1 | Mean RGB component | 3,3,3 | Triangular | 100 | 0.70536
Model 2 | Mean RGB component | 3,4,3 | Triangular | 100 | 0.53892
Model 3 | Mean RGB component | 7,7,7 | Triangular | 100 | 0.01044
Model 4 | Mean RGB component | 10,10,10 | Triangular | 100 |
The programming was carried out using the methodology proposed by [23]. The linguistic variables are LR (Low Red), MR (Middle Red), HR (High Red), LG (Low Green), MLG (Medium Low Green), MHG (Medium High Green), HG (High Green), LB (Low Blue), MB (Middle Blue), and HB (High Blue).
2.8. Inferential Logic
The inferential logic was determined by identifying the ranges between the maximum and minimum averages of the RGB components of the training set images. Table 3 shows the maximum and minimum averages for each maturity state according to the USDA. With this procedure, a set of 36 rules was determined for the fuzzy system; the linguistic terms used were low, medium low, middle, medium high, and high (Table 4).
Table 3
Maximum and minimum range of the averages of the RGB channels for each state of maturity.
Maturity level | Minimum red mean | Maximum red mean | Minimum green mean | Maximum green mean | Minimum blue mean | Maximum blue mean |
---|---|---|---|---|---|---|
Green (G) | 21.5402641 | 23.4607073 | 21.4570773 | 22.9846567 | 17.4503043 | 20.5893361 |
Breaker (B) | 19.1914739 | 25.6090892 | 19.7009942 | 23.9158162 | 19.788143 | 24.2440957 |
Turning (T) | 8.29093793 | 25.4734785 | 13.1834743 | 22.9724406 | 19.0287402 | 29.0377504 |
Pink (P) | 17.9856155 | 24.126667 | 17.6915075 | 21.1138724 | 17.4557533 | 19.9197693 |
Light red (LR) | 7.38083985 | 24.0121635 | 15.451138 | 21.1648058 | 3.99488988 | 20.7513841 |
Red (R) | 7.35927338 | 24.064192 | 15.9823308 | 21.1106179 | 5.10223285 | 20.3244826 |
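The per-class minimum and maximum averages of Table 3 can be extracted from the labeled training descriptors in a few lines. This is a sketch under the assumption that the descriptors are stored as one mean-RGB row per image with a parallel list of class labels.

```python
import numpy as np

def channel_ranges(descriptors, labels):
    """Per-class min/max of the mean-RGB descriptors, as in Table 3.

    descriptors: (N, 3) array-like of [mean R, mean G, mean B] rows.
    labels: length-N sequence of class names (e.g. "G", "B", "T", ...).
    Returns {class: (min_vector, max_vector)}.
    """
    descriptors = np.asarray(descriptors, dtype=float)
    ranges = {}
    for cls in sorted(set(labels)):
        rows = descriptors[[lab == cls for lab in labels]]
        ranges[cls] = (rows.min(axis=0), rows.max(axis=0))
    return ranges
```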
Table 4
Inference rules.
Class | Red mean (min) | Red mean (max) | Green mean (min) | Green mean (max) | Blue mean (min) | Blue mean (max)
---|---|---|---|---|---|---
(1) Green (G) | Middle | Middle | Medium high | High | Low | Middle |
(2) Breaker (B) | Middle | High | Medium low | High | Middle | High |
(3) Turning (T) | Low | High | Low | High | Middle | High |
(4) Pink (P) | Low | High | Medium low | Medium high | Low | Middle |
(5) Light red (LR) | Low | Middle | Low | Medium high | Low | High |
(6) Red (R) | Low | High | Low | Medium low | Low | High |
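Each row of Table 4 can be read as a fuzzy rule whose antecedents are combined with the min (AND) operator to give a firing strength. The sketch below assumes rules and fuzzified inputs are held as plain dictionaries keyed by the linguistic terms defined earlier; the specific term names in the example are illustrative.

```python
def firing_strength(rule, memberships):
    """AND the antecedents of one rule with the min operator.

    rule: mapping of input name to linguistic term,
          e.g. {"red": "MR", "green": "MHG", "blue": "LB"}.
    memberships: fuzzified inputs, one membership dict per input,
          e.g. {"red": {"MR": 0.8, ...}, "green": {...}, "blue": {...}}.
    """
    return min(memberships[ch][term] for ch, term in rule.items())
```

Evaluating all 36 rules this way yields one firing strength per rule, which the defuzzification stage then combines into the crisp maturity class.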
2.9. Defuzzification
Defuzzification was performed with equation (11), using the 36 inference rules obtained for the maturity model. The Takagi-Sugeno fuzzy model is illustrated in Figure 10.
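Equation (11) is not reproduced in this extract, but for a Takagi-Sugeno system of this kind the crisp output is the firing-strength-weighted average of the rule consequents, y = Σ wᵢzᵢ / Σ wᵢ. The sketch below assumes zero-order (constant) consequents, with the maturity class indices 1-6 as the consequent values.

```python
def sugeno_output(strengths, consequents):
    """Weighted-average defuzzification: y = sum(w_i * z_i) / sum(w_i).

    strengths: firing strength w_i of each rule.
    consequents: constant consequent z_i of each rule
                 (here, assumed maturity class indices 1..6).
    """
    num = sum(w * z for w, z in zip(strengths, consequents))
    den = sum(strengths)
    return num / den if den else 0.0
```

Rounding the crisp output to the nearest integer then gives the predicted maturity class, which is how outputs such as 4.0041 in Table 6 map back to class 4.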
[figure omitted; refer to PDF]
2.10. Fuzzy System Proposal
Three proposed architectures of fuzzy systems were evaluated for identifying the maturity of the fruit, as shown in Figure 11. They used the means of the RGB channels of the segments associated with the image. The first architecture uses the
[figures omitted; refer to PDF]
To train the ANFIS, forty samples covering the six maturity stages were used. Table 5 shows the training results of the three proposed models after 100 epochs. It can be observed that Model 1 has the lowest training error, 0.046; this model uses the entries
Table 5
Proposed fuzzy system.
Model | Reference | Input | Number of membership functions (triangular) | Training error (100 epochs) |
---|---|---|---|---|
1 | Proposed in this work | — | 3,4,3 | 0.046
2 | [7] | — | 10 | 1.16
3 | [4] | — | 3,3,3,3 | 0.81
3. Results
The results were obtained by applying the models to a set of 20 samples that were not part of the training set; they are shown in Table 6. Looking at Model 1, it can be noticed that it presented an error of
Table 6
Output and error of different classification systems.
Test set class | Model 1 output | Model 1 MSE (×10⁻⁶) | Model 2 output | Model 2 MSE | Model 3 output | Model 3 MSE (×10⁻⁶)
---|---|---|---|---|---|---
1 | 0.9848 | 230.0 | 4.0274 | 9.1653 | 0.9883 | 135.1337
5 | 5.0015 | 2.380 | 4.9622 | 0.0014 | 5.0002 | 0.0831
6 | 5.9993 | 0.420 | 6.0460 | 0.0014 | 6.0003 | 0.15688
3 | 3.0000 | 0.00042 | 3.5879 | 0.3457 | 2.9999 | 0.0003
3 | 2.9995 | 0.24200 | 3.5936 | 0.3524 | 2.9995 | 0.1605
5 | 5.0142 | 203.00 | 4.0408 | 0.9200 | 4.9626 | 1394.9366
2 | 1.9989 | 1.1500 | 2.6855 | 0.4699 | 2.0091 | 83.24365
4 | 4.0041 | 17.400 | 3.9967 | | 3.9806 | 372.6216
4 | 3.9857 | 17.400 | 4.1620 | 0.0262 | 4.0371 | 1383.8290
1 | 0.9848 | 230.00 | 4.0274 | 9.1653 | 0.9883 | 135.1337
6 | 5.9998 | 0.0109 | 5.2491 | 0.5637 | 5.9863 | 187.4693
2 | 1.9882 | 139.00 | 3.9156 | 3.6697 | 1.9989 | 1.0575
3 | 2.9999 | 0.00185 | 4.1862 | 1.4070 | 2.9999 | 0.0021
5 | 4.9954 | 20.600 | 3.0531 | 3.7902 | 5.0375 | 1407.2835
5 | 5.0158 | 251.00 | 4.0636 | 0.8767 | 4.9669 | 1093.2020
3 | 2.9045 | 9110.0 | 3.8458 | 0.7154 | 2.9958 | 17.0728
5 | 4.9946 | 28.600 | 4.2339 | 0.5867 | 5.0047 | 22.7437
4 | 3.9909 | 81.200 | 4.1841 | 0.0339 | 4.0414 | 1721.7774
4 | 3.9857 | 204.00 | 3.1609 | 0.7040 | 3.9314 | 4702.4978
4 | 4.0167 | 204.00 | 4.2202 | 0.0485 | 4.0129 | 167.4634
Sum | — | 10739.9 | — | 32.8434 | — | 12825.86
Average error | — | 536.995 | — | 1.64217 | — | 641.293
4. Discussion
According to the results, Models 1 and 3 correctly classified the set of test samples. Additionally, they presented the lowest sums of squared errors; the fuzzy system designed for RGB components with an averaged value of
Additionally, Model 3 was a fuzzy system that used the averages (
It can be inferred that, by using the subtraction (R-G) as a descriptor, the fuzzy classifier hid the information of the R and G components while discarding the blue component. This system had difficulty classifying classes 3, 4, and 5; consequently, its efficiency was very low compared with the others. The color representation with the components (
In other words, the classification of the six tomato maturity stages can be reliably done in the RGB color space, mainly due to the nonlinear surfaces created by the fuzzy system (or other mathematical functions) that separate each stage. However, the main limitation of the proposed system is that all experimentation was carried out in a controlled environment (fixed lighting, fixed camera-to-sample distance, and a matte black background). The research team is already addressing this limitation, and a proposal will be reported in an upcoming paper.
5. Conclusion
In this work, a CVS was designed using a Raspberry Pi 3, which classified tomato maturity degrees according to the USDA criteria with an average error of
One notable aspect is the use of the Raspberry Pi 3 and the Raspberry Pi camera module 2, which made it possible to create applications of easy technology transfer and rapid implementation focused on the classification of fruit and vegetable maturity. This system can be extended to CVS estimation of soluble solids, vitamins, and antioxidants in tomato.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Authors’ Contributions
Marcos Jesús Villaseñor Aguilar contributed to the implementation of the image acquisition system of the tomato samples. Also, he developed the capture and processing system software for the determination of tomato maturity levels. J. Enrique Botello Alvarez contributed to the conceptualization, the design of the vision system experiment, the tutoring, and the supply of study materials, laboratory samples, and equipment. F. Javier Pérez-Pinal contributed to the preparation, creation of the published work, writing of the initial draft, and validation of the results of the vision system. Miroslava Cano-Lara focused on the validation of the vision system of acquisition and of the algorithms. M. Fabiola León Galván focused on the revision of the results in the classification system and in the conceptualization. Micael-Gerardo Bravo-Sánchez contributed to the methodology design, the tutoring, and the establishment of the design of the vision system experiment. Alejandro Israel Barranco Gutierrez led the supervision and responsibility of the leadership for the planning, the execution of the research activity, the technical validation, and the follow-up of the publication of the manuscript.
Acknowledgments
The authors greatly appreciate the support of TecNM, CONACyT, PRODEP, UG, ITESI, and ITESS.
[1] A. Gastélum-Barrios, R. A. Bórquez-López, E. Rico-García, M. Toledano-Ayala, G. M. Soto-Zarazúa, "Tomato quality evaluation with image processing: a review," African Journal of Agricultural Research, vol. 6 no. 14, pp. 3333-3339, 2011.
[2] K. Choi, G. Lee, Y. J. Han, J. M. Bunn, "Tomato maturity evaluation using color image analysis," Transactions of the ASAE, vol. 38 no. 1, pp. 171-176, DOI: 10.13031/2013.27827, 1995.
[3] S. R. Rupanagudi, B. S. Ranjani, P. Nagaraj, V. G. Bhat, "A cost effective tomato maturity grading system using image processing for farmers," Proceedings of 2014 International Conference on Contemporary Computing and Informatics (IC3I),DOI: 10.1109/ic3i.2014.7019591, .
[4] M. A. Vazquez-Cruz, S. N. Jimenez-Garcia, R. Luna-Rubio, L. M. Contreras-Medina, E. Vazquez-Barrios, E. Mercado-Silva, I. Torres-Pacheco, R. G. Guevara-Gonzalez, "Application of neural networks to estimate carotenoid content during ripening in tomato fruits ( Solanum lycopersicum )," Scientia Horticulturae, vol. 162, pp. 165-171, DOI: 10.1016/j.scienta.2013.08.023, 2013.
[5] R. Arias, T.-C. Lee, L. Logendra, H. Janes, "Correlation of lycopene measured by HPLC with the l ∗ , a ∗ , b ∗ color readings of a hydroponic tomato and the relationship of maturity with color and lycopene content," Journal of Agricultural and Food Chemistry, vol. 48 no. 5, pp. 1697-1702, DOI: 10.1021/jf990974e, 2000.
[6] V. Pavithra, R. Pounroja, B. Sathya Bama, "Machine vision based automatic sorting of cherry tomatoes," 2015 2nd International Conference on Electronics and Communication Systems (ICECS), pp. 271-275, DOI: 10.1109/ECS.2015.7124907, .
[7] Y. Takahashi, J. Ogawa, K. Saeki, "Automatic tomato picking robot system with human interface using image processing," IECON'01. 27th Annual Conference of the IEEE Industrial Electronics Society (Cat. No.37243), pp. 433-438, DOI: 10.1109/IECON.2001.976521.
[8] G. Polder, G. W. A. M. van der Heijden, I. T. Young, "Spectral image analysis for measuring ripeness of tomatoes," Transactions of the ASAE, vol. 45 no. 4, pp. 1155-1161, DOI: 10.13031/2013.9924, 2002.
[9] S. Kaur, A. Girdhar, J. Gill, "Computer vision-based tomato grading and sorting," Advances in Data and Information Sciences, pp. 75-84, DOI: 10.1007/978-981-10-8360-0_7, 2018.
[10] P. Wan, A. Toudeshki, H. Tan, R. Ehsani, "A methodology for fresh tomato maturity detection using computer vision," Computers and Electronics in Agriculture, vol. 146, pp. 43-50, DOI: 10.1016/j.compag.2018.01.011, 2018.
[11] L. Zhang, J. Jia, G. Gui, X. Hao, W. Gao, M. Wang, "Deep learning based improved classification system for designing tomato harvesting robot," IEEE Access, vol. 6, pp. 67940-67950, DOI: 10.1109/access.2018.2879324, 2018.
[12] A. R. Mansor, M. Othman, M. Nazari, A. Bakar, "Regional conference on science, technology and social sciences (RCSTSS 2014)," Business and Social Sciences,DOI: 10.1007/978-981-10-1458-1, 2016.
[13] H. G. Naganur, S. S. Sannakki, V. S. Rajpurohit, R. Arunkumar, "Fruits sorting and grading using fuzzy logic," International Journal of Advanced Research in Computer Engineering and Technology, vol. 1 no. 6, pp. 117-122, 2012.
[14] N. Goel, P. Sehgal, "Fuzzy classification of pre-harvest tomatoes for ripeness estimation – an approach based on automatic rule learning using decision tree," Applied Soft Computing, vol. 36, pp. 45-56, DOI: 10.1016/j.asoc.2015.07.009, 2015.
[15] M. Dadwal, V. K. Banga, "Estimate ripeness level of fruits using RGB color space and fuzzy logic technique," International Journal of Engineering and Advanced Technology (IJEAT), vol. 2 no. 1, 2012.
[16] R. Hasan, S. Muhammad, G. Monir, "Fruit maturity estimation based on fuzzy classification," Proceedings of the 2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), pp. 27-32, DOI: 10.1109/ICSIPA.2017.8120574, .
[17] M. S. Acosta-Navarrete, J. A. Padilla-Medina, J. E. Botello-Alvarez, J. Prado-Olivarez, M. M Perez-Rios, J. J. Díaz-Carmona, L. M. Contreras-Medina, C. Duarte-Galvan, J. R. Millan-Almaraz, A. A. Fernandez-Jaramillo, "Instrumentation and control to improve the crop yield," Biosystems Engineering: Biofactories for Food Production in the Century XXI, pp. 363-400, DOI: 10.1007/978-3-319-03880-3_13, 2014.
[18] A. K. Seema, G. S. Gill, "Automatic fruit grading and classification system using computer vision: a review," 2015 Second International Conference on Advances in Computing and Communication Engineering, pp. 598-603, DOI: 10.1109/ICACCE.2015.15, .
[19] B. Zhang, W. Huang, J. Li, C. Zhao, S. Fan, J. Wu, C. Liu, "Principles, developments and applications of computer vision for external quality inspection of fruits and vegetables: a review," Food Research International, vol. 62, pp. 326-343, DOI: 10.1016/j.foodres.2014.03.012, 2014.
[20] D. Wu, D.-W. Sun, "Colour measurements by computer vision for food quality control--a review," Trends in Food Science & Technology, vol. 29 no. 1,DOI: 10.1016/j.tifs.2012.08.004, 2013.
[21] M. Pagnutti, R. E. Ryan, G. Cazenavette, M. Gold, R. Harlan, E. Leggett, J. Pagnutti, "Laying the foundation to use raspberry pi 3 V2 camera module imagery for scientific and engineering purposes," Journal of Electronic Imaging, vol. 26 no. 1, article 013014,DOI: 10.1117/1.JEI.26.1.013014, 2017.
[22] V. A. Marcos, Á. T. Erik, R. A. Agustín, O. M. Horacio, P. M. José A, "Técnicas de inteligencia artificial para el control de estabilidad de un manipulador paralelo 3RRR," Revista De Ingeniería Eléctrica, Electrónica Y Computación, vol. 11 no. 1, 2013.
[23] B. Gutiérrez, Á. L. Cárdenas, F. P. Pinal, "Implementación de sistema difuso en arduino uno," , . November 2016, https://www.researchgate.net/profile/Alejandro_Barranco_Gutierrez5/publication/309676195_Implementacion_de_sistema_difuso_en_Arduino_Uno/links/581cc82f08ae12715af20b4e/Implementacion-de-sistema-difuso-en-Arduino-Uno.pdf
Copyright © 2019 Marcos J. Villaseñor-Aguilar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. http://creativecommons.org/licenses/by/4.0/
Abstract
Artificial vision systems (AVS) have become very important in precision agriculture, applied to produce high-quality, low-cost foods with high functional characteristics generated through environmental care practices. This article reports the design and implementation of a new fuzzy classification architecture based on the RGB color model with descriptors. Three inputs were used, associated with the average value of the color components of four views of the tomato; the number of triangular membership functions associated with the components
1 Instituto Tecnológico de Celaya, Celaya 38010, Mexico
2 Departamento de Mecatrónica del ITESI, Irapuato 36698, Mexico
3 Departamento de Alimentos, Universidad de Guanajuato, Mexico
4 Instituto Tecnológico de Celaya, Celaya 38010, Mexico; Cátedras Conacyt, Mexico