Full text


© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Accurately classifying the quality of apples is crucial for maximizing their commercial value. Deep learning techniques are being widely adopted for apple quality classification tasks, achieving impressive results. While existing research excels at classifying apple variety, size, shape, and defects, color and deformity analysis remains under-explored. Therefore, this study investigates the feasibility of using convolutional neural networks (CNNs) to classify the color and deformity of apples based on machine vision technology. First, a custom-assembled machine vision system was constructed to collect apple images. Then, image processing was performed to extract the image with the largest fruit diameter from the 45 images taken of each apple, establishing an image dataset. Three classic CNN models (AlexNet, GoogLeNet, and VGG16) were employed, with parameter optimization, for a three-category classification task (non-deformed slice–red apple, non-deformed stripe–red apple, and deformed apple) based on apple features. VGG16 achieved the best results, with an accuracy of 92.29%; AlexNet and GoogLeNet achieved 91.66% and 88.96%, respectively. Ablation experiments on the VGG16 model showed that each convolutional block contributed to the classification task. Finally, prediction with VGG16 was conducted on 150 apples, yielding an accuracy of 90.50%, comparable to or better than existing models. This study provides insights into apple classification based on color and deformity using deep learning methods.

Details

Title
Classification of Apple Color and Deformity Using Machine Vision Combined with CNN
Author
Qiu, Dekai 1; Guo, Tianhao 1; Yu, Shengqi 1; Liu, Wei 1; Li, Lin 2; Sun, Zhizhong 3; Peng, Hehuan 1; Hu, Dong 4

1 College of Optical, Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China; [email protected] (D.Q.); [email protected] (T.G.); [email protected] (S.Y.); [email protected] (W.L.)
2 School of Agricultural Engineering, Jiangsu University, Zhenjiang 212013, China; [email protected]; Key Laboratory of Modern Agricultural Equipment and Technology, Jiangsu University, Ministry of Education, Zhenjiang 212013, China
3 College of Chemistry and Materials Engineering, Zhejiang A&F University, Hangzhou 311300, China; [email protected]; College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
4 College of Optical, Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China; [email protected] (D.Q.); [email protected] (T.G.); [email protected] (S.Y.); [email protected] (W.L.); School of Agricultural Engineering, Jiangsu University, Zhenjiang 212013, China; [email protected]; Key Laboratory of Modern Agricultural Equipment and Technology, Jiangsu University, Ministry of Education, Zhenjiang 212013, China
First page
978
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2077-0472
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3084712221