In the original publication [1], there was a mistake in Figure 1 as published. The image counts in the offline processing section were mistakenly printed as 1119 instead of 1210 and as 1486 instead of 1577. These errors were the result of a calculation mistake. The corrected Figure 1 appears below.
In the original publication, there was a mistake in Table 7 as published. The first, second, and last rows were incorrectly printed with the same data as in Table 6. This occurred due to a failure to update these specific rows when applying the template. The corrected Table 7 appears below.
In the original publication, there was a mistake in Figure 11 as published. In the original submission, the caption stated, “The red boxes indicate areas where the leaf and lesion regions are over-segmented, while the yellow boxes highlight areas where the leaf or lesion regions are under-segmented”. However, the figure inadvertently reversed the colors. The corrected Figure 11 appears below.
There was an error in the original publication. The value of 1.24 M was derived by subtracting 1.60 from 2.84 for the YOLOv10n-STC-SE model; due to a calculation error, it was incorrectly printed as 1.26 M. The corrected text below reflects the accurate results of our analysis.
A correction has been made to Results and Discussion, Comparative Experiments on Attentional Mechanisms, Paragraph 2:
From the table, it can be observed that with the improvements brought about by the STC module, the F1 score of the baseline model improved by 1.12%. This performance enhancement was mainly attributed to the STC structure’s effective utilization of multi-scale feature information, which enhanced the model’s perceptual ability toward the input image. Additionally, the self-attention mechanism within the STC module strengthened the relationships between cotton leaves, lesions, and the background, achieving better global associations. The parameters and FLOPs were reduced by 1.24 M and 3.8 G, respectively. Building on the STC module, we further introduced the SE, CBAM, CoorAtt, and GAM attention mechanisms to study the impact of hybrid attention mechanisms on the model performance. By incorporating the SE attention mechanism, the F1 and mAP(M) scores increased by 0.85% and 0.8%, respectively, and the model size decreased by 0.44 MB. The CBAM and GAM attention mechanisms showed a decrease in the model segmentation ability, while the CoorAtt attention mechanism provided a slight improvement in the F1 score but was not as effective as the SE mechanism. The SE attention mechanism outperformed the other attention mechanisms. Combining the global self-attention information modeling method of the SW-MSA in the STC with the efficient reweighting transformation method of SENet retained the global feature information brought about by the self-attention mechanism and incorporated the low computational cost characteristics of the SE module’s global context module. This combination improved the model’s instance segmentation capability.
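For clarity, the subtraction behind the corrected figure can be checked directly. The following is a minimal sketch assuming only the two values named in the correction above (2.84 and 1.60, in millions of parameters); the variable names are illustrative only.

```python
# Minimal check of the corrected parameter reduction described above.
# The figures 2.84 M and 1.60 M are taken from the correction text;
# the variable names are illustrative, not from the original article.
params_before_m = 2.84  # parameter count before the improvement, in millions
params_after_m = 1.60   # parameter count after the improvement, in millions

reduction_m = params_before_m - params_after_m
print(f"Parameter reduction: {reduction_m:.2f} M")  # 1.24 M (not 1.26 M)
```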
There was an error in the original publication. In the statement ‘Table 9 shows that YOLO-VW has notable benefits in the following areas’, ‘Table 9’ was mistakenly printed as ‘Table 8’. Additionally, the value 48.8%, obtained by dividing 1.59 by 3.26, was incorrectly printed as 44.2% due to a calculation error. The comparison with ‘51.8%, 46.6%, and 17.8% of YOLOv9t’ was added due to an earlier omission.
A correction has been made to Results and Discussion, Comparative Experimental Analysis of Different Models, Paragraph 2:
Analysis of the data in the table reveals that YOLO-VW demonstrates significant advantages in terms of the accuracy, lightweight design, and speed. Table 9 shows that YOLO-VW has notable benefits in the following areas: Accuracy: YOLO-VW achieved a segmentation accuracy mAP(M) of 89.2%, which represents improvements of 3.9%, 2.9%, 4.2%, 2.9%, 12.8%, 1.9%, and 2.4% compared to YOLOv5s, YOLOv7-tiny, YOLOv8n, YOLOv9t, SOLOv2, Mask R-CNN, and the baseline model YOLOv10n, respectively. Lightweight Design: In terms of the weight, parameters, and FLOPs, YOLO-VW was compressed to 25.6%, 21.5%, and 30.4% of YOLOv5s; 29.5%, 24.8%, and 33.9% of YOLOv7-tiny; 56.6%, 48.8%, and 65% of YOLOv8n; 51.8%, 46.6%, and 17.8% of YOLOv9t; 2%, 3.4%, and 4% of SOLOv2; and 2.1%, 3.6%, and 3.8% of Mask R-CNN, respectively. Furthermore, compared to the YOLOv10n baseline model, YOLO-VW achieved compression ratios of 64.4%, 56%, and 66.1% of the original size across these three metrics. Detection Speed: YOLO-VW exhibited excellent performance, achieving 157.98 FPS, an improvement of 21.37 FPS over the original model. This reduction in the computation time is attributable to the improvements in the lightweight modules.
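As an illustration of the corrected lightweight-design figure, the sketch below reproduces the division named in the correction (1.59 divided by 3.26). Treating 3.26 M as the parameter count of the comparison model (presumably YOLOv8n, given where 48.8% appears in the corrected paragraph) is an assumption, not a value restated in the correction itself.

```python
# Sketch of the compression-ratio calculation corrected above.
# 1.59 (M) and 3.26 (M) are the two values named in the correction;
# treating 3.26 M as the YOLOv8n parameter count is an assumption.
yolo_vw_params_m = 1.59
reference_params_m = 3.26

ratio = yolo_vw_params_m / reference_params_m
print(f"Parameter compression ratio: {ratio:.1%}")  # 48.8% (not 44.2%)
```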
There was an error in the original publication. The statement “The results showed that the F1 and mAP(M) of the YOLO-VW model were 88.89% and 89.29%, which were increased by 3.91%, and 2.4%, respectively, compared with the YOLOv10n model. The numbers of parameters and FLOPs were also reduced by 1.59 M and 7.8G, respectively.” contains an error where “89.2%” was mistakenly printed as “89.29%”. This was caused by a printing oversight. Additionally, the preposition “by” should be changed to “to” due to an expression error in the English phrasing.
A correction has been made to Conclusions, Paragraph 1:
Due to the unclear boundary contours of CVW spots and the complex background of leaves, deep learning models are prone to problems such as mis-segmentation, over-segmentation, segmentation boundary errors, and excessive parameters, which makes it difficult to ensure a lightweight design and high accuracy simultaneously. To solve these problems, in this study, a CVW hazard level assessment system based on an improved YOLOv10n was proposed: the improved YOLO-VW model, incorporating improvements such as the STC, GhostConv, and SE modules and the SGD optimizer, demonstrated improved detection accuracy while reducing the model parameters and computation. The results showed that the F1 and mAP(M) of the YOLO-VW model were 88.89% and 89.2%, which were increased by 3.91% and 2.4%, respectively, compared with the YOLOv10n model. The numbers of parameters and FLOPs were also reduced to 1.59 M and 7.8 G, respectively. Compared with the YOLOv5s, YOLOv7-tiny, YOLOv8n, YOLOv9t, SOLOv2, and Mask R-CNN models, the YOLO-VW model achieved the highest accuracy in CVW segmentation with the smallest model size and the fewest parameters. The lightweight CVW hazard level assessment system was deployed on a client-server platform, and an Android smartphone app was developed to test the YOLO-VW and YOLOv10n models; the YOLO-VW model showed a processing time of 2.42 s per image and an accuracy of 85.5%, which was 15% higher than that of the YOLOv10n model.
The authors state that the scientific conclusions are unaffected. This correction was approved by the Academic Editor. The original publication has also been updated.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1 Overall flow of the CVW hazard level assessment.
Figure 11 The examples of the CVW segmentation detection in different environments: (a) original images. From top to bottom: cloudy, sunny, rainy, dusk, nighttime images taken without flash, and nighttime images taken with flash; (b) segmentation results of YOLOv10n; and (c) segmentation results of YOLO-VW. Note: the black regions represent the background, the green regions represent healthy leaves, and the red regions represent lesions. The red boxes indicate areas where the leaf and lesion regions are over-segmented, while the yellow boxes highlight areas where the leaf or lesion regions are under-segmented.
Table 7 Comparison results of different optimizers.
Model | Optimizer | P/% | R/% | F1/% | mAP(M)@0.5/% | Weight/MB | Parameters/M | FLOPs/G |
---|---|---|---|---|---|---|---|---|
YOLO-VW | Adam | 90.4 | 85.3 | 87.78 | 88.3 | 3.70 | 1.59 | 7.8 |
YOLO-VW | AdamW | 89.0 | 85.0 | 86.95 | 88.3 | 3.70 | 1.59 | 7.8 |
YOLO-VW | SGD | 92.1 | 85.9 | 88.89 | 89.2 | 3.69 | 1.59 | 7.8 |
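The F1 values in the corrected table are consistent with the standard harmonic mean of precision and recall. The sketch below simply re-derives the F1 column from the P and R columns; all numbers are taken from the table, and the formula is the standard F1 definition rather than anything specific to the original article.

```python
# Re-derive the F1 column of the corrected table from P and R using the
# standard definition F1 = 2PR / (P + R). All figures come from the table.
rows = [
    ("Adam",  90.4, 85.3, 87.78),
    ("AdamW", 89.0, 85.0, 86.95),
    ("SGD",   92.1, 85.9, 88.89),
]

for optimizer, p, r, reported_f1 in rows:
    f1 = 2 * p * r / (p + r)
    print(f"{optimizer}: computed F1 = {f1:.2f}, reported F1 = {reported_f1}")
```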
Reference
1. Liao, J.; He, X.; Liang, Y.; Wang, H.; Zeng, H.; Luo, X.; Li, X.; Zhang, L.; Xing, H.; Zang, Y. A Lightweight Cotton Verticillium Wilt Hazard Level Real-Time Assessment System Based on an Improved YOLOv10n Model. Agriculture 2024, 14, 1617. https://doi.org/10.3390/agriculture14091617
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Details

1 College of Engineering, South China Agricultural University, Guangzhou 510642, China; [email protected] (J.L.); [email protected] (X.H.); [email protected] (Y.L.); [email protected] (H.W.); [email protected] (H.Z.); [email protected] (X.L.), Key Laboratory of Key Technology on Agricultural Machine and Equipment (South China Agricultural University), Ministry of Education, Guangzhou 510642, China, Guangdong Provincial Key Laboratory of Agricultural Artificial Intelligence (GDKL-AAI), Guangzhou 510642, China
2 College of Engineering, South China Agricultural University, Guangzhou 510642, China; [email protected] (J.L.); [email protected] (X.H.); [email protected] (Y.L.); [email protected] (H.W.); [email protected] (H.Z.); [email protected] (X.L.)
3 College of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China; [email protected]
4 College of Agriculture, South China Agricultural University, Guangzhou 510642, China; [email protected]
5 School of Information Technology & Engineering, Guangzhou College of Commerce, Guangzhou 511363, China