This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
The textile industry is a traditional pillar of China's economic development and an important livelihood industry. Product quality strongly affects the industry's competitiveness: fabric defects can reduce profits by 45%–65% [1]. Defect detection therefore plays an important role in textile quality control. Traditional defect inspection relies on skilled operators who are expensive to train, and manual inspection is slow (less than 20 m/min), with high error and miss rates caused by fatigue and other subjective factors. Automatic fabric defect detection has thus become an active and challenging research topic at the intersection of the textile industry and machine vision.
The core of machine-vision-based fabric defect detection is extracting defect-related characteristics from textile images; detailed reviews can be found in References [2, 3]. Thomas and Cattoen [4] used the gray-scale means of image rows and columns as defect-related characteristics, but these are sensitive to illumination changes. Ye [5] presented fuzzy inference based on image histogram statistics, which is robust to defect rotation and translation but struggles with complex image textures. For complex texture images, researchers have proposed methods based on edges [6], local binary patterns [7, 8], contourlets [9], and the gray-level co-occurrence matrix [10, 11]. These methods identify defective images well but have difficulty recognizing specific defect types. Other researchers have exploited high-frequency characteristics through Fourier transform [12, 13], Gabor filter [7, 14], and wavelet transform [15, 16] methods. Compared with detection in the spatial domain, however, frequency-domain processing incurs higher space-time overheads.
Deep learning has been widely applied in computer vision [17–19], and researchers have designed deep neural networks for data-driven fabric defect detection. Liu et al. [20] proposed a multistage GAN that detects fabric defects through unsupervised data reconstruction and can therefore cope with diverse defect types. Mei et al. [21] introduced a multiscale convolutional denoising autoencoder that learns to reconstruct textile images, using the reconstruction errors for automatic defect detection. Xian et al. [22] studied the closely related problem of metallic surface defect detection, using convolutional neural network-based segmentation to detect and recognize defect regions. Wei et al. [23] used faster-RCNN to detect fabric defects automatically, achieving satisfactory performance thanks to its strong feature engineering ability; however, faster-RCNN's two-stage detection scheme carries high space-time complexity. Jing et al. [24] improved YOLOv3, a single-stage object detector with real-time performance, to better detect fabric defects.
In addition, several researchers have studied model-driven fabric defect detection methods, such as Markov random fields [25], autoregression [26], and sparse dictionaries [27, 28]. After effective training, these methods can identify small-region defects but remain vulnerable to external disturbances such as noise and lighting changes.
In summary, many methods have been proposed for fabric defect detection, yet the task remains challenging because defects are of many kinds, differ greatly in appearance, and are unevenly distributed. These problems make it difficult to design an effective system that detects and localizes fabric defects automatically. Moreover, such a system must run fast and be deployable on intelligent edge devices.
According to these requirements, a lightweight fabric defect detection method is proposed by improving YOLOv5 [29] for the specific needs of a defect detection system. It can detect and recognize specific fabric defects in real time. The main contributions of this article are as follows.
A teacher-student architecture is introduced to detect fabric defects. The deep teacher network can precisely recognize fabric defects; after information distillation, a shallow student network can do the same in real time with minimal performance degradation. The student network can be deployed on edge equipment because of its low space-time overheads.
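The information distillation between teacher and student can be sketched as a soft-target loss in the spirit of standard knowledge distillation. This is a hedged illustration only: the temperature `T`, weight `alpha`, and the function name `distillation_loss` are assumptions, as the text does not specify the distillation objective.

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled, numerically stable softmax over the class axis
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # soft target: KL(teacher || student) at temperature T, scaled by T^2
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                          axis=1)) * T * T
    # hard target: cross-entropy between student predictions and ground truth
    p = softmax(student_logits)
    hard = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return alpha * soft + (1 - alpha) * hard
```

When the student matches the teacher exactly, the soft term vanishes and only the weighted hard-label loss remains, which is the sanity check one would expect of a distillation objective.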
To address the difficulty of distinguishing the many kinds of fabric defects, a multitask learning strategy is proposed to detect ubiquitous and specific defects simultaneously, fully exploiting the complementarity between the two tasks. Moreover, an attention mechanism is used to enhance defect-related features.
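The attention enhancement can be illustrated with a simplified squeeze-and-excitation style channel gate. The article cites CBAM [39]; the function name, weight shapes, and reduction ratio below are illustrative assumptions, not the authors' exact module.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    # squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feat.mean(axis=(1, 2))
    # excite: two fully connected layers with ReLU then sigmoid gate in (0, 1)
    h = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # reweight each channel, amplifying defect-related feature maps
    return feat * s[:, None, None]

# toy usage: 16 channels, assumed reduction ratio 4
rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 8, 8))
w1 = rng.normal(size=(4, 16))
w2 = rng.normal(size=(16, 4))
out = channel_attention(feat, w1, w2)
```

Because the gate lies strictly between 0 and 1, each output channel is a scaled-down copy of its input, so the module redistributes emphasis across channels rather than adding energy.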
To handle data imbalance and small-region defects better, the focal loss function [30] is employed to mitigate data imbalance. The center loss is introduced as a constraint to increase the interclass distance while reducing the intraclass distance, hence improving the recognition performance of specific defects.
The proposed method is evaluated on the publicly available Tianchi AI and TILDA databases. The results reveal its ability to detect and recognize specific fabric defects. To verify the generalization capability of the proposed algorithm, it is tested on self-collected fabric defect images and achieves good results.
2. Related Technologies
2.1. Convolutional Neural Networks
Convolutional neural networks (CNNs) are widely used in computer vision tasks [31]. A CNN is a feed-forward neural network with a deep structure built around convolutional computation. It has the representation learning ability to extract structured, translation-invariant information from input images, and compared with fully connected operations, it incurs a smaller computational overhead. A common CNN-based computer vision system consists of the following parts:
(1) Input layer: it performs gray-scale processing, normalization, and data augmentation on the input images.
(2) Convolutional layer: it performs convolutional operations in each layer to support the forward and backward propagation of information. The feature map of the lth layer is derived from that of the (l−1)th layer using the convolutional operation, as follows:
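The equation referenced above is omitted in this version of the article. The standard formulation of a convolutional layer, with commonly used notation (the symbols here are assumed, not taken from the original), is:

```latex
x_j^{l} = f\Bigl(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^{l} + b_j^{l}\Bigr)
```

where $x_j^{l}$ is the $j$th feature map of layer $l$, $M_j$ is the set of input feature maps, $k_{ij}^{l}$ is the convolution kernel, $b_j^{l}$ is the bias, $*$ denotes convolution, and $f(\cdot)$ is the activation function.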
(3) Activation layer: it always follows the convolutional layer and introduces nonlinearity so that the network has better representation learning ability. Commonly used activation functions include Sigmoid, Tanh, ReLU, and their variants. Figure 1 shows the curves of three different activation functions.
(4) Pooling layer: it subsamples the feature maps to decrease computational overheads and can also mitigate overfitting. Commonly used pooling functions include average and maximum pooling.
(5) Output layer: its structure varies with the computer vision application. For classification tasks, the SoftMax function is often used in the output layer to calculate the probability that the input belongs to each category, thus obtaining the classification result.
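The five components above can be sketched end to end with a toy forward pass. This is a minimal numpy illustration, not the YOLOv5 implementation; all shapes and weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    # "valid" convolution of a single-channel image x with kernel k
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    # activation layer
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    # pooling layer: non-overlapping s x s maximum pooling
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

def softmax(z):
    # output layer for classification
    e = np.exp(z - z.max())
    return e / e.sum()

# toy forward pass through the five stages listed above
img = rng.random((8, 8))                        # input layer: normalized image
fmap = relu(conv2d(img, np.ones((3, 3))))       # convolution + activation: 8x8 -> 6x6
pooled = max_pool(fmap)                         # pooling: 6x6 -> 3x3
logits = pooled.flatten() @ rng.random((9, 2))  # fully connected projection to 2 classes
probs = softmax(logits)                         # output: class probabilities
```

The resulting `probs` vector sums to one, matching the SoftMax description in component (5).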
[figure omitted; refer to PDF]
All five components above are used in the improved YOLOv5 and are not described in detail again in the following sections.
2.2. Object Detection Algorithm
Object detection is one of the essential problems in computer vision: it enables a computer to discover and locate targets of interest, such as fabric flaws, in images automatically. Deep learning-based object detection algorithms have achieved great success recently; commonly used methods include RCNN [32], fast-RCNN [33], faster-RCNN [34], SSD [35], and YOLO [36]. However, these methods have difficulty meeting the real-time requirements of a fabric defect detection system because of their high computational overheads. To balance precision and speed, a lightweight object detection network, YOLOv5, is used in this work and improved according to the characteristics of fabric defects so that it can be applied to the fabric defect detection system.
Figure 2 shows the structure of the traditional YOLOv5, which mainly includes the Backbone, PANet, and Output. The Backbone performs feature engineering on the input images; PANet obtains visual features robust to scale changes thanks to its pyramid structure; the Output simultaneously regresses the positions and classifies the regions of interest. The input image size is assumed to be 608 × 608 pixels.
[figure omitted; refer to PDF]
The rest of the student network, including the attention enhancement, multitask learning strategy, and information fusion, is the same as in the teacher network.
3.3. Loss Functions
The network is trained in a multitask learning manner, and a weighted combination of loss functions is used to optimize it. The loss functions consist of the following components:
(1) Ubiquitous defect detection is treated as a binary classification problem, for which a cross-entropy loss function LT is used and defined as follows:
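The equation is omitted in this version of the article. The standard binary cross-entropy form, with assumed symbols ($y_i \in \{0,1\}$ the ground-truth label, $p_i$ the predicted defect probability, $N$ the batch size), is:

```latex
L_T = -\frac{1}{N}\sum_{i=1}^{N}\bigl[\,y_i \log p_i + (1 - y_i)\log(1 - p_i)\,\bigr]
```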
(2) Specific defect detection is treated as a multiclass problem, for which a SoftMax loss function Ls is used and defined as follows:
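The equation is omitted in this version of the article. The standard SoftMax (cross-entropy over $C$ classes) form, with assumed symbols ($x_i$ the learned feature of sample $i$, $y_i$ its class, $W_j$ and $b_j$ the classifier weights and bias of class $j$), is:

```latex
L_s = -\frac{1}{N}\sum_{i=1}^{N}\log
      \frac{e^{W_{y_i}^{\top} x_i + b_{y_i}}}
           {\sum_{j=1}^{C} e^{W_j^{\top} x_i + b_j}}
```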
(3) Considering the sample imbalance in the ubiquitous defect detection head, the focal loss function LF is used to mitigate this problem. LF is defined as follows:
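The equation is omitted in this version of the article. The standard focal loss of [30], with assumed symbols ($p_t$ the model's probability for the true class, $\alpha_t$ a class-balancing weight, $\gamma \ge 0$ the focusing parameter that down-weights easy examples), is:

```latex
L_F = -\alpha_t \,(1 - p_t)^{\gamma}\,\log(p_t)
```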
(4) To improve feature discriminability in the specific defect detection head, the center loss function LC is employed to increase the interclass distances while reducing the intraclass distances of the learned features. LC is defined as follows:
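The equation is omitted in this version of the article. The standard center loss form, with assumed symbols ($x_i$ the learned feature of sample $i$ and $c_{y_i}$ the learned center of its class), is:

```latex
L_C = \frac{1}{2}\sum_{i=1}^{N}\bigl\lVert x_i - c_{y_i}\bigr\rVert_2^2
```

By itself this term pulls features toward their class centers (reducing intraclass distance); the interclass separation arises from training it jointly with the classification losses above.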
The final loss function of the proposed method is calculated in a weighted manner as follows:
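The combined equation is omitted in this version of the article. A weighted sum over the four terms above, with weights $\lambda_1,\dots,\lambda_4$ that the available text does not specify, would take the form:

```latex
L = \lambda_1 L_T + \lambda_2 L_s + \lambda_3 L_F + \lambda_4 L_C
```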
4. Experimental Results
4.1. Databases
One public database comes from the Xuelang Tianchi AI Challenge. It contains 3,331 labeled images with rectangular boxes marking defect locations: 2,163 normal images and 1,168 defective images covering 22 kinds of defects, including jumps, knots, stains, puncture holes, and lacking warp. The distribution is unbalanced, with far more normal images than defective ones. Following the same experimental protocol as [19], the specific defect categories are regrouped into puncture hole, knots, rubbing hole, thin spinning, jumps, hanging warp, lacking warp, brushed hole, stains, and others. In the experiments, 70% of the database is used for training and the remaining 30% for testing. Several training samples and their labels are shown in Figure 7.
[figure omitted; refer to PDF]
Another public database is TILDA, a well-known fabric texture database containing eight representative fabric categories, with seven error classes and one correct class defined according to the textile atlas analysis. Similar to [40], 300 fabric images are chosen and divided into six categories: holes, scratches, knots, stains, carrying, and normal. Each class consists of 50 images, each resized to 256 × 256 pixels. In the experiments, 70% of the database is used for training and the remaining 30% for testing. Figure 8 shows several samples and their labels.
[figure omitted; refer to PDF]
Figure 10 compares the localization results of the proposed teacher network and the improved YOLOv3 of Jing et al. [24] on the Tianchi AI database; the specific defect type is labeled under each subfigure for a clearer view. In each subfigure, the green box marks the ground-truth defect area, the red box the result of the proposed teacher network, and the yellow box the result of the improved YOLOv3. The defect regions predicted by the proposed method are more accurate, a superiority that likely stems from the strong YOLOv5 baseline and our improvements. Although the improved YOLOv3 detects most defects, it struggles to position small defect areas; for example, it fails to detect the hanging warp and jump defects.
[figure omitted; refer to PDF]
Figure 11 compares the teacher and student networks on self-collected fabric images, specifically their performance in positioning defect areas. In each subfigure, the green box marks the ground-truth defect area, the red box the result of the teacher network, and the yellow box the result of the student network. The teacher network identifies the defect areas more accurately, and the detection performance of the student network is slightly weaker. However, the student network has lower space-time overheads and is therefore more suitable for embedded systems.
[figure omitted; refer to PDF]

4.4. Quantitative Analysis Results
An ablation study is performed on the Tianchi AI database to verify the effect of each improvement: the attention module, multitask learning, focal loss, and central loss constraints. The results are presented in Table 1. Only the ablation of the teacher network is shown; the student network exhibits similar trends.
Table 1
Ablation study of the teacher network on the Tianchi AI database.
| Attention module | Multitask learning | Focal loss | Central loss constraints | AUC | mAP |
| --- | --- | --- | --- | --- | --- |
| | | | | 0.938 | 0.403 |
| √ | | | | 0.957 | 0.412 |
| √ | √ | | | 0.965 | 0.431 |
| √ | √ | √ | | 0.971 | 0.441 |
| √ | √ | | √ | 0.973 | 0.442 |
| √ | √ | √ | √ | 0.981 | 0.447 |
Table 1 shows that the teacher network reduces to the traditional YOLOv5 when none of the improvements is used. Compared with this YOLOv5 baseline, the introduced attention module improves both AUC and mAP. AUC and mAP improve further when ubiquitous and specific defects are detected simultaneously with the proposed multitask learning strategy, owing to the complementarity between the tasks. On top of the multitask strategy, the focal loss function and the central loss constraint each further improve the detection results. Using all improvements simultaneously achieves the best performance on the Tianchi AI database, which verifies the effect of each improvement.
A quantitative comparison between the teacher and student networks is presented in Table 2. The identification times are tested on an Nvidia JETSON TX2. The table shows that the student network could still meet the needs of fabric defect detection, despite the performance degradation observed compared with the teacher network. More importantly, the identification time of the student network is approximately half of the teacher network. Its identification time guarantees the real-time performance on embedded devices.
Table 2
Quantitative comparisons between the teacher and student networks on the Tianchi AI database.
| Method | AUC | mAP | Identification time (ms) |
| --- | --- | --- | --- |
| YOLOv5 | 0.957 | 0.412 | 32 |
| Teacher network | 0.981 | 0.447 | 35 |
| Student network | 0.952 | 0.406 | 16 |
Finally, comparisons with other mainstream methods are performed to verify the effectiveness of the proposed method. The improved YOLOv3 [24] and the pretrained deep CNN [40] are selected as the fabric defect detection algorithms. Faster-RCNN [34] and YOLOv5 [29] are selected as the universal object detection methods. The comparison results are presented in Table 3.
Table 3
Comparisons of different fabric defect detection algorithms on the Tianchi AI database.
| Method | AUC | mAP |
| --- | --- | --- |
| OurNet [41] | 0.787 | 0.104 |
| OurNet-VGG16 | 0.848 | 0.288 |
| OurNet-ResNet | 0.882 | 0.311 |
| Improved YOLOv3 [24] | 0.927 | 0.372 |
| Jing et al. [40] | 0.932 | 0.382 |
| YOLOv5 [29] | 0.957 | 0.412 |
| Faster-RCNN [34] | 0.956 | 0.413 |
| Student network | 0.952 | 0.406 |
| Teacher network | 0.981 | 0.447 |
Table 3 shows that the original OurNet, based on AlexNet, performs poorly because it fails to handle small defect areas. Its two variants, OurNet-VGG16 and OurNet-ResNet, perform better thanks to the stronger features extracted by their deeper structures. Jing et al. achieve better detection performance with the improved YOLOv3 [24], and their pretrained CNN [40] likewise boosts performance. YOLOv5 and faster-RCNN reach similar detection performance owing to their strong object detection power; both are superior to the proposed student network but carry relatively large time overheads. The proposed teacher network achieves the best fabric defect detection performance, while the student network offers an alternative with acceptable accuracy on embedded devices.
Table 4 presents the comparisons on the TILDA database. OurNet [41] and its variants perform much better than on the Tianchi AI database because TILDA contains fewer categories with equal samples per category. The improved YOLOv3 [24] and the pretrained CNN [40], both by Jing et al., achieve similar performance for the reason discussed above. As on the Tianchi AI database, the two state-of-the-art detectors, YOLOv5 [29] and faster-RCNN [34], obtain higher AUC and mAP than the proposed student network, while the proposed teacher network again achieves the best defect detection performance, verifying the accuracy of the proposed method.
Table 4
Comparisons of different fabric defect detection algorithms on the TILDA database.
| Method | AUC | mAP |
| --- | --- | --- |
| OurNet [41] | 0.866 | 0.301 |
| OurNet-VGG16 | 0.912 | 0.346 |
| OurNet-ResNet | 0.926 | 0.382 |
| Jing et al. [40] | 0.958 | 0.411 |
| YOLOv5 [29] | 0.970 | 0.442 |
| Faster-RCNN [34] | 0.972 | 0.443 |
| Student network | 0.965 | 0.428 |
| Teacher network | 0.988 | 0.451 |
5. Discussion and Conclusion
An automatic fabric defect detection method based on YOLOv5 is proposed, given the considerable role of defect detection in the textile industry. A teacher-student architecture is adopted to meet the real-time requirements of fabric defect detection: the deep teacher network precisely detects specific fabric defects, and after knowledge distillation, the shallow student network performs fabric defect detection in real time with acceptable accuracy. A multitask learning strategy is introduced to detect ubiquitous and specific defects simultaneously and to better exploit the complementarity between the tasks, and focal loss and center loss constraints are introduced for better detection performance. Evaluations on the public databases and self-collected fabric images, together with comparisons with other mainstream methods, indicate that the proposed method is applicable to automatic textile defect detection and can greatly improve detection accuracy and efficiency while enhancing the automation level of the textile industry.
Authors’ Contributions
All authors have read and agreed to the published version of the manuscript.
Acknowledgments
This research was funded by the National Natural Science Foundation of China under grant no. 51674265.
[1] K. Srinivasan, P. H. Dastoor, P. Radhakrishnaiah, "FDAS: a knowledge-based framework for analysis of defects in woven textile structures," Journal of the Textile Institute Proceedings and Abstracts, vol. 83, no. 3, pp. 431-448, 1990.
[2] A. Rasheed, B. Zafar, A. Rasheed, "Fabric defect detection using computer vision techniques: a comprehensive review," Mathematical Problems in Engineering, vol. 2020, DOI: 10.1155/2020/8189403, 2020.
[3] A. Latif, A. Rasheed, U. Sajid, "Content-based image retrieval and feature extraction: a comprehensive review," Mathematical Problems in Engineering, vol. 2019, DOI: 10.1155/2019/9658350, 2019.
[4] T. Thomas, M. Cattoen, "Automatic inspection of simply patterned material in the textile industry," Proceedings of SPIE: Society of Photo-Optical Instrumentation Engineers.
[5] Y. Ye, "Fabric defect detection using fuzzy inductive reasoning based on image histogram statistic variables," Proceedings of the 6th International Conference on Fuzzy Systems and Knowledge Discovery, pp. 191-194.
[6] X. Jia, "Fabric defect detection based on open source computer vision library OpenCV," Proceedings of the 2010 2nd International Conference on Signal Processing Systems.
[7] J. Jing, H. Zhang, J. Wang, P. Li, J. Jia, "Fabric defect detection using Gabor filters and defect classification based on LBP and Tamura method," Journal of the Textile Institute, vol. 104, no. 1, pp. 18-27, DOI: 10.1080/00405000.2012.692940, 2013.
[8] M. Hao, J. Junfeng, S. Zebin, "Patterned fabric defect detection based on LBP and HOG feature," Journal of Electronic Measurement and Instrument, vol. 32, no. 4, pp. 95-102, 2018.
[9] D. Yapi, M. S. Allili, N. Baaziz, "Automatic fabric defect detection using learning-based local textural distributions in the contourlet domain," IEEE Transactions on Automation Science and Engineering, vol. 15, no. 3, pp. 1014-1026, 2017.
[10] N. T. Deotale, T. K. Sarode, "Fabric defect detection adopting combined GLCM, Gabor wavelet features and random decision forest," 3D Research, vol. 10, no. 1, DOI: 10.1007/s13319-019-0215-1, 2019.
[11] M. A. Shabir, M. U. Hassan, X. Yu, "Tyre defect detection based on GLCM and Gabor filter," Proceedings of the 2019 22nd International Multitopic Conference (INMIC).
[12] G. Liu, X. Zheng, "Fabric defect detection based on information entropy and frequency domain saliency," The Visual Computer, vol. 37, 2020.
[13] C. Chi-Ho Chan, G. K. H. Pang, "Fabric defect detection by Fourier analysis," IEEE Transactions on Industry Applications, vol. 36, no. 5, pp. 1267-1276, DOI: 10.1109/28.871274, 2000.
[14] L. Jia, C. Chen, J. Liang, Z. Hou, "Fabric defect inspection based on lattice segmentation and Gabor filtering," Neurocomputing, vol. 238, pp. 84-102, DOI: 10.1016/j.neucom.2017.01.039, 2017.
[15] X. Yang, G. Pang, N. Yung, "Discriminative training approaches to fabric defect classification based on wavelet transform," Pattern Recognition, vol. 37, no. 5, pp. 889-899, DOI: 10.1016/j.patcog.2003.10.011, 2004.
[16] S. Sadaghiyanfam, "Using gray-level-co-occurrence matrix and wavelet transform for textural fabric defect detection: a comparison study," Proceedings of the 2018 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT).
[17] B. Yang, G. Yan, P. Wang, "A novel graph-based trajectory predictor with pseudo-oracle," 2021. https://arxiv.org/abs/2002.00391
[18] J. Wang, P. Fu, R. X. Gao, "Machine vision intelligence for product defect inspection based on deep learning and Hough transform," Journal of Manufacturing Systems, vol. 51, pp. 52-60, DOI: 10.1016/j.jmsy.2019.03.002, 2019.
[19] B. Yang, W. Zhan, P. Wang, "Crossing or not? Context-based recognition of pedestrian crossing intention in the urban environment," IEEE Transactions on Intelligent Transportation Systems, 2021.
[20] J. Liu, C. Wang, H. Su, "Multistage GAN for fabric defect detection," IEEE Transactions on Image Processing, vol. 29, pp. 3388-3400, 2019.
[21] S. Mei, Y. Wang, G. Wen, "Automatic fabric defect detection with a multi-scale convolutional denoising autoencoder network model," Sensors, vol. 18, no. 4, DOI: 10.3390/s18041064, 2018.
[22] T. Xian, D. Zhang, W. Ma, "Automatic metallic surface defect detection and recognition with convolutional neural networks," Applied Sciences-Basel, vol. 8, no. 9, 2018.
[23] B. Wei, K. Hao, X.-S. Tang, L. Ren, "Fabric defect detection based on faster RCNN," Proceedings of the International Conference on Artificial Intelligence on Textile and Apparel.
[24] J. Jing, D. Zhuo, H. Zhang, "Fabric defect detection using the improved YOLOv3 model," Journal of Engineered Fibers and Fabrics, vol. 15, DOI: 10.1177/1558925020908268, 2020.
[25] P. M. Mahajan, S. R. Kolhe, P. M. Patil, "A review of automatic fabric defect detection techniques," Advances in Computational Research, vol. 1, no. 2, pp. 18-29, 2009.
[26] J. Cao, J. Zhang, Z. Wen, N. Wang, X. Liu, "Fabric defect inspection using prior knowledge guided least squares regression," Multimedia Tools and Applications, vol. 76, no. 3, pp. 4141-4157, DOI: 10.1007/s11042-015-3041-3, 2017.
[27] X. Kang, E. Zhang, "A universal and adaptive fabric defect detection algorithm based on sparse dictionary learning," IEEE Access, vol. 8, pp. 221808-221830, 2020.
[28] J. Zhou, D. Semenovich, A. Sowmya, J. Wang, "Dictionary learning framework for fabric defect detection," Journal of the Textile Institute, vol. 105, no. 3, pp. 223-234, DOI: 10.1080/00405000.2013.836784, 2014.
[29] A. Kuznetsova, T. Maleva, V. Soloviev, "Detecting apples in orchards using YOLOv3 and YOLOv5 in general and close-up images," Proceedings of the International Symposium on Neural Networks.
[30] T. Y. Lin, P. Goyal, R. Girshick, "Focal loss for dense object detection," Proceedings of the IEEE International Conference on Computer Vision, pp. 2980-2988.
[31] B. Yang, W. Zhan, N. Wang, X. Liu, J. Lv, "Counting crowds using a scale-distribution-aware network and adaptive human-shaped kernel," Neurocomputing, vol. 390, pp. 207-216, DOI: 10.1016/j.neucom.2019.02.071, 2020.
[32] S. Ren, K. He, R. Girshick, "Object detection networks on convolutional feature maps," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 7, pp. 1476-1481, 2016.
[33] R. Girshick, "Fast R-CNN," Proceedings of the IEEE International Conference on Computer Vision, pp. 1440-1448.
[34] S. Ren, K. He, R. Girshick, "Faster R-CNN: towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, pp. 91-99, 2015.
[35] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, A. C. Berg, "SSD: single shot multibox detector," Proceedings of the European Conference on Computer Vision.
[36] J. Redmon, S. Divvala, R. Girshick, "You only look once: unified, real-time object detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788.
[37] A. Vaswani, N. Shazeer, N. Parmar, "Attention is all you need," 2017. https://arxiv.org/abs/1706.03762
[38] F. Locatello, D. Weissenborn, T. Unterthiner, "Object-centric learning with slot attention," 2020. https://arxiv.org/abs/2006.15055
[39] S. Woo, J. Park, J.-Y. Lee, I. S. Kweon, "CBAM: convolutional block attention module," Proceedings of the European Conference on Computer Vision (ECCV).
[40] J. F. Jing, H. Ma, H. H. Zhang, "Automatic fabric defect detection using a deep convolutional neural network," Coloration Technology, vol. 135, no. 3, pp. 213-223, DOI: 10.1111/cote.12394, 2019.
[41] Z. Wu, Y. Zhuo, J. Li, Y. Feng, B. Han, S. Liao, "A fast monochromatic fabric defect detection method based on convolutional neural network," Journal of Computer-Aided Design & Computer Graphics, vol. 30, no. 12, DOI: 10.3724/sp.j.1089.2018.17173, 2018.
Copyright © 2021 Rui Jin and Qiang Niu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
Fabric defect detection is particularly important because of the large textile production demand in China. Traditional manual detection methods are inefficient, time-consuming, laborious, and costly. A deep learning technique is proposed in this work to perform automatic fabric defect detection by improving the YOLOv5 object detection algorithm. A teacher-student architecture is used to handle the shortage of fabric defect images: a deep teacher network precisely recognizes fabric defects, and after information distillation, a shallow student network does the same in real time with minimal performance degradation. Moreover, multitask learning is introduced by simultaneously detecting ubiquitous and specific defects, and the focal loss function and central constraints are introduced to improve recognition performance. Evaluations are performed on the publicly available Tianchi AI and TILDA databases. Results indicate that the proposed method performs well compared with other methods and has excellent defect detection ability on the collected textile images.
1 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, China; Changzhou Vocational Institute of Textile and Garment, Changzhou, China
2 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, China