1. Introduction
Intangible Cultural Heritage (referred to as “ICH” hereinafter) embodies the rich civilization of human society and constitutes an essential component of global cultural diversity. In the era of globalization, the preservation, inheritance, and development of ICH are facing unprecedented challenges, underscoring their immense significance and value. Advanced digital technologies offer new avenues for collecting, processing, storing, and particularly disseminating ICH data, effectively advancing the endeavors of ICH cultural industries and providing diverse means to safeguard and promote ICH culture.
The Qiang ethnic group, an ancient minority in China, primarily resides in the northwest of Sichuan Province. Through their long-standing production practices and daily activities, the Qiang people have created a culturally distinctive art form. Qiang embroidery, a representative cultural expression of the Qiang people, was officially recognized as a national ICH in 2008. The intricate patterns found in Qiang embroidery encapsulate the material and spiritual aspects of the Qiang people's cultural life, embodying their aesthetic ideals and artistic ingenuity. Specifically, Qiang embroidery patterns derive inspiration from a diverse array of sources, including trees, flowers, fruits, grains, birds, animals, insects, and fish. These patterns reflect the material environment and spiritual beliefs of the Qiang people and are commonly found on daily objects, with a predominant presence in clothing decorations. Qiang embroidery patterns can be categorized into three main groups: totemic, animal and plant, and geometric designs, as illustrated in Fig 1.
[Figure omitted. See PDF.]
Currently, the preservation of Qiang embroidery faces threats from various factors, including social development, human activities, environmental changes, and natural disasters. For example, the devastating 2008 Wenchuan earthquake had a profound impact on Qiang ethnic culture, destroying major Qiang settlements and claiming the lives of over 80% of the cultural inheritors and research experts at the Beichuan Qiang Research Institute. The disaster also destroyed a substantial amount of materials and electronic data stored within the institute [1]. Hence, this paper explores the state of digital preservation measures by focusing on the stylistic features and modern inheritance dilemma of Qiang embroidery patterns. It elucidates the value of vectorization in digitally preserving Qiang embroidery patterns and underscores the pressing need for novel vectorization techniques in digital image processing for Qiang embroidery. Furthermore, it investigates the feasibility of employing edge detection techniques in the vectorization process of Qiang embroidery patterns. By utilizing the Xception deep learning algorithm for edge detection, this study holds significant implications for the digital preservation and modern inheritance of Qiang embroidery, the support of ICH cultural industry development, and the promotion of relevant ICH cultural database construction [2–5]. In addition, we compared Xception with other common edge detection approaches and demonstrated that it achieves the best results. The contributions of this paper include: (1) the use of the Xception algorithm based on convolutional neural networks effectively solves the problem of extracting the shapes of Qiang embroidery as two-dimensional vector images, providing a practical solution for processing fine-grained images with complex edges; (2) effective pre-processing techniques play a crucial role in removing isolated noise points within blank areas and enhancing pixel consistency throughout the image; (3) the proposed method offers a reliable practical reference for the preservation of other related intangible cultural heritage images, promoting the application of AI in the field of ICH preservation.
The paper is organized as follows: Section 2 reviews related work on two aspects, the status quo of Qiang embroidery preservation and image edge detection techniques; Section 3 describes the Xception model. Our experimental evaluation is presented in Section 4. The conclusion and future work are presented in Section 5.
2. Literature review
2.1. The status quo of Qiang embroidery pattern preservation
Since its inclusion in the national ICH list in 2008, Qiang embroidery has garnered attention from local governments and the wider public. In recent years, official efforts to support and promote the inheritance and development of Qiang embroidery have increased. These initiatives encompass the establishment of Qiang ethnic cultural ecological protection experimental zones, the creation of ICH experience centers, and the organization of training programs to disseminate skills related to Qiang embroidery and other ICH practices.
In the digital preservation of Qiang embroidery patterns, digital processing plays a crucial role in various stages, including the presentation of digitized resources such as texts, audio, and videos related to Qiang embroidery in the early stage, the classification and integration of digital resources in the storage phase, and the diversity of digital dissemination and display in the later stage [6,7]. Within the entire digital processing stage, the process of vectorization holds significant importance. Typically, the original image resources of Qiang embroidery patterns are in bitmap format. However, bitmap images have limitations such as resolution constraints, susceptibility to distortion during scaling and editing, large file sizes, and the lack of support for transparency. In contrast, vectorized images obtained through vectorization processing offer several advantages. They possess recognizability, editability, and replicability. Vectorized images allow for rapid identification and searching of specific Qiang embroidery patterns, distortion-free editing and scaling, and occupy less file space. This makes them highly suitable for preservation, application, restoration, or modification purposes.
Vectorized Qiang embroidery patterns can be disassembled into pattern parts as needed, enabling rotation, arrangement, repetition, and combination to achieve various types of storage archiving. This provides resource support for the development and utilization of modern creative Qiang embroidery pattern designs. Furthermore, vectorization meets the requirements of digital dissemination in various forms, such as virtual reality and augmented reality interactive displays, and facilitates the sharing of digital resources related to ICH. It also contributes to the promotion and dissemination of multimedia and multi-platform Qiang embroidery patterns.
Currently, most researchers and scholars rely on software such as Adobe Photoshop and Adobe Illustrator for bitmap processing and vector graphic creation when vectorizing traditional patterns [8–10]. However, the workload of manual image vectorization is substantial and complex. Moreover, manual drawing is subjective and cannot guarantee a completely objective restoration of Qiang embroidery patterns. There is therefore an urgent need for innovation in vector graphic processing technology to ensure the inheritance of Qiang embroidery ICH and related culture, as well as the synchronous development of the industry. Such innovation would maximize savings in human, material, and time resources while enabling the early realization of shared digital cultural resources.
2.2. The state of the art of image edge detection
Image edges indicate areas in an image where pixel variations are the most pronounced. They contain valuable inherent information and play a crucial role in extracting image features for image recognition. Edge detection has been a significant research focus since its inception, as it forms a fundamental step in image pre-processing. It serves as a solid foundation for subsequent deep-level image processing and finds applications in various fields such as image segmentation, scene recognition and object detection.
With the advancement of AI algorithms in image processing, CNNs have emerged as a prominent research area, gradually replacing traditional algorithms and achieving remarkable results in edge extraction. However, limited research has been conducted on applying edge extraction techniques to Qiang embroidery. Traditional edge detection algorithms, such as the Canny [11], Sobel [12], Roberts [13], and Log [14] operators, mainly rely on low-level local cues (such as color and texture) for edge extraction. These methods face challenges in accurately representing complex scenes and are often affected by noise and texture clarity, resulting in intermittent and incomplete edge maps. Qiang embroidery patterns are renowned for their intricate details and complex compositions [15]. Experimental findings have indicated that traditional edge extraction algorithms fail to meet the requirements for accurately extracting edges in embroidery patterns. It is therefore crucial to enhance traditional algorithms and conduct in-depth research by combining them with CNN methods. For example, Ray et al. [16] introduced an advanced CNN that retains more edge pixels than conventional edge detection algorithms. Inspired by deep learning, in this paper we aim to extract more high-level semantic information from embroidery images, thereby enhancing the clarity and completeness of the patterns. We harness the power of image edge detection techniques on Qiang embroidery patterns, explore an innovative way to preserve such patterns, and provide empirical practice for other ICH images.
3. Methodology
3.1. Framework
The overall structure of our model is presented in Fig 2. In this paper, Qiang ethnic embroidery patterns are categorized into two groups: digital images and physical images. The generation and transmission of these images may introduce noise, degrading image quality, compromising visual effects, and creating obstacles for subsequent processing. Image pre-processing therefore plays a crucial role in mitigating these issues. The objective of pre-processing is to smooth image edges, eliminate jaggedness, reduce noise interference caused by uncertain factors such as photography, and enhance image contrast. This facilitates the extraction of continuous edge contours during edge detection. To optimize the images for edge extraction, we first employ pre-processing techniques tailored to the different scenes, enhancing the visual quality of the images and transforming them into a form more suitable for machine analysis. The process selectively emphasizes information meaningful for analysis, suppresses irrelevant details, and maximizes the value of the image data. Subsequently, the CNN extracts edges from the pre-processed images. Finally, the obtained edge maps undergo vectorization using the online conversion tool Autotrace, which outputs scalable vector graphics (SVG) images. The individual modules are described in detail below.
[Figure omitted. See PDF.]
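To make the data flow concrete, the sketch below traces one image through the pre-processing and edge extraction stages, assuming a PyTorch implementation of the edge network (`model`) that returns a list of side outputs whose last element is the fused edge map. The denoiser shown is one of the two used in Section 4.1; the function name and file path are illustrative, not the authors' code.

```python
import cv2
import numpy as np
import torch

def pattern_to_edge_map(model: torch.nn.Module, path: str) -> np.ndarray:
    """Pre-process one pattern image and run the edge extraction network."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    gray = cv2.fastNlMeansDenoising(gray, h=10)              # scene-specific denoising (Section 4.1)
    x = torch.from_numpy(gray).float()[None, None] / 255.0   # 1 x 1 x H x W input tensor
    with torch.no_grad():
        fused = model(x)[-1].squeeze().sigmoid().numpy()     # fused edge map in [0, 1]
    return (255 * fused).astype(np.uint8)                    # raster edge map, ready for vectorization
```

The resulting raster map is then handed to the vectorization step described in Section 4.4.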
3.2. Edge extraction neural network based on Xception
The model in this paper is trained end-to-end, eliminating the need for weight initialization from pre-trained detection models, which is typically required by most deep learning-based edge detectors. However, deep models with many layers face the vanishing-gradient problem. To address the issue of edge features being lost in deeper layers, an Xception-based architecture is adopted in this study. This architecture utilizes parallel connections to capture edge information across different layers. Considering the substantial amount of information contained in images, batch processing can significantly increase computational cost. Therefore, factors such as model performance, complexity, computational resources, and data size were taken into account when selecting a model. A model with fewer than 0.7 million parameters is chosen to strike a balance, ensuring ease of use and deployment in resource-constrained environments while maintaining effective edge extraction. The overall network can be viewed as an edge extraction network with an upsampling sub-network, as illustrated in Figs 2 and 3. The edge extraction structure takes an image as input, applies pre-processing, and then passes the result through different blocks for convolutional processing.
[Figure omitted. See PDF.]
The edge extraction network comprises four output blocks (Block-1 to Block-4), drawing inspiration from the Xception network. Each block consists of sub-blocks containing convolutional layers, and parallel connections connect the blocks and sub-blocks. Each sub-block consists of a stack of two convolutional layers, followed by batch normalization and the ReLU activation function (except for the last convolutional layer in the final sub-block, which lacks this activation).
Due to the numerous convolutions performed, crucial edge features can be lost within each deep block, rendering a single main connection insufficient. To address this, starting from Block-3, the output of each sub-block is averaged using edge connections before being combined with the main connection, as depicted in Fig 2. Following the max pooling operation, the edge connections are configured to average the output of each sub-block. The feature maps generated at each block are then fed back into a separate upsampling network, producing intermediate edge maps. These intermediate edge maps are concatenated to form a stack of learned filters. Finally, at the end of the model, these features are fused into a single edge map.
The upsampling process consists of a conditional stack comprising two blocks (Block-1 and Block-2). Each block consists of a sequence involving a convolutional layer followed by a transpose convolutional layer. Block-1 handles the input with a 1 × 1 kernel size and applies a ReLU activation function. The transpose convolution in Block-1 utilizes a kernel size of s × s, where s represents the scale level of the input feature map. Block-2 is activated when scaling the input feature maps from the initial network. Once this condition is met, the feature map is fed back to Block-1.
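The recurring pieces just described can be summarized in the following PyTorch sketch. The conv-BN-ReLU ordering, the averaging edge connection, and the 1 × 1 convolution followed by an s × s transpose convolution follow the description above; the 3 × 3 kernels, channel widths, and the stride of the transpose convolution are our illustrative assumptions, not values specified by the paper.

```python
import torch
import torch.nn as nn

class SubBlock(nn.Module):
    """Two stacked convolutions, each followed by batch normalization; ReLU
    after both, except in the final sub-block of the network."""
    def __init__(self, cin: int, cout: int, last: bool = False):
        super().__init__()
        layers = [nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
                  nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout)]
        if not last:                       # final sub-block omits the last activation
            layers.append(nn.ReLU())
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

def edge_connection(sub_outputs: list[torch.Tensor]) -> torch.Tensor:
    """From Block-3 onward: average the sub-block outputs before merging
    them with the main connection."""
    return torch.stack(sub_outputs).mean(dim=0)

class UpBlock(nn.Module):
    """Upsampling Block-1: 1 x 1 conv + ReLU, then an s x s transpose
    convolution, where s is the scale level of the input feature map."""
    def __init__(self, cin: int, s: int):
        super().__init__()
        self.reduce = nn.Sequential(nn.Conv2d(cin, 1, kernel_size=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(1, 1, kernel_size=s, stride=s)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.reduce(x))
```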
3.3. Loss function
Given an input RGB image $X \in \mathbb{R}^{m \times n \times 3}$ (where $m$ and $n$ represent the size of the image), the model produces a set of predicted edge maps $\hat{Y} = \{\hat{y}_1, \hat{y}_2, \hat{y}_3, \hat{y}_4\}$ as output, together with a final output $\hat{y}_f$, which is generated through data fusion from the initial network. Each edge output is evaluated against the corresponding ground truth $Y$. The overall loss function $l$ is applied to each intermediate edge output $\hat{y}_i$ ($i = 1, 2, 3, 4$); it consists of the tracing (cross-entropy) loss $l_{ce}$, the boundary tracing loss $l_{bdr}$, and the texture suppression loss $l_{tex}$:

$l = \sum_{i} \left[ l_{ce}(\hat{y}_i, Y) + \lambda_{1}\, l_{bdr}(\hat{y}_i, Y) + \lambda_{2}\, l_{tex}(\hat{y}_i, Y) \right]$ (1)

where $\lambda_{1}$ is the weight for the regularized edge (boundary) loss and $\lambda_{2}$ is the weight for the loss suppressing texture in each prediction. The final loss is the sum of the losses over the predictions of each sub-block. The cross-entropy loss $l_{ce}$ is defined as follows:

$l_{ce}(\hat{y}, Y) = -w \sum_{p \in Y^{+}} \log \hat{y}_p - (1 - w) \sum_{p \in Y^{-}} \log\left(1 - \hat{y}_p\right)$ (2)

$w = \frac{|Y^{-}|}{|Y^{+}| + |Y^{-}|}$ (3)

where $w$ is the loss weight, and $Y^{+}$, $Y^{-}$ represent the positive and negative edge samples in the given ground truth, respectively. Regarding the boundary tracing loss $l_{bdr}$, it is defined as follows:
$l_{bdr}(\hat{y}_j, E) = -\sum_{p \in E} \log \frac{\sum_{q \in N(p)} \hat{y}_{j,q}\, E_q}{\sum_{q \in N(p)} \hat{y}_{j,q}}$ (4)

where $E$ is the given reference (ground-truth) edge map, $N(p)$ represents the window of the edge map centered at the point $p$, $p$ is the center point of an edge, and $j$ takes values 1, 2, 3, 4. For the texture suppression loss $l_{tex}$, it is defined as follows:

$l_{tex}(\hat{y}_j, E) = -\sum_{p \notin \mathcal{B}} \log\left(1 - \frac{1}{|N(p)|} \sum_{q \in N(p)} \hat{y}_{j,q}\right)$ (5)

where $\mathcal{B}$ is the set that includes all edges and their confounding pixels used in the boundary loss function; it serves as a buffer to reduce negative interactions between weak edges and texture regions, and the sum runs over the non-edge points $p$ outside this buffer.
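As a concrete illustration, below is a minimal PyTorch sketch of the class-balanced cross-entropy term in Eqs (2)-(3); it is our reading of the formulas, not the authors' code. The boundary tracing and texture suppression terms additionally require neighbourhood sums over the windows $N(p)$, which can be implemented with `conv2d` against an all-ones kernel, and are omitted for brevity.

```python
import torch

def weighted_ce(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Eqs (2)-(3): class-balanced cross-entropy for one edge prediction.
    pred holds sigmoid probabilities; gt is a binary ground-truth edge map."""
    eps = 1e-6
    n_pos, n_neg = gt.sum(), (1.0 - gt).sum()
    w = n_neg / (n_pos + n_neg)                                   # Eq (3)
    pos_term = -w * (gt * torch.log(pred + eps)).sum()
    neg_term = -(1.0 - w) * ((1.0 - gt) * torch.log(1.0 - pred + eps)).sum()
    return pos_term + neg_term                                    # Eq (2)
```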
4. Experiments and results
4.1. Data pre-processing
Three datasets have been used for training the proposed model: MDBD [17], BIPED [18], and BRIND [19]. Due to the limited size of our own training data, we directly use the pre-trained model in our experiments. We collected Qiang embroidery pattern images from the Internet, yielding a dataset of 33 images. Considering the limited availability of texture samples in the application scenario and the effectiveness of traditional denoising methods such as mean filtering, median filtering, and Gaussian filtering [20,21], we employ such methods in this study. However, spatial-domain denoising techniques involve a trade-off between noise removal and the preservation of image details. We therefore utilize IAMF and non-local means filtering for the two types of images, respectively. After pre-processing, each color original is converted into a denoised grayscale image for input to the model.
We adapt IAMF and select the sliding window sizes based on the spatial correlation principle of image processing. As the window size grows toward its maximum value in the algorithm, IAMF effectively removes noise while preserving fine details along image edges. For physical images, we chose non-local means filtering, which calculates weights based on similarity measures and leverages redundant information in the image. This approach not only reduces noise but also preserves detailed image features to the greatest extent. Fig 4 shows two examples of images processed with the different pre-processing approaches, illustrating the effects of the employed denoising techniques. For both digital images and physical images, choosing a scene-specific denoising approach improves the quality of subsequent feature extraction.
[Figure omitted. See PDF.]
(a) Digital Image; (b) Diagram of the processing result of the IAMF method; (c) Physical Image; (d) Diagram of the processing result of the Non-local mean method.
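For reference, the sketch below shows the two denoisers in code: non-local means via OpenCV, and a generic adaptive median filter that grows its sliding window until the median is no longer an extreme value. The latter is a plain adaptive median filter, not the paper's exact IAMF variant; the window sizes and the file path are illustrative, and the loops are kept simple for clarity.

```python
import cv2
import numpy as np

def adaptive_median(img: np.ndarray, max_win: int = 7) -> np.ndarray:
    """Generic adaptive median filter: grow the window until the median is
    not an extreme value, then replace only impulse-corrupted pixels."""
    out = img.copy()
    pad = max_win // 2
    padded = np.pad(img, pad, mode="edge")
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            for win in range(3, max_win + 1, 2):        # grow the sliding window
                r = win // 2
                patch = padded[y + pad - r:y + pad + r + 1,
                               x + pad - r:x + pad + r + 1]
                lo, med, hi = patch.min(), np.median(patch), patch.max()
                if lo < med < hi:                       # median is not impulse noise
                    if not (lo < img[y, x] < hi):       # replace only corrupted pixels
                        out[y, x] = med
                    break
            else:
                out[y, x] = med                         # window capped: fall back to median
    return out

gray = cv2.cvtColor(cv2.imread("pattern.png"), cv2.COLOR_BGR2GRAY)  # path is illustrative
den_digital = adaptive_median(gray)                     # digital images: IAMF-style filtering
den_physical = cv2.fastNlMeansDenoising(gray, h=10)     # physical images: non-local means
```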
In addition, it can be observed that pre-processing reduces noise in the low-saturation regions of the edge map, as shown in Fig 5(a) and (b), enhancing the continuity and visibility of the texture edges. It also reduces the loss of isolated edges, as seen in Fig 5(c) and (d), making the texture edge information more abundant.
[Figure omitted. See PDF.]
(a) Edge map of the digital image with pre-processing; (b) edge map of the digital image without pre-processing; (c) edge map of the physical image with pre-processing; (d) edge map of the physical image without pre-processing.
4.2. Edge extraction
In this study, we utilized the Xception deep learning model for edge extraction of Qiang embroidery patterns. As a comparative analysis, we conducted reference experiments using traditional edge detection operators, specifically Roberts, Sobel, Prewitt, and Canny, on both digital images and physical images, as depicted in Figs 6 and 7.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
From the aforementioned figures, it can be observed that the Roberts operator exhibits sensitivity to noise, rendering it ineffective in eliminating local interference. Furthermore, it struggles to detect and identify weak edges characterized by subtle grayscale differences between the target and the background, resulting in fragmented extracted edges. Additionally, manual threshold setting is required for this operator, limiting its efficacy in extracting various object contours.
The Sobel operator performs well in processing images with grayscale gradients and high noise levels. However, it relies on directional templates for edge extraction, making it less suitable for accurately capturing contours in images with complex textures and diagonal edges. The Prewitt operator shares similar principles and effects with the Sobel operator, but it may yield incomplete edge detection outcomes. Conversely, the Canny operator exhibits reduced susceptibility to noise interference and demonstrates the ability to detect faint edges. Nevertheless, it is sensitive to gradient calculations and necessitates manual adjustment of the Gaussian filter’s variance, potentially leading to missed detections.
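For reproducibility, the baseline operators can be applied as follows with OpenCV and NumPy. The kernel values are the classical definitions of each operator; the Canny thresholds (100, 200) and the file path are conventional illustrative choices, not values reported in the paper.

```python
import cv2
import numpy as np

img = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)     # path is illustrative

# Roberts: 2x2 diagonal difference kernels.
rx = cv2.filter2D(img, cv2.CV_32F, np.array([[1, 0], [0, -1]], np.float32))
ry = cv2.filter2D(img, cv2.CV_32F, np.array([[0, 1], [-1, 0]], np.float32))
roberts = cv2.convertScaleAbs(np.abs(rx) + np.abs(ry))

# Sobel: horizontal and vertical gradient magnitudes combined.
sobel = cv2.convertScaleAbs(np.abs(cv2.Sobel(img, cv2.CV_32F, 1, 0)) +
                            np.abs(cv2.Sobel(img, cv2.CV_32F, 0, 1)))

# Prewitt: same structure as Sobel but with unweighted kernels.
pk = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
prewitt = cv2.convertScaleAbs(np.abs(cv2.filter2D(img, cv2.CV_32F, pk)) +
                              np.abs(cv2.filter2D(img, cv2.CV_32F, pk.T)))

canny = cv2.Canny(img, 100, 200)                          # thresholds must be set manually
```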
To assess the efficacy of edge enhancement in images, we employed two evaluation metrics: image entropy $H$ [22–24] and the structural similarity index (SSIM) [25–27]. In Equation 6, $p_i$ denotes the probability density of grey level $i$, and $L$ is the maximum grey level. Image entropy $H$ quantifies the level of uniformity in grayscale values within an image, with higher values indicating a larger amount of information present. SSIM, ranging from 0 to 1, gauges the preservation of structural and depth information in the image, with higher values indicating better preservation. In Equation 7, $\mu_J$ and $\mu_{\hat{J}}$ denote the luminance means of image $J$ and image $\hat{J}$, respectively; $\sigma_J$ and $\sigma_{\hat{J}}$ denote their standard deviations, and $\sigma_{J\hat{J}}$ their covariance; $C_1$ and $C_2$ are parameter constants. The detailed evaluation results are presented in Tables 1 and 2.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
$H = -\sum_{i=0}^{L-1} p_i \log_2 p_i$ (6)

$\mathrm{SSIM}(J, \hat{J}) = \frac{(2\mu_J \mu_{\hat{J}} + C_1)(2\sigma_{J\hat{J}} + C_2)}{(\mu_J^2 + \mu_{\hat{J}}^2 + C_1)(\sigma_J^2 + \sigma_{\hat{J}}^2 + C_2)}$ (7)
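A small sketch of how both metrics can be computed in Python follows; SSIM is taken from scikit-image with its default constants $C_1$ and $C_2$, and the 256-bin histogram assumes 8-bit grayscale edge maps.

```python
import numpy as np
from skimage.metrics import structural_similarity

def image_entropy(img: np.ndarray) -> float:
    """Eq (6) for an 8-bit grayscale image (L = 256 grey levels)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

# Eq (7), e.g. between an edge map and its reference:
# score = structural_similarity(img_a, img_b, data_range=255)
```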
Based on the presented tables, it is evident that Xception exhibits superior performance compared to the traditional algorithms. It not only yields higher image entropy values but also achieves at least a 0.1 increase in the structural similarity index (SSIM) over the alternative methods. Remarkably, for the physical image, Xception surpasses the Roberts operator by 0.3, clearly demonstrating its superiority in this setting. Based on the Friedman test (p = 0.05), the improvement of Xception over the other methods is statistically significant. The effectiveness of Xception can be attributed to the optimization of the loss function and the incorporation of information fusion operations. These enhancements enable Xception to extract detailed information while effectively representing the overall structural characteristics of the images.
4.3. Deep learning methods
In the comparative experiments, we implemented the HED (Holistically-Nested Edge Detection) algorithm based on the VGG16 network [28]. HED utilizes a multi-scale network architecture that combines feature maps of different resolutions, ranging from fine to coarse. This design allows the network to capture edge information at various scales, including both subtle edge details and broad structural edges, thereby enhancing the accuracy and robustness of edge detection. The purpose of this comparison is to validate the advantages of the Xception algorithm over other deep learning algorithms in extracting image edge information, particularly concerning edge accuracy and detail preservation.
Fig 8 presents a comparison of the edge detection results of the two algorithms on two representative images. The first digital image contains rich texture details and a complex background, while the second real-world image focuses on the object’s contours and fine structures. As illustrated, although the HED algorithm performs well in overall contour detection, it exhibits some breakages and discontinuities when processing fine edges and texture regions, leading to less-than-ideal edge continuity. In contrast, the algorithm based on the Xception architecture not only maintains edge continuity but also better captures the fine edges and texture details of the images. The edge lines are smoother and more precise, especially at the junctions between object edges and complex backgrounds, where its performance is particularly outstanding.
[Figure omitted. See PDF.]
To further quantitatively evaluate the performance of the two algorithms, we adopted the Pratt Figure of Merit (FOM) [29] as the evaluation metric. This metric is primarily used to assess the agreement between the edge detection results and the ideal edges, comprehensively considering the accuracy, continuity, and localization precision of edge detection. The Pratt Figure of Merit is defined as:
$\mathrm{FOM} = \frac{1}{\max(N_I, N_A)} \sum_{i=1}^{N_A} \frac{1}{1 + \alpha d_i^{2}}$ (8)
where $N_A$ is the number of predicted edge pixels, $N_I$ is the number of ideal (ground-truth) edge pixels, $d_i$ is the distance from the $i$th predicted edge pixel to the nearest true edge pixel, and $\alpha$ is a positive coefficient that adjusts the influence of distance.
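Eq (8) can be computed with a distance transform, as sketched below. The value $\alpha = 1/9$ is the coefficient conventionally used with the Pratt figure of merit and is our assumption, since the paper does not report it.

```python
import cv2
import numpy as np

def pratt_fom(detected: np.ndarray, truth: np.ndarray, alpha: float = 1 / 9) -> float:
    """Eq (8). detected and truth are binary edge maps (nonzero = edge)."""
    # Distance from every pixel to the nearest ground-truth edge pixel.
    dist = cv2.distanceTransform((truth == 0).astype(np.uint8), cv2.DIST_L2, 3)
    d = dist[detected > 0]                            # d_i for each predicted edge pixel
    n_max = max(int((truth > 0).sum()), int(d.size))  # max(N_I, N_A)
    if n_max == 0:
        return 1.0                                    # both maps empty: trivial agreement
    return float((1.0 / (1.0 + alpha * d ** 2)).sum() / n_max)
```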
As shown in Table 3, for both test images, the Xception algorithm significantly outperforms the HED algorithm in terms of the Pratt metric, with improvements exceeding 10%. This result verifies the superiority of the Xception algorithm in edge detection tasks. This advantage is manifested not only in the overall accuracy of edge detection but also in the precise capture and preservation of fine edge details.
[Figure omitted. See PDF.]
4.4. Edge vectorization
The trained network model was employed to extract the edges from Qiang embroidery patterns. Fig 9(a) showcases some example Qiang embroidery patterns, while Fig 9(b) exhibits the edge fusion image generated by the initial network. The experimental result is depicted in Fig 9(c), and subsequently, the vectorized image is obtained using an online vectorization tool, as demonstrated in Fig 9(d).
[Figure omitted. See PDF.]
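The paper uses an online conversion tool for this last step; as a reproducible stand-in, the same raster-to-SVG conversion can be performed with the Autotrace command-line tool, as sketched below. The file names are illustrative, and PNG input assumes an Autotrace build with PNG support.

```python
import subprocess

# Raster edge map -> SVG via Autotrace's command-line interface.
subprocess.run(
    ["autotrace", "-output-format", "svg",
     "-output-file", "pattern.svg", "edge_map.png"],
    check=True,
)
```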
Based on the experimental results presented in Figs 6–9, it is evident that effective pre-processing techniques contribute to the removal of isolated noise points in blank areas and the enhancement of pixel consistency throughout the image. Training the edge extraction model using the entire image enables the acquisition of rich high-level semantic information. When this information is combined with low-level feature information, the fused edge image demonstrates improved edge continuity and smoothness. Consequently, the generated vectorized image successfully meets the quality requirements for accurately depicting Qiang embroidery patterns.
4.5. Analysis of results
Given the diverse range of Qiang embroidery patterns encountered in daily life, traditional edge detection methods may not be suitable for all types of images. To address this challenge, we adopted an edge detection approach based on the Xception deep learning algorithm. In this study, we conducted experiments on a sequence of images, including digital images (Group (a): images 1 and 2) and physical images (images 3 and 4). As depicted in Fig 9, after applying distinct pre-processing techniques to each image type, notable observations can be made. The fused edge images in Group (b) exhibit edge aliasing and a loss of details in less prominent areas. However, after merging the intermediate edge maps, the final edge images in Group (c) showcase enhanced edge connectivity, effectively minimizing edge fragmentation and misidentification. By converting the resulting edge images into SVG format using an online conversion tool, they can be infinitely scaled, satisfying the quality requirements for accurately depicting Qiang embroidery patterns. These findings carry substantial academic research significance and offer practical value in digitally preserving Qiang embroidery patterns, such as for Qiang cultural promotion posters, brochures, and other tourism-related materials.
5. Conclusions
This paper applies the Xception deep neural network to perform vectorized edge extraction on Qiang embroidery patterns. The visual results clearly demonstrate the effectiveness of this approach in addressing the challenging task of vectorizing Qiang embroidery and related two-dimensional ICH images. This method facilitates the wider dissemination of Qiang embroidery patterns, thereby contributing to the inheritance and development of Qiang embroidery culture. The vectorized Qiang embroidery patterns can be applied and promoted in several dimensions:
(1) Educational and entertaining preservation of ICH: The Qiang embroidery patterns extracted through edge detection accurately depict the complete edges and internal contours of the patterns, resembling line drawings. These patterns serve as excellent teaching materials for conveying the visual characteristics and meanings of Qiang embroidery. By integrating ethnic culture into coloring books, these cultural elements can be introduced to school-age children, offering both educational and entertaining experiences; they can also serve as a stress-relieving activity for adults. Owing to their vectorized nature, the patterns can be applied not only in traditional paper media but also on new media platforms such as mobile apps.
(2) Innovative reconstruction design incorporating modern styles: By deconstructing the vectorized Qiang embroidery patterns and applying transformations such as rotation and scaling, they can be reconstructed in line with modern aesthetic trends. This enables the creation of innovative graphics that embody traditional aesthetic qualities while meeting the demands of contemporary society. Such designs can be applied in various cultural and creative derivative products, including fashion design, product packaging, decorative accessories, and handicrafts.
(3) Cultural promotion through digital media design: Diverse digital media technologies provide new opportunities for the public to gain in-depth knowledge of Chinese ICH. The rich foundation of vectorized data can be integrated into emerging cultural and technological formats. It is anticipated that Qiang embroidery will embrace technologies such as holographic projection, virtual reality, and augmented reality to present this intangible art form in a more complete, three-dimensional, vivid, and engaging manner. This will generate public interest and enthusiasm for learning Qiang embroidery, creating new opportunities for its development and survival.
The utilization of edge detection techniques for rapid vectorization of Qiang embroidery patterns not only increases their recognition and acceptance among the general public but also offers new insights for updating vectorization techniques and applying their outcomes to communication and innovative design in other domains of traditional culture and intangible heritage.
Supporting information
S1 File. XED.
https://doi.org/10.1371/journal.pone.0318930.s001
(ZIP)
References
1. Xie H. Investigation and development of Qiang embroidery industry in Beichuan. Journal of Decoration. 2016;12(2016):118–9.
2. Zhong M, Fan X, Fan P. Qiang costumes and Qiang embroidery. 2012:58–73.
3. Xu J, Zhang G. The origin and analysis of the design of Qiang embroidery pattern. Journal of Silk. 2012;49(07):49–54.
4. Luo Y. Research and development of Qiang cultural heritage database system. 2018.
5. Min Z, Li Z, Zuo Z, Lv Y, Zhao Q. The theme of Qiang embroidery pattern and its mapping relationship with Qiang culture. Journal of Silk. 2015;52(08):70–4.
6. Zhan Q, Zhao Z. Yanchuan cloth pile painting peony pattern factor extraction model and application. Journal of Silk. 2020;57(01):101–7.
7. Fang T, Ai H, Li Q, Zhu C. Study on pattern characteristics and vectorization of Dai brocade branch region. Journal of Silk. 2023;60(02):84–92.
8. Zhang G. The digital rescue and protection of the western Sichuan minority costumes of the Qiang ethnic. Shanghai: Donghua University Press. 2013:61–196.
9. Zhuoyu Y. Research on traditional pattern vectorization algorithm based on edge structure extraction [D]. Beijing University of Posts and Telecommunications Press. 2020.
10. Zhang S. Research on digital protection of Miao embroidery in western Hunan. 2021.
11. Wang D, Tang C, E S, Gao C, Ge B. Image edge detection based on guided filter Retinex and adaptive Canny. Optics and Precision Engineering. 2021;29(2):443–51.
12. Zhang Y, Han X, Zhang H. Edge detection algorithm of image fusion based on improved Sobel operator. Proceedings of the 2017 IEEE 3rd Information Technology and Mechatronics Engineering Conference (ITOEC 2017). 2017:483–7.
13. Tang Y, Xu Z, Huang X. Research on edge detection of lane image based on Roberts operator. Journal of Liaoning University of Technology (Natural Science Edition). 2017;37(6):383–6, 390.
14. Baloch A, Memon TD, Memon F, Lal B, Viyas V, Jan T. Hardware synthesize and performance analysis of intelligent transportation using Canny edge detection algorithm. IJEM. 2021;11(4):22–32.
15. Fan J. Research on the application of Qiang ethnic clothing patterns in cultural and creative products. Journal of Western Leather. 2022;44(14):134–6.
16. Ray B, Mukhopadhyay S, Hossain S, Ghosal SK, Sarkar R. Image steganography using deep learning based edge detection. Multimed Tools Appl. 2021;80(24):33475–503.
17. Mély DA, Kim J, McGill M, Guo Y, Serre T. A systematic comparison between visual cues for boundary detection. Vision Res. 2016;120:93–107. pmid:26748113
18. Soria X, Sappa A, Humanante P, Akbarinia A. Dense extreme inception network for edge detection. Pattern Recognit. 2023;139:109461.
19. Pu M, Huang Y, Guan Q, Ling H. RINDNet: Edge detection for discontinuity in reflectance, illumination, normal and depth. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2021. p. 6879–88.
20. Du Q, Zhang S, Zhang C, Li X, Xiao Y, Li X, et al. Prediction method of mud-water balanced shield tunneling velocity based on mean filter denoising and XGBoost algorithm. Modern Tunnel Technology. 2022;59(06):14–23.
21. Wu J, Shi L, Du Y, Wen L, Shi Z. Fast cell image segmentation method based on double Gaussian filter. Advances in Laser and Optoelectronics. 2022;59(02):101–9.
22. Wang T-T, Liang Z-W, Zhang R-X. Importance evaluation method of complex network nodes based on information entropy and iteration factor. Acta Phys Sin. 2023;72(4):048901.
23. Cheng H, Zhang D, Zhu J, Yu H, Chu J. Underwater target detection utilizing polarization image fusion algorithm based on unsupervised learning and attention mechanism. Sensors. 2023;23(20):5594.
24. Li Y, Qin Y, Wang H, Xu S, Li S. Study of texture indicators applied to pavement wear analysis based on 3D image technology. Sensors (Basel). 2022;22(13):4955. pmid:35808446
25. Wang W, Hu R, He H, Yang D, Ma X. Intelligent road extraction method of remote sensing image based on structured features. China Space Science and Technology. 2021;41(02):71–6.
26. Cao J, Qiang Z, Lin H, He L, Dai F. An improved BM3D algorithm based on image depth feature map and structural similarity block-matching. Sensors (Basel). 2023;23(16):7265. pmid:37631801
27. Kim T, Bang H. Fractal texture enhancement of simulated infrared images using a CNN-based neural style transfer algorithm with a histogram matching technique. Sensors (Basel). 2022;23(1):422. pmid:36617018
28. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. Proceedings of the 3rd International Conference on Learning Representations (ICLR). 2015.
29. Kumar R, Mali K. A novel approach to edge detection and performance measure based on the theory of "range" and "Bowley's measure of skewness" in a noisy environment. Journal of Image Processing & Pattern Recognition Progress. 2021;8(1):31–8.
Citation: Chen A, Peng Y, Li M, Chen H, Liu C, Hu J, et al. (2025) Generate vector graphics of fine-grained pattern based on the Xception edge detection. PLoS One 20(6): e0318930. https://doi.org/10.1371/journal.pone.0318930
About the Authors:
Anqi Chen
Roles: Conceptualization
Affiliation: Chengdu Technological University, Chengdu, China
ORCID: https://orcid.org/0009-0004-4821-5542
Yicui Peng
Roles: Software
Affiliation: School of Computer Science, Chengdu University of Information Technology, Chengdu, China
Meng Li
Roles: Validation
Affiliation: School of Computer Science, Chengdu University of Information Technology, Chengdu, China
Hao Chen
Roles: Methodology
Affiliation: School of Computer Science, Chengdu University of Information Technology, Chengdu, China
Chang Liu
Roles: Validation
Affiliation: College of Computer Science, Sichuan University, Chengdu, China
Jinrong Hu
Roles: Data curation
Affiliation: School of Computer Science, Chengdu University of Information Technology, Chengdu, China
Xiang Wen
Roles: Formal analysis
Affiliation: China Mobile (Chengdu) Industry Research Institute, Chengdu, China
Guo Huang
Roles: Conceptualization
E-mail: [email protected]
Affiliation: Leshan Normal University, Leshan, China
ORCID: https://orcid.org/0000-0001-8109-7833
© 2025 Chen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Harnessing the power of artificial intelligence (AI) approaches to innovatively generate vector graphics of fine-grained patterns has become an important task in image edge extraction, particularly in the domain of intangible cultural heritage (ICH) images, which are typically fine-grained and have complex edges. With a high degree of autonomy, machine learning algorithms can accurately extract image information and understand and convey the concepts it contains. In this paper, we take Qiang embroidery patterns as an example because their fine-grained patterns make them well suited to the study of image processing and pattern recognition techniques. We first adopt appropriate pre-processing methods, improved adaptive median filtering (IAMF) and non-local means, for the two different types of Qiang embroidery patterns to reduce image noise. Then, the Xception algorithm based on convolutional neural networks (CNNs) is used for edge detection and extraction to generate vector graphics of the patterns. Experimental results show that, after denoising and edge extraction, the shape characteristics of Qiang embroidery patterns can be clearly identified. Based on this approach, the images can be converted into vector graphics for digital preservation and further artistic reinterpretation. The use of the Xception algorithm effectively solves the problem of extracting Qiang embroidery shapes as two-dimensional vector images. In addition, our proposed method provides a reliable practical reference for the preservation of other related ICH images.