1. Introduction
Electrowetting (EW) is a phenomenon in which an electric field is used to alter the contact angle of a droplet on a solid surface, enabling precise manipulation and regulation of the droplet. In the 1980s, Beni et al. [1] demonstrated the electrowetting effect using mercury droplets and coined the term “electrowetting,” initiating research in this field. In the 1990s, Sondag-Huethorst and Fokkink [2,3] observed the electrowetting effect on sulfide-modified metal electrodes, but the effect was still limited by electrolysis. Berge [4] proposed covering the metal electrode with an insulating layer to solve the electrolysis problem, leading to rapid development of electrowetting technology. Electrowetting technology offers several advantages, including fast response time, low power consumption, simple structure, and high integration [5,6,7], making it one of the future directions for display technology. In recent years, significant breakthroughs have been made in electrowetting display technology, achieving long-term stable video display and laying the foundation for industrial production. However, the production process is prone to various imperfections, and defects such as burn-in, charge trapping, and pixel wall distortion are frequently observed and documented in numerous experiments, compromising display quality and economic value [8]. To improve the commercialization of electrowetting display technology, non-destructive inspection methods based on deep learning must be used to accurately identify and classify defects in electrowetting devices. Statistical analysis should also be conducted to determine the causes and locations of defects and to guide improvement and repair of device structures and production processes.
Along with the rapid development of electrowetting display technology, the display resolution of electrowetting devices is increasing year by year. This requires the defect detection network to be highly capable of detecting small defects while also meeting the real-time requirements of defect detection. Therefore, a lightweight network model should be used as the backbone of the defect detection model; this improves detection speed while reducing the network’s parameter count and computational complexity, thereby reducing the cost of deploying and operating the network. At present, there are few studies on electrowetting display defect detection, and most are concentrated in the traditional machine vision field. Liao [9] proposed an improved Otsu algorithm optimized for the typically unimodal histograms of electrowetting images, improving the segmentation of background from electrowetting display defects. Xiong [10] proposed a histogram-gradient-weighting method that calculates the gradient value of each gray level in the histogram and weights the gray levels accordingly to obtain a new histogram; this method effectively improves the precision, stability, and robustness of electrowetting display defect detection. In the broader field of display defect detection, convolutional neural networks (CNNs) have been attracting researchers’ attention. Chang et al. [11] used a CNN model to perform multi-class classification of micro-defects on Thin-Film Transistor Liquid Crystal Displays (TFT-LCDs), effectively identifying micro-defects on TFT-LCD panels; Çelik et al. [12] compared RetinaNet [13], M2Det [14], and YOLOv3 [15] for detecting pixel-level defects and found that the RetinaNet-based architecture provided the best balance between accuracy and time usage. However, these studies were not specifically optimized for the characteristics of electrowetting display defects, resulting in low detection accuracy, slow detection speed, and bulky models. They did not consider the industrial requirements for detection accuracy, real-time performance, and generalization, which may lead to missed or false detections of display defects during production.
At present, there are two primary categories of detection schemes. The first is based on the equivalent capacitance method [16,17], which identifies defects such as electrical damage and non-ideal oil movement in pixel units by monitoring changes in equivalent capacitance values. However, this approach has several limitations, including high detection costs, slow detection speeds, and low accuracy in detecting small target defects. The second category is based on machine vision [18,19]. In this field, Chiang et al. successfully identified defects in electrowetting display devices using automatic optical inspection and classified the defect types, and Luo et al. proposed a low-cost drive and detection scheme that successfully detected multiple types of electrowetting defects. Machine-vision-based detection of electrowetting display defects offers low detection cost and fast detection speed, but it still has shortcomings in generalization: the types of defects it can detect are limited, and real-time detection is not possible.
With the continuous development of deep learning technology, defect detection based on deep learning has become one of the mainstream detection methods in industrial production, offering high accuracy, convenient deployment, and strong robustness. Target detection algorithms based on deep learning have been widely applied; common examples include Faster R-CNN [20], YOLO [21], and SSD [22]. These algorithms can effectively detect the location and type of electrowetting display defects in an image with a certain degree of accuracy and real-time performance. In addition, to meet the differing performance requirements of different detection scenarios, researchers have proposed improved target detection algorithms, such as Mask R-CNN [23] and Cascade R-CNN [24], which improve detection accuracy, speed, and multi-scale detection. However, research on defect detection networks specifically optimized for electrowetting displays is still in its infancy and has not achieved a balance between detection performance and network lightweighting. This article aims to construct a high-performance electrowetting display defect detection network, EW-YOLOv7 (Electrowetting-You Only Look Once Version 7), which is based on the YOLOv7 [25] detection network and makes targeted improvements for the high-precision, low-latency, and lightweight requirements of electrowetting display defect detection.
2. Research on Electrowetting Display Defect Detection Algorithm
The model construction process for this paper is shown in Figure 1. We annotated 5040 images (augmented from 560 original samples; see Section 3.1) to construct a common electrowetting display device defect dataset. The dataset was divided into training, testing, and validation sets in an 8:1:1 ratio. We then used the training and testing sets to train different electrowetting defect detection models and performed ablation experiments on the trained models using the validation set. After evaluating the experimental results, we selected the detection model with the strongest overall detection performance to complete the construction of the electrowetting display device defect detection model.
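As a concrete illustration of this split, the following minimal Python sketch (file paths and the random seed are hypothetical, not from our pipeline) divides an image list in an 8:1:1 ratio:

```python
# Hypothetical 8:1:1 train/test/validation split of the annotated image list.
import random

random.seed(42)  # fixed seed so the split is reproducible

images = [f"ewd_defects/img_{i:04d}.jpg" for i in range(5040)]  # placeholder paths
random.shuffle(images)

n = len(images)
n_train, n_test = int(n * 0.8), int(n * 0.1)
train_set = images[:n_train]
test_set = images[n_train:n_train + n_test]
val_set = images[n_train + n_test:]

print(len(train_set), len(test_set), len(val_set))  # 4032 504 504
```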
2.1. Target Detection Algorithm
YOLO series networks are high-performance single-stage object detection models widely used for real-time detection tasks.
This paper proposes an improved YOLO series network, EW-YOLOv7, specifically designed for display defect detection in electrowetting device production, as shown in Figure 2. Based on YOLOv7, the latest version of the YOLO series at the time of writing, the network is optimized in two respects: first, to improve its ability to detect small targets in electrowetting displays; and second, to reduce the hardware requirements of the detection network.
To achieve these two optimizations, the EW-YOLOv7 network adopts the following strategies. First, to improve the representation ability of the model and reduce the interference of invalid targets, we introduce the Acmix attention mechanism into the original network to strengthen its image feature extraction. Second, to address the large parameter volume, complex computation, and slow detection speed of the original YOLOv7, we integrate the EW-GhostNetV2 backbone module into the network backbone to reduce the parameter count and computation volume, thereby lowering hardware requirements. Finally, we use EW-NGWDLoss, a loss function adapted for electrowetting display detection, as EW-YOLOv7's loss function; because this loss is insensitive to positional changes of electrowetting display defects, especially small target defects, the network's recognition performance improves. Experimental results show that, compared with the original network, the improved EW-YOLOv7 detects electrowetting display defects in complex environments more effectively.
2.2. Introduction of Acmix Attention Mechanism
To improve the accuracy of the EW-YOLOv7 network in detecting electrowetting display defects, this paper introduces the Acmix attention mechanism into the improved model to enhance its performance in detecting small target defects of electrowetting displays.
The Acmix attention mechanism integrates self-attention and convolution within a single module: the self-attention path captures the correlation between each element of the feature map and all other elements, while the convolution path extracts local patterns. Combining the two effectively improves the network's performance in detecting small target defects of electrowetting displays.
Compared with a pure self-attention or pure convolution mechanism, Acmix better captures both the global and local correlations between feature elements, thereby improving detection performance. The structure of the Acmix module is shown in Figure 3.
Acmix consists of two stages:
The first stage: the electrowetting display defect features are projected through three 1 × 1 convolution operations and reshaped into N pieces, yielding a rich set of 3 × N intermediate defect sub-features.
The second stage: this stage consists of a self-attention path and a convolution path. The self-attention path uses a self-attention mechanism to enhance feature expression while retaining global information: the sub-features output by stage one serve as the query, key, and value of the multi-head self-attention module, and, after shifting, feature fusion, and convolution operations, the self-attention defect feature $F_{att}$ is obtained. The convolution path applies a fully connected transformation to the stage-one sub-features and then performs shifting, feature fusion, and convolution operations, equivalent to a convolution with kernel size k, to obtain the convolution-path defect feature $F_{conv}$.
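The fusion of the two paths can be sketched in PyTorch as follows. This is an illustrative simplification, not the authors' implementation: the real Acmix convolution path applies a fully connected projection followed by feature shifts, which we stand in for with a plain k × k convolution, and the attention here is computed densely over all positions.

```python
# Minimal sketch of the Acmix two-path fusion: shared 1x1 projections feed
# a self-attention path and a convolution path, mixed by two learnable
# scalars (see Eq. (1) below).
import torch
import torch.nn as nn

class ACmixSketch(nn.Module):
    def __init__(self, channels: int, heads: int = 4, k: int = 3):
        super().__init__()
        # Stage one: three 1x1 convolutions produce query/key/value maps.
        self.to_q = nn.Conv2d(channels, channels, 1)
        self.to_k = nn.Conv2d(channels, channels, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)
        self.heads = heads
        # Stand-in for the convolution path (real Acmix uses FC + shifts).
        self.conv_path = nn.Conv2d(channels, channels, k, padding=k // 2)
        # Learnable mixing scalars alpha and beta from Eq. (1).
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k_, v = self.to_q(x), self.to_k(x), self.to_v(x)

        def split(t):  # (B, C, H, W) -> (B, heads, H*W, C/heads)
            return t.view(b, self.heads, c // self.heads, h * w).transpose(2, 3)

        # Self-attention path; O((HW)^2) memory, fine for an illustration.
        attn = torch.softmax(
            split(q) @ split(k_).transpose(2, 3) / (c // self.heads) ** 0.5, dim=-1
        )
        f_att = (attn @ split(v)).transpose(2, 3).reshape(b, c, h, w)
        # Convolution path reuses the shared stage-one projections.
        f_conv = self.conv_path(q + k_ + v)
        return self.alpha * f_att + self.beta * f_conv
```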
Finally, the results of the two paths are added together, with their relative intensities controlled by two learnable scalars. The formula is as follows:

$F_{out} = \alpha F_{att} + \beta F_{conv}$ (1)
where $F_{out}$ represents the final output of the Acmix module, $F_{att}$ represents the output of the self-attention path, $F_{conv}$ represents the output of the convolution path, and $\alpha$ and $\beta$ are learnable scalars that reflect the model's bias towards convolution or self-attention at different depths.
2.3. Integrating Lightweight Backbone Network Module EW-GhostNetV2
The original YOLOv7 network suffers from a large parameter volume, high computational complexity, and slow detection speed, which places high demands on edge devices and, in industrial deployment, increases cost. To address this, this paper proposes EW-GhostNetV2, a lightweight backbone network module based on GhostNetV2 and adapted to YOLOv7, to reduce the computational cost of EW-YOLOv7 during detection and improve its inference speed. On top of the original Ghost module, EW-GhostNetV2 introduces a decoupled fully connected (DFC) attention mechanism built from fully connected layers. It executes quickly on common hardware and captures dependencies between long-distance pixels, effectively enhancing the expanded features generated by the cheap operations in the Ghost module.
When the DFC attention is implemented using convolutions, its theoretical complexity is:

$\mathcal{O}(K_H H W) + \mathcal{O}(K_W H W)$ (2)

where $K_H$ and $K_W$, respectively, represent the height and width of the convolution kernels, and $W$ and $H$, respectively, represent the width and height of the feature map.
The input defect feature of the electrowetting display is fed in parallel to the Ghost branch and the DFC branch to obtain the output feature $Y$ and the attention matrix $A$. After the Sigmoid function is used to normalize $A$, the results of the two branches are multiplied element-wise:

$O = \mathrm{Sigmoid}(A) \odot Y$ (3)
Compared with the efficient Ghost module, DFC attention is less concise: running it in parallel with the Ghost module at full resolution would introduce a high computational cost. Because the FLOPs of DFC attention scale with the feature area, reducing the height and width of the features to half of their original size removes 75% of its FLOPs ($\tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}$ of the original area). Therefore, downsampling the features in the horizontal and vertical directions speeds up the execution of DFC attention.
As shown in Figure 4 and Figure 5, EW-GhostNetV2 performs the first Ghost module and DFC Attention in parallel to enhance the extended features and then inputs the enhanced features into the second Ghost module to generate output features. Compared with the inverted bottleneck design of GhostNet using only two Ghost modules, EW-GhostNetV2 captures long-distance dependencies between pixels at different spatial positions, and the model’s expression ability is enhanced.
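A hedged PyTorch sketch of such a block is given below; the 1 × 5 and 5 × 1 depthwise kernels follow the DFC design described in the GhostNetV2 paper, while the channel sizes, the 2× downsampling factor, and the residual shortcut are illustrative assumptions rather than the exact EW-GhostNetV2 module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GhostModule(nn.Module):
    """Primary 1x1 conv + cheap depthwise conv; outputs concatenated."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        c_hidden = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_hidden, 1, bias=False),
            nn.BatchNorm2d(c_hidden), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(  # depthwise "cheap operation"
            nn.Conv2d(c_hidden, c_hidden, 3, padding=1, groups=c_hidden, bias=False),
            nn.BatchNorm2d(c_hidden), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class DFCAttention(nn.Module):
    """Decoupled fully connected attention: horizontal + vertical depthwise
    convolutions on downsampled features (cf. Eqs. (2) and (3))."""
    def __init__(self, channels: int):
        super().__init__()
        self.reduce = nn.Conv2d(channels, channels, 1, bias=False)
        self.horizontal = nn.Conv2d(channels, channels, (1, 5), padding=(0, 2),
                                    groups=channels, bias=False)
        self.vertical = nn.Conv2d(channels, channels, (5, 1), padding=(2, 0),
                                  groups=channels, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        a = F.avg_pool2d(x, 2)                 # downsample: ~75% fewer FLOPs
        a = self.vertical(self.horizontal(self.reduce(a)))
        a = torch.sigmoid(a)                   # normalize the attention map
        return F.interpolate(a, size=(h, w), mode="nearest")

class GhostV2Block(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ghost1 = GhostModule(channels, channels)
        self.ghost2 = GhostModule(channels, channels)
        self.attn = DFCAttention(channels)

    def forward(self, x):
        y = self.ghost1(x) * self.attn(x)      # Ghost branch x DFC attention
        return self.ghost2(y) + x              # second Ghost module + shortcut
```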
2.4. Introduction of Normalized Gaussian Wasserstein Distance
The CIoU loss function used in the original YOLOv7 is highly sensitive to object scale. As shown in Figure 6, the predicted box is very sensitive to offsets of small target defects in the electrowetting display: even a tiny positional change causes a large change in IoU. For normal-sized electrowetting display defects, the same positional change produces only a small change in IoU, showing that IoU responds inconsistently to defects of different scales. For small target defects in electrowetting displays, IoU is therefore a poor measure, and we introduce the Normalized Gaussian Wasserstein Distance loss function into the improved YOLOv7 network to meet the needs of detecting small target defects in electrowetting devices.
2.4.1. Bounding Box Two-Dimensional Gaussian Distribution Modeling
Conventional bounding boxes are represented by rectangles, and the corresponding IoU focuses on the overlap between boxes, which is not suitable for small target defects in electrowetting displays. Detecting such defects should pay more attention to the position of the defect center: in small-target boxes, background pixels dominate and are mostly concentrated near the edges, while the foreground generally lies in the middle. To weight the pixels in the bounding box accordingly, the box is modeled as a two-dimensional Gaussian distribution whose density contour is the box's inscribed ellipse:
$\dfrac{(x - \mu_x)^2}{\sigma_x^2} + \dfrac{(y - \mu_y)^2}{\sigma_y^2} = 1$ (4)

where $(\mu_x, \mu_y) = (cx, cy)$ is the center of the rectangle, and $\sigma_x = w/2$ and $\sigma_y = h/2$ are half the width and height of the rectangle. The probability density function of the two-dimensional Gaussian distribution is:
$f(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \dfrac{\exp\!\left(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^{\top} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right)}{2\pi \lvert \boldsymbol{\Sigma} \rvert^{1/2}}$ (5)

where $\mathbf{x}$ is the position variable, and $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ represent the mean vector and covariance matrix. Under the condition $(\mathbf{x} - \boldsymbol{\mu})^{\top} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) = 1$, the equi-density contour of the two-dimensional Gaussian distribution coincides with the inscribed ellipse of Equation (4). The bounding box $R = (cx, cy, w, h)$ can thus be modeled as a two-dimensional Gaussian distribution with:
$\boldsymbol{\mu} = \begin{bmatrix} cx \\ cy \end{bmatrix}, \quad \boldsymbol{\Sigma} = \begin{bmatrix} \frac{w^2}{4} & 0 \\ 0 & \frac{h^2}{4} \end{bmatrix}$ (6)
2.4.2. Normalized Gaussian Wasserstein Distance
After the two-dimensional Gaussian modeling of the bounding boxes is complete, the Wasserstein Distance (WD) from optimal transport theory is used to calculate the distance between the predicted distribution and the true distribution (obtained by transforming the predicted and ground-truth bounding boxes). For two Gaussian distributions $\mu_1 = \mathcal{N}(m_1, \Sigma_1)$ and $\mu_2 = \mathcal{N}(m_2, \Sigma_2)$, the second-order Wasserstein distance is:

$W_2^2(\mu_1, \mu_2) = \lVert m_1 - m_2 \rVert_2^2 + \left\lVert \Sigma_1^{1/2} - \Sigma_2^{1/2} \right\rVert_F^2$ (7)
where $\lVert \cdot \rVert_F$ denotes the Frobenius norm. Substituting the two box distributions $\mathcal{N}_a$ and $\mathcal{N}_b$, built from boxes $(cx_a, cy_a, w_a, h_a)$ and $(cx_b, cy_b, w_b, h_b)$, yields:

$W_2^2(\mathcal{N}_a, \mathcal{N}_b) = \left\lVert \left[cx_a,\ cy_a,\ \tfrac{w_a}{2},\ \tfrac{h_a}{2}\right]^{\top} - \left[cx_b,\ cy_b,\ \tfrac{w_b}{2},\ \tfrac{h_b}{2}\right]^{\top} \right\rVert_2^2$ (8)
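The following hedged PyTorch sketch evaluates the box-to-Gaussian mapping of Equation (6) and the distance of Equation (8) in closed form, then applies the exponential normalization introduced below as Equation (9); the constant C here is a placeholder, not the dataset statistic used in the paper:

```python
import torch

def nwd(pred: torch.Tensor, target: torch.Tensor, C: float = 12.8) -> torch.Tensor:
    """pred, target: (..., 4) tensors of (cx, cy, w, h) boxes. C is a placeholder."""
    # Eq. (6): Gaussian parameters (cx, cy, w/2, h/2) stacked as a vector.
    ga = torch.cat([pred[..., :2], pred[..., 2:] / 2], dim=-1)
    gb = torch.cat([target[..., :2], target[..., 2:] / 2], dim=-1)
    w2 = ((ga - gb) ** 2).sum(dim=-1)          # Eq. (8): squared W2 distance
    return torch.exp(-torch.sqrt(w2) / C)      # Eq. (9): exponential normalization

# A training loss would then take the form 1 - nwd(pred, target).
box_a = torch.tensor([10.0, 10.0, 4.0, 4.0])
box_b = torch.tensor([11.0, 10.0, 4.0, 4.0])   # 1-pixel offset of a tiny box
print(1 - nwd(box_a, box_b))                   # small loss despite the tiny box
```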
Finally, using exponential normalization, the Normalized Wasserstein Distance (NWD) is obtained:

$NWD(\mathcal{N}_a, \mathcal{N}_b) = \exp\!\left(-\dfrac{\sqrt{W_2^2(\mathcal{N}_a, \mathcal{N}_b)}}{C}\right)$ (9)
where $C$ is a constant whose value is related to the dataset; in this paper, $C$ is taken as the average absolute size of the targets in the dataset.
2.5. Network Model Training and Evaluation Indicators
This study evaluates the accuracy, detection speed, and structural complexity of the network in detecting electrowetting display defects through ablation experiments. The main indicators used to judge detection performance are precision (P), recall (R), and mean average precision (mAP). The calculation formulas are as follows:
$P = \dfrac{TP}{TP + FP}$ (10)

$R = \dfrac{TP}{TP + FN}$ (11)

$mAP = \dfrac{1}{N}\sum_{i=1}^{N} AP_i, \qquad AP_i = \int_0^1 P_i(R)\,dR$ (12)

where $TP$, $FP$, and $FN$ denote the numbers of true positives, false positives, and false negatives, respectively, and $N$ is the number of defect categories.
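For illustration, the snippet below computes precision, recall, and a single-class AP from confidence-sorted detection flags; it is a simplified all-points integration, not the exact evaluation script used in our experiments:

```python
import numpy as np

def average_precision(is_tp: np.ndarray, n_gt: int) -> float:
    """is_tp: 1/0 flags of detections sorted by descending confidence."""
    tp = np.cumsum(is_tp)
    fp = np.cumsum(1 - is_tp)
    precision = tp / (tp + fp)              # Eq. (10)
    recall = tp / n_gt                      # Eq. (11), since FN = n_gt - TP
    # Integrate precision over recall (Eq. (12) for one class).
    return float(np.trapz(precision, recall))

flags = np.array([1, 1, 0, 1, 0, 1])        # toy detections, 8 ground truths
print(average_precision(flags, n_gt=8))
```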
Weight size (Weight (MB)), parameter count (Param), and giga floating-point operations (GFLOPS) are selected as the standards for evaluating network lightweighting. In addition, network inference time (ms) is selected as the indicator of network detection speed.
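These indicators can be measured as in the hedged sketch below, where the model is a stand-in for the detector and the warm-up and repetition counts are illustrative:

```python
import time
import torch
import torchvision

model = torchvision.models.resnet18().eval()   # stand-in for the detector
params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"Param: {params_m:.2f} M")

x = torch.randn(1, 3, 577, 488)                # the dataset's image resolution
with torch.no_grad():
    for _ in range(5):                         # warm-up runs
        model(x)
    t0 = time.perf_counter()
    for _ in range(50):                        # timed runs, averaged
        model(x)
    print(f"Inference time: {(time.perf_counter() - t0) / 50 * 1000:.1f} ms")
```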
3. Experimental Results and Analysis
3.1. Dataset Analysis
We acquired and processed images of various electrowetting display devices exhibiting common defects and created a novel dataset: the Common Electrowetting Display Device Defect Dataset. We applied data augmentation techniques, such as cropping, adding noise, and changing brightness, to the original 560 sample images to generate a total of 5040 electrowetting display device images. The dataset comprises seven categories based on defect type, as follows:
Functional display device: Figure 7a;
Pixel wall distortion: Figure 7b, voltage alters droplet morphology, resulting in irregular pixel wall dimensions that impair display quality;
Charge trapping: Figure 7c, ions in electrolyte solution accumulate on solid surface under electric field, forming charge layer that diminishes voltage-induced force on droplet, leading to contact angle saturation that constrains electrowetting modulation range;
Conductive layer damage: Figure 7d, current produces heat that causes conductive layer to overheat and burn or melt, compromising electrowetting stability and reliability;
Ink opening: Figure 7e, oil phase and water phase interface instability causes oil phase to separate into small droplets or films that affect display uniformity and clarity;
Ink leakage: Figure 7f, insufficient interfacial tension between oil phase and water phase allows oil phase to escape from fluid chamber, resulting in display malfunction or damage to other components;
Hydrophobic layer deterioration: Figure 7g, prolonged use erodes hydrophobicity of hydrophobic layer, preventing droplets from forming optimal contact angle on it, impairing electrowetting performance.
The resolution of each image is 577 × 488 pixels. Table 1 presents the sample distribution of each category in the dataset, and Figure 7 displays an original sample image of each category.
To meet the high-precision requirements of electrowetting display defect detection, this study used the Python-OpenCV library to perform data augmentation on the different defect types in the original dataset. Applying operations such as flipping, rotating, cropping, scaling, and color adjustment produced a total of 5040 image samples. These augmentations also simulate the image quality degradation caused by machine or environmental factors. We use this dataset to train the electrowetting display defect detection network, enhancing its detection capability and robustness in complex production environments.
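A hedged OpenCV sketch of such an augmentation pipeline is shown below; the rotation angle, crop margins, brightness factors, noise level, and file name are illustrative placeholders, not the exact parameters used to build the dataset:

```python
import cv2
import numpy as np

def augment(img: np.ndarray) -> list:
    h, w = img.shape[:2]
    out = [cv2.flip(img, 1)]                               # horizontal flip
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)   # rotate 10 degrees
    out.append(cv2.warpAffine(img, M, (w, h)))
    crop = img[h // 8: h - h // 8, w // 8: w - w // 8]     # center crop
    out.append(cv2.resize(crop, (w, h)))                   # rescale to original size
    out.append(cv2.convertScaleAbs(img, alpha=1.2, beta=15))  # brightness/contrast
    noise = np.random.normal(0, 8, img.shape).astype(np.float32)  # sensor noise
    out.append(np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8))
    return out

img = cv2.imread("sample_defect.jpg")          # placeholder path
augmented = augment(img)
```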
3.2. Experimental Results Analysis
3.2.1. Comparison of Verification Results of Different Detection Algorithms
To confirm that the proposed EW-YOLOv7 model achieves higher detection accuracy at a lower deployment cost, comparative experiments were conducted against other advanced object detectors, including Faster RCNN, SSD, YOLOv5, and YOLOv7, on the common electrowetting display defect dataset. The specific data are shown in Table 2:
Thanks to their one-stage structure, the SSD and YOLO series models complete target localization and classification in a single forward pass, which greatly reduces inference time and parameter consumption. Compared with two-stage algorithms such as Faster RCNN, they are more efficient: even SSD, the weakest one-stage model in the comparison, requires only 39.3% of the parameters and 30.6% of the inference time of Faster RCNN. In addition, the YOLO series adopts a lightweight backbone network to further reduce redundant computation and improve feature extraction capability; the parameter count and inference time of the original YOLOv7 network are 62.2% and 28.9% lower, respectively, than those of SSD. The YOLO series also performs well in detecting electrowetting display defects: compared with SSD, YOLOv5 and YOLOv7 increase mAP by 1.3% and 6.7%, respectively, and they effectively avoid missed and false detections when dealing with dense or overlapping electrowetting display targets. However, there is still room for optimization in detection accuracy, especially for small targets, and in model lightweighting. For the electrowetting display defect detection task, this paper therefore proposes the EW-YOLOv7 network model based on the existing YOLOv7, making targeted improvements while keeping the computational load within a controllable range, to achieve high recall, high detection accuracy, and fast inference. In the experimental evaluation on the common electrowetting display defect dataset, EW-YOLOv7 ranks among the top in all indicators, achieving a balance between detection accuracy, detection speed, and network lightweighting.
We conducted identical experiments on the YOLOv7-tiny version, integrating the same improvement modules as EW-YOLOv7 into the tiny version of v7 to produce EW-YOLOv7-tiny (hereafter EW-tiny). In this trial, EW-tiny's detection performance was surpassed only by the original YOLOv7 and EW-YOLOv7, while its parameter count and inference time were further reduced compared to the original YOLOv7-tiny model. If a lightweight electrowetting defect detection model is required, EW-tiny is the superior choice.
3.2.2. Ablation Experiment
To verify the superiority of the proposed algorithm in detecting electrowetting display defects, ablation experiments were conducted on the common electrowetting display defect dataset for each improved module; the results are shown in Table 3 and Table 4:
As shown in the tables above, integrating the Acmix attention mechanism and NGWD Loss into the YOLOv7 algorithm improves the detection accuracy of the network, which is reflected most clearly in the detection accuracy for small target defects.
As demonstrated in Table 5, integrating the EW-ACmix attention mechanism and the NGWD Loss function significantly enhanced the YOLOv7 model's performance in detecting small and medium target defects in electrowetting display devices: the accuracy in detecting charge trapping and degradation defects increased compared to the original network. However, the attention mechanism weakened the model's ability to recognize large-scale targets, resulting in a 9.7% decrease in accuracy on normal display images when only the EW-ACmix module was integrated. The introduction of the loss function also increased the network's parameters, weights, and inference time, impacting its deployment performance. In terms of lightweighting, integrating EW-GhostNetV2 into the v7 model reduced the network's parameters, GFLOPS, weights, and inference time by 19.3%, 64.3%, 28.7%, and 29.6%, respectively. The DFC attention mechanism improved EW-GhostNetV2's ability to capture long-range pixel dependencies, raising the AP for normal display, charge trapping, and degradation by 2.7%, 6%, and 7.5%, respectively. This enhanced the network's deployment performance while slightly improving its detection accuracy, achieving a balance between model lightness and detection performance.
The three modules exhibit excellent performance in terms of detection accuracy and network lightweighting. By integrating them simultaneously into the network, we obtained the proposed EW-YOLOv7 model, whose detection process is illustrated in Figure 8. The integration of the lightweight GhostNetV2 module reduced the number of network parameters and GFLOPS by 19.2% and 64.3%, respectively, compared to the original YOLOv7 model, significantly decreasing the amount of network computation. The inference time and weight size of EW-YOLOv7 were reduced by 28.9% and 28.4%, respectively, greatly enhancing its deployability. In terms of detection accuracy, the optimization of the Acmix attention mechanism and the NGWD loss function for defect detection, particularly small target defects, improved the network's ability to identify such defects and increased its average detection accuracy by 8.7% compared to the original network. The experimental results demonstrate that EW-YOLOv7 outperforms the original YOLOv7 network in detection accuracy, speed, and lightweight deployment and is well suited for defect detection in the industrial production of electrowetting display devices.
4. Conclusions
This manuscript addresses the current challenges of instability and low precision in defect detection for electrowetting display devices. To provide data support for this field, we constructed a dataset comprising 5040 sample images that cover seven major categories of electrowetting display device defects. We propose a lightweight defect detection network, EW-YOLOv7, based on YOLOv7, with targeted enhancements: integrating the EW-GhostNetV2 module and the EW-ACmix attention mechanism and introducing the NGWD Loss function. These improvements enhance the network's performance in detecting electrowetting display defects and its deployability. The experimental results demonstrate that EW-YOLOv7 outperforms other mainstream detection networks in accuracy, speed, and model lightweighting, making it well suited for deployment in the industrial production of electrowetting display devices. Compared to traditional machine-vision-based methods, our deep-learning-based approach exhibits stronger generalization and is less susceptible to environmental factors and image quality. It is not restricted to a fixed set of defect types and achieves high-speed real-time detection, making it more suitable for complex industrial production environments. However, there is still room to improve the model's ability to identify small target defects, for example by adding a small-target detection layer to enrich the semantic information extracted from the images. Additionally, our dataset does not yet cover all types of electrowetting device defects; further expansion is necessary to improve the model's generalization before it can be formally applied to industrial production. In future work, we will continue to refine the model structure by optimizing the multi-scale feature fusion network and enhancing its adaptability to different production environments.
Author Contributions: Conceptualization, Z.Z.; Methodology, Z.L.; Formal analysis, N.C.; Investigation, J.W., Z.X. and S.L.; Writing—original draft, Z.Z.; Writing—review and editing, N.C.; Visualization, Z.L.; Funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement: The study did not involve humans or animals.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest: The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Flowchart for the construction of electrowetting display device defect detection model.
Figure 6. The sensitivity analysis of IoU on tiny and normal scale objects. Note that each grid denotes a pixel; box A denotes the ground truth bounding box; boxes B and C denote the predicted bounding boxes with 1 pixel and 4 pixels diagonal deviation, respectively. (a) Tiny scale object; (b) Normal scale object.
Table 1. Sample distribution of the dataset.
Classes | Total | Train | Test |
---|---|---|---|
Burnt | 720 | 576 | 144 |
Charge Trapping | 720 | 576 | 144 |
Deformation | 720 | 576 | 144 |
Degradation | 720 | 576 | 144 |
Oil Leakage | 720 | 576 | 144 |
Oil Splitting | 720 | 576 | 144 |
Normal | 720 | 576 | 144 |
Total | 5040 | 4032 | 1008 |
Table 2. Results of different detection models.
Models | Precision | Recall | mAP | Param | Inference Time (ms)
---|---|---|---|---|---|
Faster RCNN | 0.635 | 0.821 | 0.693 | 250.69 M | 230.4 |
SSD | 0.714 | 0.863 | 0.756 | 98.48 M | 70.6 |
YOLOv5 | 0.766 | 0.856 | 0.769 | 27.56 M | 38.9 |
YOLOv7 | 0.826 | 0.884 | 0.823 | 37.22 M | 50.2 |
YOLOv7-tiny | 0.659 | 0.786 | 0.485 | 6.17 M | 24.3 |
EW-YOLOv7 | 0.869 | 1.000 | 0.895 | 30.07 M | 35.9 |
EW-YOLOv7-tiny | 0.814 | 0.966 | 0.787 | 6.03 M | 20.6 |
Table 3. Ablation experiment results (detection performance).
Method | ACmix | GhostNetV2 | NGWD | Precision | Recall | mAP |
---|---|---|---|---|---|---|
YOLOv7 | × | × | × | 0.826 | 0.884 | 0.823 |
YOLOv7 | √ | × | × | 0.837 | 1.000 | 0.857 (+4.1%) |
YOLOv7 | × | √ | × | 0.835 | 0.869 | 0.831 (+0.9%) |
YOLOv7 | × | × | √ | 0.816 | 1.000 | 0.842 (+2.3%) |
YOLOv7 | √ | √ | × | 0.874 | 1.000 | 0.868 (+5.4%) |
YOLOv7 | √ | × | √ | 0.833 | 0.943 | 0.882 (+7.1%) |
YOLOv7 | × | √ | √ | 0.863 | 0.912 | 0.845 (+2.6%) |
YOLOv7 | √ | √ | √ | 0.869 | 1.000 | 0.895 (+8.7%) |
Table 4. Ablation experiment results (model scale).
Method | ACmix | GhostNetV2 | NGWD | Param | Weight (MB) | Inference Time (ms) | GFLOPS
---|---|---|---|---|---|---|---
YOLOv7 | × | × | × | 37.22 M | 74.9 | 50.2 | 103.3 |
YOLOv7 | √ | × | × | 38.43 M | 75.6 | 53.6 | 103.3 |
YOLOv7 | × | √ | × | 30.02 M | 53.4 | 35.3 | 36.8 |
YOLOv7 | × | × | √ | 39.22 M | 76.8 | 55.7 | 103.3 |
YOLOv7 | √ | √ | × | 30.08 M | 53.7 | 37.4 | 36.8 |
YOLOv7 | √ | × | √ | 40.27 M | 79.4 | 57.6 | 103.3 |
YOLOv7 | × | √ | √ | 30.04 M | 54.3 | 36.7 | 36.8 |
YOLOv7 | √ | √ | √ | 30.07 M | 53.2 | 35.9 | 36.8 |
Table 5. Ablation experiment results (AP for normal, charge trapping, and degradation).
Method | ACmix | GhostNetV2 | NGWD | AP (Normal) | AP (Charge Trapping) | AP (Degradation)
---|---|---|---|---|---|---
YOLOv7 | × | × | × | 0.926 | 0.388 | 0.497 |
YOLOv7 | √ | × | × | 0.829 | 0.657 | 0.746 |
YOLOv7 | × | √ | × | 0.953 | 0.448 | 0.572 |
YOLOv7 | × | × | √ | 0.921 | 0.578 | 0.622 |
YOLOv7 | √ | √ | × | 0.921 | 0.674 | 0.746 |
YOLOv7 | √ | × | √ | 0.879 | 0.679 | 0.783 |
YOLOv7 | × | √ | √ | 0.943 | 0.647 | 0.686 |
YOLOv7 | √ | √ | √ | 0.926 | 0.695 | 0.783 |
References
1. Beni, G.; Hackwood, S. Electro-wetting displays. Appl. Phys. Lett.; 1981; 38, pp. 207-209. [DOI: https://dx.doi.org/10.1063/1.92322]
2. Jackel, J.L.; Hackwood, S.; Veselka, J.J.; Beni, G. Optical waveguide lightmode spectroscopy immunosensors. Appl. Opt.; 1983; 22, pp. 1765-1770. [DOI: https://dx.doi.org/10.1364/AO.22.001765] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18196029]
3. Sondag-Huethorst, J.A.M.; Fokkink, L.G.J. Electrochemical detection of nitroaromatic compounds using a thin-layer cell with a carbon-fiber working electrode. J. Electroanal. Chem.; 1994; 367, pp. 49-57. [DOI: https://dx.doi.org/10.1016/0022-0728(93)03006-B]
4. Berge, B. Electrocapilarity and wetting of insulator film by water. Comptes Rendus Acad. Sci. Paris Sci. II; 1993; 317, pp. 157-163.
5. Giraldo, A.; Aubert, J.; Bergeron, N.; Li, F.; Slack, A.; van de Weijer, M. Transmissive Electrowetting-Based Displays for Portable Multi-Media Devices. SID Symp. Dig. Tech. Pap.; 2009; 40, 479. [DOI: https://dx.doi.org/10.1889/1.3256820]
6. Ku, Y.S.; Kuo, S.W.; Huang, Y.S.; Chen, C.Y.; Lo, K.L.; Cheng, W.Y.; Shiu, J.W. Single-layered multi-color electrowetting display by using ink-jetprinting technology and fluid-motion prediction with simulation. J. Soc. Inf. Disp.; 2011; 19, 488. [DOI: https://dx.doi.org/10.1889/JSID19.7.488]
7. Heikenfeld, J.; Steckl, A.J. Intense switchable fluorescence in light wave coupled electrowetting devices. Appl. Phys. Lett.; 2005; 86, 011105. [DOI: https://dx.doi.org/10.1063/1.1842853]
8. He, T.; Jin, M.; Eijkel, J.C.; Zhou, G.; Shui, L.L. Two-phase microfluidics in electrowetting displays and its effect on optical performance. Biomicrofluidics; 2016; 10, 011908. [DOI: https://dx.doi.org/10.1063/1.4941843] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26909120]
9. Qinkai, L.; Shanling, L.; Zhixian, L.; Zheliang, C.; Tiantian, L.; Biao, T. Electrowetting defect image segmentation based on improved Otsu method. Opto-Electron Eng.; 2020; 47, 190388. [DOI: https://dx.doi.org/10.12086/oee.2020.190388]
10. Xiong, L.; Liao, Q.; Lin, S.; Lin, Z.; Guo, T. Defect Detection of Electrowetting Display Based on Histogram Gradient Weighting. Laser Optoelectron. Prog.; 2021; 58, 1210003.(In Chinese)
11. Chang, Y.C.; Chang, K.H.; Meng, H.M.; Chiu, H.C. A Novel Multicategory Defect Detection Method Based on the Convolutional Neural Network Method for TFT-LCD Panels. Math. Probl. Eng.; 2022; 2022, 6505372. [DOI: https://dx.doi.org/10.1155/2022/6505372]
12. Çelik, A.; Küçükmanisa, A.; Sümer, A.; Çelebi, A.T.; Urhan, O. A real-time defective pixel detection system for LCDs using deep learning based object detectors. J. Intell. Manuf.; 2022; 33, pp. 985-994. [DOI: https://dx.doi.org/10.1007/s10845-020-01704-9]
13. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. Proceedings of the IEEE International Conference on Computer Vision; Venice, Italy, 22–29 October 2017; pp. 2980-2988.
14. Zhao, Q.; Sheng, T.; Wang, Y.; Tang, Z.; Chen, Y.; Cai, L.; Ling, H. M2det: A single-shot object detector based on multi-level feature pyramid network. Proceedings of the AAAI Conference on Artificial Intelligence; Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 9259-9266.
15. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv; 2018; arXiv:1804.02767.
16. Yi, L.; Biao, T.; Guisong, Y.; Yuanyuan, G.; Linwei, L.; Alex, H. Progress in Advanced Properties of Electrowetting Displays. Micromachines; 2021; 12, 206. [DOI: https://dx.doi.org/10.3390/mi12020206] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33670530]
17. Luo, Z.J.; Luo, J.K.; Zhao, W.W.; Cao, Y.; Lin, W.J.; Zhou, G.F. A high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen. Opt. Rev.; 2018; 25, pp. 18-26. [DOI: https://dx.doi.org/10.1007/s10043-017-0382-3]
18. Chiang, H.-C.; Tsai, Y.-H.; Yan, Y.-J.; Huang, T.-W.; Mang, O.-Y. Oil Defect Detection of Electrowetting Display. Optical Manufacturing and Testing XI; Fähnle, O.W.; Williamson, R.; Kim, D.W. SPIE: Washington, DC, USA, 2015.
19. Luo, Z.; Peng, C.; Liu, Y.; Liu, B.; Zhou, G.; Liu, S.; Chen, N. A Low-Cost Drive and Detection Scheme for Electrowetting Display. Processes; 2023; 11, 586. [DOI: https://dx.doi.org/10.3390/pr11020586]
20. Ren, S.; He, K.; Girshick, R.B.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence; IEEE: New York, NY, USA, 2015; Volume 39, pp. 1137-1149.
21. Redmon, J.; Divvala, S.K.; Girshick, R.B.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016; pp. 779-788.
22. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.E.; Fu, C.; Berg, A.C. SSD: Single Shot MultiBox Detector. European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2015.
23. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R.B. Mask R-CNN. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV); Venice, Italy, 22–29 October 2017; pp. 2980-2988.
24. Cai, Z.; Vasconcelos, N. Cascade R-CNN: Delving Into High Quality Object Detection. Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154-6162.
25. Wang, C.; Bochkovskiy, A.; Liao, H.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv; 2022; arXiv:2207.02696.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
In order to overcome the shortcomings of existing electrowetting display defect detection models in terms of computational complexity, structural complexity, detection speed, and detection accuracy, this article proposes an improved YOLOv7-based electrowetting display defect detection model. The model effectively optimizes the detection of display defects, especially small target defects, by integrating GhostNetV2 modules, the Acmix attention mechanism, and the NGWD (Normalized Gaussian Wasserstein Distance) Loss. At the same time, it reduces the parameter size of the network model and improves inference efficiency. This article evaluates the performance of the improved model using a self-constructed electrowetting display defect dataset. The experimental results show that the proposed model achieves a mean average precision (mAP) of 89.5% and an average inference time of 35.9 ms. Compared to the original network, the number of parameters and the computational cost are reduced by 19.2% and 64.3%, respectively. Compared with current state-of-the-art detection network models, the proposed EW-YOLOv7 exhibits superior performance in detecting electrowetting display defects. This model helps to solve the problem of defect detection in the industrial production of electrowetting displays and assists research teams in quickly identifying the causes and locations of defects.