Academic Editor: Patrick De Baets
Laboratory of Oil Analysis, Shanghai Maritime University, 1550 Lingang Avenue, Shanghai 201306, China
Received 30 October 2015; Revised 24 March 2016; Accepted 31 March 2016
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
With the development of complex machines, the demand for effective and economical maintenance schedules has grown rapidly in recent years. Condition-based maintenance uses real-time data to prioritize and optimize maintenance resources instead of relying on regular periodic inspection. Observing the state of the system is known as condition monitoring. Such a system determines the equipment's health and acts only when maintenance is actually necessary [1, 2]. Sounds, vibrations, and temperatures are informative features often measured to estimate the machine condition, yet wear particle analysis holds its own advantages [3]. Wear particles (or wear debris) are generated as components move relative to each other. The parameters that define wear particles, such as their quantity, shape, and size, reflect the wear modes, wear mechanisms, and severity associated with their generation [4]. Wear debris contained in the lubrication oil carries detailed and important information about the condition of the machine [5]. The particle characteristics are sufficiently specific that the operating wear modes within the machine may be determined, allowing prediction of the imminent behavior of the machine [6, 7]. Wear particles are usually categorized by their wear modes, such as adhesive particles or fatigue particles.
In practice, wear particles are usually gathered by ferrography, which uses a high-gradient magnetic field to attract and hold particles from a fluid sample as it flows down a specially prepared microscope substrate [8]. The schematic of ferrography is shown in Figure 1. Ferrography allowed wear particles to be observed and analyzed for the first time. In the early days of condition monitoring, practitioners judged the wear condition merely by the amount of wear debris. The development of ferrography enabled a wide array of studies that helped identify the characteristics of wear particles and the mechanisms by which they are generated. The problem with currently employed ferrography techniques is that particle morphology assessment, particle classification, and machine status evaluation rely heavily on human expertise, which is time-consuming, costly, and not always reliable [1]. In recent years, tribologists and engineers have made considerable efforts to build autoclassification systems for imaged wear particles. Such a system requires both tribology research and image processing techniques. Most of the work has focused on how to identify wear particles [1]. It is reported that the ant-colony algorithm [9], deterministic tourist walking [5], and the fuzzy c-means algorithm [10] can be used to identify wear particles. In [11], attempts were made to identify adhesive, fatigue, and abrasive particles using area, perimeter, and elongation parameters; the study shows that these simple parameters can be effective for certain types of wear particles, such as abrasive particles and sphere-like fatigue particles. The same research group also compared different dimension reduction methods and texture features [12, 13]. In [14], fractal dimension was used to analyze wear debris surfaces.
In [15], a small online monitoring system for a gearbox was built, and gray level and integrated morphological features were used for wear particle image segmentation.
Figure 1: Picture of analytical ferrogram and its schematic diagram.
(a) Analytical ferrogram
[figure omitted; refer to PDF]
(b) Deposition pattern on a ferrogram
[figure omitted; refer to PDF]
The possibility of using a pattern recognition system for wear particle analysis without the need for a human expert holds great promise in the condition monitoring industry. Still, several problems need to be resolved. One of the first and most important is autosegmentation of wear particle images. In order to autoclassify wear particles, one must first demarcate the objects appearing in digital images. The accuracy of segmentation directly affects the subsequent feature extraction, classification, and identification of wear particles. Therefore, wear particle segmentation is the vital first step of wear particle image analysis. However, existing segmentation methods are not well adapted to complicated ferrograph images for two reasons:
(1) Wear particles deposited on the glass substrate often have a complicated background. Large abnormal particles such as severe sliding particles or chunky fatigue particles are the main targets, yet they are surrounded by countless microparticles that often share a similar color, making accurate segmentation by thresholding difficult.
(2) Since the pictures are microscopic images, the edges of the objects are frequently blurred, making segmentation by edge detection algorithms difficult.
As demonstrated in Figure 2(b), these particles have complicated texture features, their background is filled with microparticles, and some of the edges are blurred. Even in the field of computer vision, segmentation is one of the central and most difficult practical problems [16, 17]. Yet until recently, not much work had been done. In [18], gray level and integrated morphological features are used to segment agglomerated particles, yet the background of the image is rather simple and clean; algorithms adapted to complicated backgrounds still need to be tested. It is reported in [9] that the ant-colony algorithm can complete the task, yet its computing efficiency needs to be improved. Li et al. used morphological erosion and dilation operations on binary images; the edge of a single wear particle could then be detected with the Laplace operation [19]. Yet how to obtain an accurate binary image is in itself a difficult problem.
Figure 2: Examples of wear particle images.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
In this research, attempts have been made to solve the problems mentioned above, improving the results of automatic analysis of ferrography images. We present an improved JSEG algorithm for wear particle color images. The objective of this study is to determine whether this algorithm can be used for particle image segmentation. Parameters were tested according to the characteristics of wear particles, and a computer program was developed to preprocess the images. The image samples used were generated using tribology testing machines. Experimental results show that JSEG is suitable for wear particle image segmentation.
2. Experiment and Method
2.1. Collection of Wear Particle's Images
Wear testing is often carried out to study the tribological performance of certain materials or lubricants. In this research, however, it was done to collect wear particles and build a database of their images.
The adhesive wear testing was carried out on a pin-on-disk machine, which consists of a horizontal rotating disc and a deadweight-loaded pin. The stationary pin is stainless steel (hardness 175 HV) and the disk is made of cast iron (300 HV). Pin and disc diameters were 10 mm and 60 mm, respectively, and the disc average roughness was Ra = 2.5 μm. The tests were conducted in laboratory air at a temperature of 22°C and a relative humidity of 50%. The load was 10 kg, so the contact pressure was around 1.27 MPa. The rotating speed was set to 500 rpm and the duration to 90 minutes, giving an approximate total sliding distance of 7500 m. A solvent-resistant PVC ring was fitted over the circumference of the disk to retain the wear particles. In order to simulate boundary lubrication, a layer of CD40 lubricant was applied to the disk at the beginning. The experiment showed that the lubrication reduced the friction coefficient from an average of 0.4 to no more than 0.2. After the experiment, the disk was washed with acetone and the wear particles were gathered by ferrography.
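As a quick cross-check (ours, not part of the original analysis), the stated contact pressure and sliding distance can be reproduced from the other test parameters:

```python
import math

g = 9.81                                  # m/s^2; the stated 1.27 MPa suggests g ~ 10 was used
load_n = 10 * g                           # 10 kg deadweight
pin_area = math.pi * (0.010 / 2) ** 2     # 10 mm pin diameter, full face contact assumed
pressure_mpa = load_n / pin_area / 1e6    # ~1.25 MPa, close to the stated 1.27 MPa

revs = 500 * 90                           # 500 rpm for 90 minutes
track_radius = 7500 / (revs * 2 * math.pi)  # ~26.5 mm implied wear-track radius
```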
The fatigue wear particles were generated on a four-ball testing machine. The ball material is GCr15 (hardness 700 HV). The maximum load and speed were set to 1800 N and 300 r/min, and the duration was set to 6 hours in order to generate fatigue wear particles. Before the test, the balls were ultrasonically cleaned in petroleum ether. The particles generated were collected, and ferrography was then used to deposit them on a microscopy glass slide. To expand the database and make the results more robust, particles from the lubricant systems of real-world machines were also collected using ferrography.
2.2. JSEG Algorithm Introduction
The JSEG algorithm [20] can be separated into two parts. In the first part, the colors in the image are quantized to several representative classes that can be used to differentiate regions in the image. This quantization is performed in the color space without considering the spatial distribution of the colors. The image pixel values are then replaced by their corresponding color class labels, forming a class-map of the image. The class-map can be viewed as a special kind of texture composition. In the second part, spatial segmentation is performed on the class-map without considering the corresponding pixel color similarity. Analyzing the similarity of the colors and their spatial distributions at the same time is a difficult task, which is why the two parts are decoupled. The schematic of the algorithm is shown in Figure 3.
Figure 3: Schematic of the JSEG algorithm for particles' image.
[figure omitted; refer to PDF]
In order to remove noise in the color image, a filter such as the vector median filter or the directional distance filter is commonly used. These traditional filters are applied uniformly across the whole image regardless of the condition of each pixel; thus they also modify pixels that are not corrupted by noise. In this research, a nonlinear algorithm called peer group filtering is used to improve the quality of segmentation. Let x_0(p) denote the image pixel vector (for a color image it contains three values: red, green, and blue) characterizing the color information at position p, the center of a w × w window. Sort all the pixels in the window according to their distances to x_0(p) in ascending order and denote them x_0(p), x_1(p), ..., x_{w^2-1}(p). The Euclidean distance measure is used here: d_i(p) = ||x_i(p) - x_0(p)||.
The peer group P(p) of size m(p) for x_0(p) is defined as the set of its m(p) nearest neighbors in the window, P(p) = {x_i(p) : 0 ≤ i < m(p)}. The filtering can then be applied to the peer group members only, instead of all pixels in the window, which avoids edge blurring. The remaining problem is how to determine the appropriate size m(p) for each peer group from local statistics. In this study, the cut-off index is chosen by maximizing Fisher's discriminant over the sorted distances, as follows.
Here, the 1D distances d_i(p) are used for Fisher's discriminant estimation [21]. A candidate cut-off position k splits the sorted distances into two groups, d_0(p), ..., d_{k-1}(p) and d_k(p), ..., d_{w^2-1}(p). The criterion to be maximized is J_F(k) = (a_1(k) - a_2(k))^2 / (s_1^2(k) + s_2^2(k)), where a_1(k), a_2(k) are the means of the two groups and s_1^2(k), s_2^2(k) their variances.
The method calculates J_F(k) for each k and takes the cut-off position where J_F(k) is maximal; that is, m(p) = arg max_k J_F(k).
But before searching for the cut-off position, an extra calculation is done to remove impulse noise. The following test is performed on the first and the last r points of the sorted distances; usually r is set to half of w. The first-order differences F_i(p) = d_{i+1}(p) - d_i(p) are calculated.
If F_i(p) exceeds a preset threshold, the end points d_i(p) with i < r or i > w^2 - 1 - r are considered impulse noise and removed. The remaining d_i(p) are used to compute the real peer group. After removing the noise and finding the cut-off position, the center pixel x_0(p) is replaced by the weighted average of its peer group members, x_0'(p) = (Σ_i ω_i x_i(p)) / (Σ_i ω_i), where the ω_i are standard Gaussian weights depending on the relative position of x_i(p) with respect to x_0(p).
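As an illustration, the filtering steps above can be sketched in a few lines of Python. This is a minimal reimplementation for a single window, not the authors' code: the function names are ours, the impulse-noise end-point test is omitted for brevity, and a small constant guards the Fisher criterion against division by zero.

```python
import numpy as np

def fisher_cutoff(d):
    """Return the split index k maximizing Fisher's criterion
    (a1 - a2)^2 / (s1^2 + s2^2) on the sorted 1D distances d."""
    best_k, best_j = 1, -1.0
    for k in range(1, len(d)):
        a, b = d[:k], d[k:]
        j = (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)
        if j > best_j:
            best_j, best_k = j, k
    return best_k

def peer_group_filter_pixel(window, sigma=1.0):
    """Replace the center pixel of a square RGB window by the
    Gaussian-weighted average of its peer group."""
    w = window.shape[0]
    c = w // 2
    win = window.reshape(-1, 3).astype(float)
    # spatial coordinates of each pixel in the window
    yy, xx = np.mgrid[0:w, 0:w]
    coords = np.stack([yy.ravel(), xx.ravel()], axis=1)
    center = window[c, c].astype(float)
    # sort neighbors by Euclidean color distance to the center pixel
    d = np.linalg.norm(win - center, axis=1)
    order = np.argsort(d)
    m = fisher_cutoff(d[order])           # peer-group size m(p)
    peers = order[:m]
    # Gaussian weights by spatial distance to the window center
    r2 = ((coords[peers] - c) ** 2).sum(axis=1)
    wts = np.exp(-r2 / (2 * sigma ** 2))
    return (wts[:, None] * win[peers]).sum(axis=0) / wts.sum()
```

For example, a 5 × 5 window that is uniform except for one outlier pixel is returned essentially unchanged, because the outlier falls outside the peer group.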
After peer group filtering, a clustering technique is used for color quantization. k-means starts with k random clusters and then iteratively moves items between clusters, minimizing the variability within clusters. As demonstrated in Figure 4, Figure 4(a) is the original image, Figure 4(b) is the image after quantization only, and Figure 4(c) is the image after both peer group filtering and color quantization. The quantization reduces the total number of colors to only 15, while the texture of the particle retains almost its original appearance. Considering that a real-time image should be no larger than 400 × 400 pixels (a large number of pixels makes the computational complexity grow to an unacceptable level), the image loses little information after color quantization. Compared with the result of direct quantization, the proposed method smooths the original picture, making it more reliable for subsequent segmentation.
Figure 4: The quantized wear particles' images.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
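The quantization stage described above can be sketched with a plain k-means in Python. This is illustrative code rather than the authors' implementation; the default k = 15 follows the color count mentioned above, while the random initialization and fixed iteration count are our assumptions.

```python
import numpy as np

def kmeans_quantize(img, k=15, iters=20, seed=0):
    """Quantize an H x W x 3 image to k representative colors with plain
    k-means, returning the class-map of color labels and the palette."""
    rng = np.random.default_rng(seed)
    pix = img.reshape(-1, 3).astype(float)
    centers = pix[rng.choice(len(pix), size=k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest center in color space
        d = np.linalg.norm(pix[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = pix[mask].mean(axis=0)
    return labels.reshape(img.shape[:2]), centers
```

The returned label array is exactly the class-map used in the next step of JSEG.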
As mentioned above, after the filtering and color quantization the total variety of colors is reduced to a very small number. All the pixel colors are replaced by their corresponding color class labels. The newly constructed image of those labels is called a class-map. The class-map in fact reflects the texture composition of the image. Figure 5 gives an example of a class-map [20]; for demonstration, there are three labels: red, yellow, and blue. The value of each point (pixel) in the class-map is its image pixel position, a 2D vector z = (x, y). Let Z be the set of all N data points in a class-map, z = (x, y), z ∈ Z, and let m be the mean: m = (1/N) Σ_{z∈Z} z.
Figure 5: An example of different class distributions and their corresponding J values.
(a) Class-map 1
(b) Class-map 2
(c) Class-map 3
Suppose Z is classified into C classes, Z_i, i = 1, ..., C. Let m_i be the mean of the N_i data points of class Z_i: m_i = (1/N_i) Σ_{z∈Z_i} z.
Let S_T = Σ_{z∈Z} ||z - m||^2 be the total variance of all points, and let S_W = Σ_{i=1}^{C} Σ_{z∈Z_i} ||z - m_i||^2, where S_W is the total variance of points belonging to the same class. Define J = (S_T - S_W) / S_W.
For an image consisting of several homogeneous color regions, the color classes are well separated from each other and the value of J is large. On the other hand, if all color classes are uniformly distributed over the image, the value of J tends to be small. For instance, the three images in Figure 5 all have 3 classes, and the number of points in each class is exactly the same; yet the last J is the smallest because the classes are uniformly distributed. In a particle's image, the texture inside the particle is unified in some way, while outside the particle it is completely different. Moreover, the local J value can be very large near region boundaries. We can therefore construct an image whose pixel values correspond to these J values computed over small windows centered at the pixels; such images are called J-images. The higher the local J value is, the more likely the corresponding pixel is near a region boundary.
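The definition of J can be made concrete with a short Python sketch (illustrative code, not the authors' implementation):

```python
import numpy as np

def j_value(class_map):
    """Compute J = (S_T - S_W) / S_W for a 2D class-map, where S_T is the
    total spatial variance of all pixel positions and S_W the variance of
    positions within each class."""
    h, w = class_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    z = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    labels = class_map.ravel()
    m = z.mean(axis=0)
    s_t = ((z - m) ** 2).sum()
    s_w = 0.0
    for c in np.unique(labels):
        zc = z[labels == c]
        s_w += ((zc - zc.mean(axis=0)) ** 2).sum()
    return (s_t - s_w) / s_w
```

As in Figure 5, a map whose two classes occupy separate halves of the window yields a larger J than a checkerboard of the same two classes, because the checkerboard classes are uniformly distributed.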
Given the characteristics of the J-image, a region-growing method is suitable for segmentation because different regions tend to have similar J values. A set of initial seeds is determined by calculating the standard deviation of the local J values: if a certain number of connected pixels (usually about 1% of the pixels of the calculation window) have J values less than the average, they are set as a seed. Then the nonseed pixels grow one by one; the pixel with the minimum local J value is assigned to its adjacent seed, until all pixels are assigned [22].
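A minimal sketch of the seed-determination rule just described follows. This is illustrative only: thresholding at the mean J and a fixed minimum region size are simplifying assumptions of ours, and the subsequent pixel-by-pixel growing step is omitted.

```python
import numpy as np
from collections import deque

def initial_seeds(j_image, min_size=8):
    """Label 4-connected regions of below-average J values as seeds,
    discarding regions smaller than min_size pixels."""
    low = j_image < j_image.mean()
    h, w = low.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if low[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                # flood fill from this unlabeled low-J pixel
                q = deque([(sy, sx)])
                labels[sy, sx] = next_label
                region = [(sy, sx)]
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and low[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            region.append((ny, nx))
                            q.append((ny, nx))
                if len(region) < min_size:
                    for y, x in region:
                        labels[y, x] = 0   # too small to serve as a seed
    return labels
```

A compact low-J patch inside a high-J surround is marked as one seed, while isolated low-J pixels are rejected by the size test.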
3. Results and Discussion
In this section, the results of the proposed method are presented through two experiments: an analysis of JSEG's parameters and a comparison with other methods.
For the JSEG algorithm, the scale of the window used to calculate J is significant. Figure 6 gives examples of J-images computed at different scales, and Table 1 lists the window sizes for computing J values and the region sizes for seed determination at each scale. As can be seen in Figure 6, Figure 6(a) is the original image and the following images are its J-images calculated at scales 1, 2, and 3. The smaller the scale, the more details the J-image retains; a larger scale neglects small details, and the edges become rather coarse.
Table 1: Parameter set for different scale.
Scale | Window (pixels) | Region size (pixels) | Min. seed (pixels) |
0 | 33 × 33 | 256 × 256 | 512 |
1 | 17 × 17 | 128 × 128 | 128 |
2 | 9 × 9 | 64 × 64 | 32 |
3 | 5 × 5 | 32 × 32 | 8 |
Min. seed refers to the minimum number of seed pixels.
Figure 6: J-image calculation at different scales.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
(d) [figure omitted; refer to PDF]
Since the whole seed determination and seed growing process is based on the J-image, the segmentation result varies with the scale. As demonstrated in Figure 7, Figure 7(a) shows the seed generation and growth at scale 1, Figure 7(b) at scale 2, and Figure 7(c) at scale 3. The J-image at scale 3 contains more edge detail, yet it also leads to oversegmentation: the scratches on the particle's surface were not taken as one unified region. This is because, when the calculation window is too small, the local J value rises and falls too quickly, especially when the texture is not fine.
Figure 7: An example of final segmentation results under 3 scales.
(a) Initial seed area
[figure omitted; refer to PDF]
(b) Regions after initial growing
[figure omitted; refer to PDF]
(c) Segmentation result
[figure omitted; refer to PDF]
We now compare the proposed method with traditional thresholding and edge detection methods. A program written in Interactive Data Language (IDL) was developed to separate the particle from the background. During the process, the edge of the particle was obtained using morphological dilation and erosion operations through human-computer interaction, following the steps below:
(i) Set a gray intensity threshold to separate the particle A from the background.
(ii) Using a structuring element B, perform erosion on the target (particle) to get A ⊖ B; the four-neighborhood edge is then β(A) = A − (A ⊖ B).
(iii) Select the edge by region growing.
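Steps (i) and (ii) can be sketched as follows. This is an illustrative NumPy implementation of ours, assuming a dark particle on a light background and a 3 × 3 cross (four-neighborhood) structuring element:

```python
import numpy as np

def binary_erode(mask):
    """Erode a binary mask with a 3 x 3 cross structuring element:
    a pixel survives only if it and its four neighbors are all set."""
    h, w = mask.shape
    p = np.zeros((h + 2, w + 2), dtype=bool)
    p[1:-1, 1:-1] = mask
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def morphological_edge(gray, thresh):
    """Threshold the particle, erode it, and take the set difference
    between the particle and its erosion as the internal boundary."""
    particle = gray < thresh          # dark particle on light background
    return particle & ~binary_erode(particle)
```

For a solid dark square, only its one-pixel-wide boundary ring survives the difference A − (A ⊖ B).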
Some typical examples are shown in Figure 8; Figure 8(a) shows edges acquired by thresholding. Thresholding separates the dark and light regions of the image and thus identifies dark objects on a light background (or vice versa). When objects are large and do not possess much surface detail, segmentation can be imagined as splitting the image into a number of regions, each having a high level of uniformity in some parameter such as brightness, color, texture, or even motion. In the second example, thresholding outperformed the proposed method. However, when the texture is complicated and the color intensities of particle and background are similar, thresholding loses its advantage because it cannot make full use of the texture information. Figure 8(b) shows edges detected by the Sobel operator. Edge detection provides an intrinsically more rigorous means than thresholding for initiating image segmentation, yet the edges obtained are not closed, and the oversegmentation cannot be resolved unless the Canny operator is used, whose parameter settings in turn require human experience.
Figure 8: The comparison between proposed method and traditional method.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
Generally speaking, JSEG performed well on wear particle images. The algorithm makes use of both color and texture information. Yet it still has some limitations. One major problem is the shadows of wear particles: the shadowed areas are usually part of the particle, yet due to the lack of illumination they often cause oversegmentation. This problem can be mitigated by improving the hardware, for example, taking several pictures under different illumination and then using them to compose a new picture. Another problem is that when particles with similar texture are attached to each other, the algorithm takes them not as two particles but as one big particle. This problem might be solved by combining the watershed algorithm or enhancing edge information.
4. Conclusion
In this work, a segmentation algorithm for wear particle images is presented. The segmentation consists of color quantization and spatial segmentation. The advantage of this method is that it is fully unsupervised and can be used directly on color images. Different parameters were tested. Experiments showed that, for regular ferrograph images, a large scale is more suitable; a small scale can lead to oversegmentation. Compared with earlier approaches based on thresholding and edge detection, the proposed method holds great promise for an autoclassification system for wear particles, yet some problems still need to be solved. For instance, the performance is poor on agglomerated particles, especially when the overlapping particles share similar texture properties. To solve this problem, other algorithms such as the watershed method, or other image properties such as intensive color changes, should be taken into account to improve the segmentation result. Further study will focus on solving these problems, and the method will be tested on a real-time machine lubricant system.
[1] G. W. Stachowiak, P. Podsiadlo, "Towards the development of an automated wear particle classification system," Tribology International , vol. 39, no. 12, pp. 1615-1623, 2006.
[2] J. Wang, X. Wang, "A wear particle identification method by combining principal component analysis and grey relational analysis," Wear , vol. 304, no. 1-2, pp. 96-102, 2013.
[3] C. Kowandy, C. Richard, Y.-M. Chen, J.-J. Tessier, "Correlation between the tribological behaviour and wear particle morphology-case of grey cast iron 250 versus Graphite and PTFE," Wear , vol. 262, no. 7-8, pp. 996-1006, 2007.
[4] S. Raadnui, "Wear particle analysis-utilization of quantitative computer image analysis: a review," Tribology International , vol. 38, no. 10, pp. 871-878, 2005.
[5] H. Liu, H. Wei, L. Wei, J. Li, Z. Yang, "An experiment on wear particle's texture analysis and identification by using deterministic tourist walk algorithm," Industrial Lubrication and Tribology , vol. 67, no. 6, pp. 582-593, 2015.
[6] J. A. Williams, "Wear and wear particles-some fundamentals," Tribology International , vol. 38, no. 10, pp. 863-870, 2005.
[7] R. K. Upadhyay, "Microscopic technique to determine various wear modes of used engine oil," Journal of Microscopy and Ultrastructure , vol. 1, no. 3, pp. 111-114, 2013.
[8] B. J. Roylance, "Ferrography-then and now," Tribology International , vol. 38, no. 10, pp. 857-862, 2005.
[9] J. Wang, L. Zhang, F. Lu, X. Wang, "The segmentation of wear particles in ferrograph images based on an improved ant colony algorithm," Wear , vol. 311, no. 1-2, pp. 123-129, 2014.
[10] Z. Yu, O. C. Au, R. Zou, W. Yu, J. Tian, "An adaptive unsupervised approach toward pixel clustering and color image segmentation," Pattern Recognition , vol. 43, no. 5, pp. 1889-1906, 2010.
[11] G. P. Stachowiak, G. W. Stachowiak, P. Podsiadlo, "Automated classification of wear particles based on their surface texture and shape features," Tribology International , vol. 41, no. 1, pp. 34-43, 2008.
[12] G. P. Stachowiak, P. Podsiadlo, G. W. Stachowiak, "Evaluation of methods for reduction of surface texture features," Tribology Letters , vol. 22, no. 2, pp. 151-165, 2006.
[13] G. P. Stachowiak, P. Podsiadlo, G. W. Stachowiak, "A comparison of texture feature extraction methods for machine condition monitoring and failure analysis," Tribology Letters , vol. 20, no. 2, pp. 133-147, 2005.
[14] C. Q. Yuan, J. Li, X. P. Yan, Z. Peng, "The use of the fractal description to characterize engineering surfaces and wear particles," Wear , vol. 255, no. 1-6, pp. 315-326, 2003.
[15] T. Wu, Y. Peng, H. Wu, X. Zhang, J. Wang, "Full-life dynamic identification of wear state based on on-line wear debris image features," Mechanical Systems and Signal Processing , vol. 42, no. 1-2, pp. 404-414, 2014.
[16] X. Liao, H. Xu, Y. Zhou, K. Li, W. Tao, Q. Guo, L. Liu, "Automatic image segmentation using salient key point extraction and star shape prior," Signal Processing , vol. 105, pp. 122-136, 2014.
[17] M. Jungmann, H. Pape, P. Wißkirchen, C. Clauser, T. Berlage, "Segmentation of thin section images for grain size analysis using region competition and edge-weighted region merging," Computers & Geosciences , vol. 72, pp. 33-48, 2014.
[18] T. Wu, H. Wu, Y. Du, N. Kwok, Z. Peng, "Imaged wear debris separation for on-line monitoring using gray level and integrated morphological features," Wear , vol. 316, no. 1-2, pp. 19-29, 2014.
[19] F. Li, C. Xu, G.-Q. Ren, J.-W. Gao, "Image segmentation of ferrography wear particles based on mathematical morphology," Journal of Nanjing University of Science and Technology , vol. 29, no. 1, pp. 70-72, 2005.
[20] Y. Deng, B. S. Manjunath, "Unsupervised segmentation of color-texture regions in images and video," IEEE Transactions on Pattern Analysis and Machine Intelligence , vol. 23, no. 8, pp. 800-810, 2001.
[21] R. O. Duda, P. E. Hart, Pattern Classification and Scene Analysis, John Wiley & Sons, New York, NY, USA, 1973.
[22] Y. Deng, C. Kenney, M. S. Moore, B. S. Manjunath, "Peer group filtering and perceptual color image quantization," in Proceedings of the IEEE International Symposium on Circuits and Systems, vol. 4, pp. 21-24, Orlando, Fla, USA, May 1999.
Copyright © 2016 Hong Liu et al.
Abstract
This study uses the JSEG algorithm to segment wear particle images. Wear particles provide detailed information about the wear processes taking place between mechanical components, and autosegmentation of their images is key to an intelligent classification system. This study examined whether the algorithm can be used for particle image segmentation. Different scales were tested. Compared with traditional thresholding and edge detectors, the JSEG algorithm showed promising results: it offers relatively higher accuracy and can be used on color images instead of gray images with little computational complexity. It can be concluded that the JSEG method is suited to imaged wear particle segmentation and can be put into practical use in a wear particle identification system.