Abstract
Aluminium metal matrix composites are lightweight, corrosion-resistant, and extremely durable. Because of their low mass density, high stiffness, and high specific strength, aluminium alloys reinforced with ceramic particles are increasingly attractive for aircraft, transportation, and industrial applications. This work illustrates an image fusion approach using the discrete wavelet transform (DWT) for detecting the grains present in a hybrid composite as part of its metallographic characterization. The fusion approach combines images of the same composite with different resolutions and intensities, acquired by scanning electron microscope, to produce an integrated image that is better suited for identifying grains and grain boundaries that are difficult to locate in images from either modality alone. Several statistical evaluation measures are used to investigate the effectiveness and significance of the suggested fusion technique, and they indicate that the recommended methodology performs well. According to the statistical analysis, the proposed fusion process successfully retains the maximal content of visual truth in material characterization, allowing for faster and more accurate metallographic characterization of hybrid composites.
Article Highlights
Enhanced Visualization: The DWT image fusion technique improves the visibility of grains and grain boundaries in hybrid composites, making them easier to identify than in individual images.
Improved Accuracy and Faster Analysis: The fused image enhances the accuracy of metallographic characterization, allowing for more precise analysis of the composite's microstructure. The fusion process streamlines the characterization process, leading to quicker analysis times compared to traditional methods.
Statistical Support: Statistical evaluation measures demonstrate the effectiveness and significance of the proposed fusion approach, confirming its ability to retain valuable information from the original images.
Introduction
Researchers have developed automated computerized image analysis methods in recent years that can perform microstructural imaging of a variety of materials more quickly and accurately. Transmission electron microscopy, optical microscopy, and scanning electron microscopy structures can all be processed using such techniques. Using a number of statistical techniques, automated digital image analysis technologies analyze the image and determine the grain size, which makes it easier to detect dislocations and different phases [1, 2, 3–4]. The impact of computer-based image processing technologies exceeds that of traditional manual methods, which are frequently slow and inaccurate. By lowering errors, these methods not only make microstructure processing simpler and faster but also more accurate and predictive. Improvements in computer-based image processing technologies also allow multiple test samples to be processed simultaneously. As the detection threshold rises, the effectiveness of the findings rises as well [5, 6]. Such methods turn a hazy digital microstructure into a clear and usable digital form by using filters and edge-detection operators. The processing stages of the digital image processing technique are essential for measuring the grain size of different materials. Researchers have also identified the advantages of digital image processing, but more study is required to create an efficient digital image processing method.
The goal of this work is to guide the materials scientist in choosing a suitable digital image processing technique. The importance of edge detection based on the DWT, in addition to the application of image fusion, is highlighted in this article. To process the digital image of Al-Al2O3-WS2, a pixel-level image fusion approach is first used to obtain a highly informative image of the hybrid composite, and a DWT-based edge detection approach is then used to retrieve the distinct grain boundaries. Aluminium-based hybrid composites are composites with various reinforcements that have great potential in the automotive and aerospace industries because of their excellent tribological qualities [7, 8–9].
One key issue relating to the matrix phase and reinforcements of the composite is frequently present in the fabrication processes for hybrid composites. Grain size and grain boundary significantly affect attributes like wear resistance in hybrid composites, which could help reduce fuel consumption [10, 11, 12–13]. In order to anticipate the mechanical and tribological features of self-lubricating Al-Al2O3-WS2 composites, efforts are made in this study to find an appropriate operator for edge detection so that the grain boundaries and size of the grain can be estimated.
Methodology
Hybrid composite fabrication
Powder metallurgy was used to fabricate aluminium-based hybrid composites reinforced with WS2 and Al2O3. The samples were prepared using pure aluminium powder with an average particle size of 80 µm, Al2O3 powder with an average particle size of 40 µm, and WS2 powder with an average particle size of 10 µm.
Al-Al2O3-WS2 is produced in three main steps: powder mixing, compaction at a pressure of 620 MPa, and sintering at 600 °C followed by air cooling. To achieve uniform reinforcement distribution in the matrix without particle agglomeration, a mechanical stirrer was used. Finally, microstructural analysis of the sintered pellets involved sampling, abrasive machining, polishing, and etching (with Keller's reagent). The samples were examined using a scanning electron microscope, with the goal of capturing digital images at various magnifications in order to locate the grain boundaries and determine the grain size.
Image fusion using DWT
The act of combining complementary information from several images of the same scene to create a new image that better describes the scene than any of the original images is known as image fusion. This type of imagery is applicable in numerous domains, including computer vision, robotics, remote sensing, digital imaging, and microscopic imaging [14]. It is challenging to produce an image in which every element of the scene is well-defined because optical lenses, especially those with long focal lengths, have a small depth of field; all objects in front of and behind the focus plane appear blurred. Image fusion is a popular approach to this problem: an array of images taken at various focus distances is collected and merged to produce an image with a deep depth of field. Image fusion can be carried out at the pixel, feature, and decision levels of processing [15].
Image fusion at the pixel level means fusion at the lowest level, referring to the merging of measured physical parameters. It generates a fused image in which each pixel is determined from a set of pixels in the various sources, increasing the useful information content of the scene so that the performance of image processing tasks such as segmentation and feature extraction can be improved.
Feature-level fusion first employs feature extraction, for example by segmentation procedures, separately on each source image and then performs a fusion based on the extracted features. Those features can be identified by characteristics such as contrast, shape, size, and texture.
Symbol-level fusion allows the information from multiple images to be used effectively at the highest level of abstraction. The input images are usually processed individually for information extraction and classification.
In recent years, a lot of scholarly attention has been paid to pixel-level image fusion. Fusion methods can be divided into transform domain and spatial domain categories [15]. In spatial domain approaches, source images are fused using local spatial features such as local standard deviation, gradient, and spatial frequency [1]. In transform domain techniques, the image's borders and sharpness are often depicted by projecting the source images onto localized bases. As a result, significant features can be found using an image's transformed coefficients, each of which corresponds to a transform basis [16].
In comparison to other transform domain approaches, the wavelet transform has a few advantages. In the decomposition, it offers directional information and spatial orientation [17]. Furthermore, because the wavelet basis functions are chosen to be orthogonal, each layer of decomposition contains unique information. From the implementation perspective, it is therefore an effective way of retaining information, as it can combine components of comparable scale even when their contrast does not match. This method preserves the spectral information of the source images while merging the detailed information of two SEM images of varying resolutions.
The constraints of the fixed resolution short-time Fourier transform are addressed by the wavelet transform. Because of its multiresolution nature, the wavelet transform is quite popular and has been utilized widely in the field of image processing. Both the frequency and spatial domains show well-localized wavelet coefficients. Additionally, the wavelet decomposition's multi-resolution spirit results in better energy compaction and the decompressed image’s perceptual quality. Since a wavelet basis is made up of functions that have both long and short support (for low and high frequencies, respectively), much more information can be included where it is required. Because of its compactness, the wavelet transform is useful for extracting important characteristics at various resolutions and scales and successfully creates realistic images during fusion.
The discrete wavelet transform (DWT) preserves visual information by enabling image decomposition in various coefficients. In order to properly collect the information in the original images, these coefficients from various images can be suitably merged to create new coefficients. After the coefficients are combined, the inverse discrete wavelet transform (IDWT) is utilized to create the final fused image while maintaining the information contained in the combined coefficients.
When it comes to signal reconstruction, the information that the continuous wavelet transform (CWT) provides is quite redundant. With a significant reduction in calculation time, the DWT offers sufficient information for analysis and synthesis. Filtering techniques yield a signal's time-scale representation. At various scales, filters with varying cut-off frequencies are employed: high-pass filters are used for high-frequency analysis, and low-pass filters for low-frequency analysis. Up-sampling and down-sampling alter the signal's resolution after it has passed through the filters. Up-sampling involves adding additional samples to the signal, while down-sampling involves removing part of the signal's samples.
The signal is decomposed by the DWT into detail information and a coarse approximation. Two sets of functions, known as scaling and wavelet functions, are used in the DWT; they are associated with the low-pass and high-pass filters, respectively, as shown below in Fig. 1.
Fig. 1 [Images not available. See PDF.]
Discrete wavelet transformation
It is possible to iterate the DWT process on the successive approximations, so that the original signal is represented at numerous lower resolutions. We refer to this procedure as multi-level wavelet analysis.
Similar algorithms can be applied to images using two-dimensional wavelets and scaling functions derived from one-dimensional wavelets by a tensor product. In the horizontal (x) and vertical (y) directions, these wavelet functions are simple products of one-dimensional wavelet functions. Consider a digital image F(x, y) of M × N pixels with the same spatial resolution (r) in both directions. The two-dimensional decomposition is the result of two procedures in which every row of the matrix (image) is treated as a one-dimensional signal. In the initial stage, the filters H and L are applied to every row of the matrix, producing two matrices with half as many columns as the original image but the same number of rows. Each of these matrices is then treated as a set of one-dimensional column signals, and the filters H and L are applied to the columns. The end product is four square matrices with half as many rows and half as many columns as the original image; these four outcome matrices (images) correspond to the scaling function and the three wavelet functions. Thus, two-dimensional DWT decomposition splits the approximation coefficients at level j into four parts: the approximations at level j + 1 and details in three different orientations (horizontal, vertical, and diagonal).
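For illustration, the following minimal sketch (assuming Python with the numpy and PyWavelets packages, which are not part of the original work) performs the decomposition just described on a grayscale SEM image, producing one approximation and three detail orientations per level.

```python
import numpy as np
import pywt

# Stand-in for a grayscale SEM image loaded elsewhere (hypothetical data).
image = np.random.rand(512, 512)

# One level of 2-D DWT with a Daubechies wavelet: row-wise and column-wise
# filtering with L and H yields four sub-images of roughly half the size.
cA, (cH, cV, cD) = pywt.dwt2(image, 'db2')
# cA: approximation; cH, cV, cD: horizontal, vertical, diagonal details.

# Multi-level decomposition to level J repeats the procedure on cA.
coeffs = pywt.wavedec2(image, 'db2', level=3)
print(len(coeffs))  # 1 approximation + 3 levels of detail tuples -> 4
```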
In this case, fusion is done at the pixel level. Initially, the original images are appropriately registered so that the corresponding pixels are co-aligned, a prerequisite for effective image fusion. Figure 2 shows an image fusion framework based on the DWT, and the steps followed in the DWT-based image fusion approach are explained below.
In the first phase, the DWT (with a Daubechies filter) is applied to both registered source images, yielding the low-frequency approximation components $A_{dl}^{A}$, $A_{dl}^{B}$ and the high-frequency detail components $D_{dl}^{A}$, $D_{dl}^{B}$, where $dl = 1, 2, 3, 4, \ldots, i$ denotes the decomposition level at resolution $P$ and $i$ represents the maximum decomposition level.

Let $\sigma_{A}^{2}$ and $\sigma_{B}^{2}$ be the variances of $D_{dl}^{A}$ and $D_{dl}^{B}$, respectively. The high-frequency detail components of the SEM images of varied resolutions are fused at level $dl$ by retaining the component with the larger variance:

$$D_{dl}^{F} = \begin{cases} D_{dl}^{A}, & \sigma_{A}^{2} \ge \sigma_{B}^{2} \\ D_{dl}^{B}, & \text{otherwise} \end{cases} \quad (1)$$

and the estimated low-frequency component is

$$A^{F} = w_{A} A^{A} + w_{B} A^{B} \quad (2)$$

where $w_{A}$ and $w_{B}$ are the weighted coefficients, selected in such a manner that $w_{A} + w_{B} = 1$.

Fig. 2 [Images not available. See PDF.]
Steps in DWT based Image Fusion
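A minimal sketch of the fusion rules in Eqs. (1) and (2), assuming Python with numpy and PyWavelets; the function name fuse_dwt, the choice of the db2 wavelet, and the equal default weights are illustrative assumptions rather than details taken from the original work.

```python
import numpy as np
import pywt

def fuse_dwt(img_a, img_b, level=3, wavelet='db2', w_a=0.5, w_b=0.5):
    """Fuse two co-registered grayscale images: Eq. (1) for the details,
    Eq. (2) for the approximation (w_a + w_b must equal 1)."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    # Eq. (2): weighted average of the coarsest approximation coefficients.
    fused = [w_a * ca[0] + w_b * cb[0]]

    # Eq. (1): at every level keep the detail band with the larger variance.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            a if np.var(a) >= np.var(b) else b
            for a, b in ((ha, hb), (va, vb), (da, db))
        ))

    # Inverse DWT reconstructs the fused image from the new coefficients.
    return pywt.waverec2(fused, wavelet)
```

Selecting whole detail bands by variance is the simplest reading of Eq. (1); the coefficient-wise salience rule of Eq. (5) below refines this choice.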
Optimization of decomposition levels
Image fusion performance may vary depending on the wavelet transform's decomposition level. The number of decomposition levels must be optimized utilizing optimization methods in order to produce the best fusion results. Figure 3 illustrates the process of employing multi-objective optimization methods to optimize the number of decomposition levels in a wavelet-based image fusion.
Step 1: For the wavelet-based image fusion, let J be the number of decomposition levels. The range of values for J is

$$0 \le J \le \log_{2} N \quad (3)$$

where the original images are N × N pixels in size. If J is too high, the sub-images' pixels become distorted and the decomposition cannot exploit the many scales, so the number of decomposition levels is a crucial factor in wavelet-based image fusion. When J is equal to zero, fusion takes place in the spatial domain. As a result, a single fusion model that incorporates both wavelet-based and spatial fusion may be obtained.
Step 2: The one approximation sub-band and the 3 × J detail sub-bands of the registered images A and B are determined by computing their respective DWTs up to the designated decomposition level.
Step 3: Salient features in each original image are found from the wavelet detail coefficients, and these features determine the fused image. A salient feature is defined as the local energy in the neighborhood of a coefficient:

$$E_{X}(s, t) = \sum_{p} \sum_{q} C_{X}^{j}(s + p, t + q)^{2} \quad (4)$$

where $C_{X}^{j}(s, t)$ is the wavelet coefficient of image X at location (s, t) and level j, and (p, q) ranges over a window of coefficients around the current coefficient. The fused coefficient is replaced with the most salient coefficient, and the less salient coefficient is discarded. The selection mode is implemented as follows, together with the component with the higher variance value as shown in Eq. 1:

$$C_{F}^{j}(s, t) = \begin{cases} C_{A}^{j}(s, t), & E_{A}(s, t) \ge E_{B}(s, t) \\ C_{B}^{j}(s, t), & \text{otherwise} \end{cases} \quad (5)$$

where $C_{F}^{j}$ are the final fused wavelet coefficients, and $C_{A}^{j}$ and $C_{B}^{j}$ are the current coefficients of A and B at level j.

Step 4: To compute the approximation of the fused image F from the wavelet approximation coefficients, weighted factors are employed as expressed in Eq. 2.
Step 5: The fused image F is obtained by finding the inverse transform using the new sets of coefficients. Multi-objective optimization methods are employed at this decomposition level to obtain the best evaluation metrics and the optimal decision variables for wavelet-based image fusion.
Step 6: The loop terminates if the maximum number of decomposition levels is reached; otherwise, J is incremented by one and the procedure returns to Step 2.
Step 7: The best decomposition level is chosen based on the evaluation metrics and requirements, as sketched in the example below.
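A minimal sketch of the loop in Steps 1–7, assuming Python with numpy and the fuse_dwt helper sketched above; using image entropy as the single selection criterion is an illustrative simplification of the multi-objective optimization described here.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the image's intensity histogram (cf. Eq. 13)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def best_decomposition_level(img_a, img_b, n=512):
    """Try every admissible J from Eq. (3) and keep the best-scoring result."""
    j_max = int(np.log2(n))
    best_j, best_score, best_img = 0, -np.inf, None
    for j in range(1, j_max + 1):                 # Step 6: increment J
        # (PyWavelets warns if J exceeds the level supported by the image size.)
        fused = fuse_dwt(img_a, img_b, level=j)   # Steps 2-5 (sketched earlier)
        score = image_entropy(fused)              # evaluation metric
        if score > best_score:                    # Step 7: keep the best J
            best_j, best_score, best_img = j, score, fused
    return best_j, best_img
```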
The fused result in the spatial domain is generated by applying the inverse wavelet transform, with the same fusion rule, to all spectral components of the fused image. The results are shown below in Fig. 4.
Fig. 3 [Images not available. See PDF.]
Flowchart for optimization of decomposition level of DWT
Fig. 4 [Images not available. See PDF.]
SEM images of aluminium metal matrix composites a with magnification × 200 b with magnification × 500 c Fused image
Grain detection
The information in an image is reflected in its edges, which contain the image's essential character. Edges in an image correspond to intensity discontinuities caused by differences in object surface reflectance, variable lighting conditions, or different distances and orientations of objects from the viewer [18, 19]. In image analysis and computer vision, edge detection is a common problem with critical implications. Edges, however, appear at a variety of resolutions or scales that reflect gradients or transitions of various intensities [20]. The spatial gradient is perhaps the most widely used approach for recognizing edges in an image; it thresholds the local extrema of the gradient to locate edges in the image. The Canny edge detector was selected for a variety of reasons, including the fact that it is less likely to be "fooled" by noise and, as a result, more capable of identifying genuine weak edges, which are crucial for identifying grain edges. The double thresholding of the Canny edge detector is essential for detecting edges: a pair of thresholds is applied to differentiate between strong and weak edges, and only weak edges that are connected to strong edges are reported [21, 22–23]. This operator employs a multi-stage methodology as explained below.
Step-1: Each high-frequency component of the fused image is smoothed with a Gaussian filter to reduce noise and unwanted details and textures:

$$S(x, y) = G(x, y, \sigma) * I(x, y) \quad (6)$$

where $S(x, y)$ is the smoothed result of the image I and $G(x, y, \sigma)$ is the Gaussian kernel defined as

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}} \exp\!\left(-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right) \quad (7)$$

Step-2: The gradient of the filtered image is computed using any of the gradient operators, such as Sobel or Prewitt, to obtain the gradient magnitude

$$M(x, y) = \sqrt{g_{x}^{2}(x, y) + g_{y}^{2}(x, y)} \quad (8)$$

and direction

$$\theta(x, y) = \tan^{-1}\!\left(\frac{g_{y}(x, y)}{g_{x}(x, y)}\right) \quad (9)$$

Step-3: A threshold is applied to the gradient image to obtain a binary image

$$M_{T}(x, y) = \begin{cases} M(x, y), & M(x, y) > T \\ 0, & \text{otherwise} \end{cases} \quad (10)$$

where the threshold T is selected in such a way that most of the noise content is suppressed while all edge elements are preserved.

Step-4: Non-maxima pixels in the edges of $M_{T}(x, y)$ are suppressed to thin the edge ridges. Each non-zero pixel of $M_{T}(x, y)$ is checked to determine whether it is greater than its two neighbors along the gradient direction $\theta(x, y)$; if so, it is kept unchanged, otherwise it is set to 0.
Step-5: To obtain two binary images B1 and B2, the previous result is then thresholded with two separate thresholds T1 and T2 (where T1 < T2).
Step-6: Finally, the edge segments in B2 are linked to form continuous edges. This is done by tracing each segment in B2 to its end and then examining its neighbors in B1 to find any edge segment in B1 that fills the gap until another edge segment in B2 is reached.
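The stages above can be reproduced compactly with OpenCV, whose Canny routine internally performs the gradient computation, non-maximum suppression, and double-threshold hysteresis of Steps 2–6. The sketch below (Python with opencv-python and numpy; the kernel size, sigma, and threshold values are illustrative assumptions) is one possible realization, not the exact implementation used in this work.

```python
import cv2
import numpy as np

def canny_edge_map(band, sigma=1.0, t1=50, t2=150):
    """Apply Steps 1-6 to one high-frequency component of the fused image."""
    # Scale the wavelet band to 8-bit so the thresholds T1 < T2 are meaningful.
    img = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Step-1: Gaussian smoothing (Eqs. 6-7).
    smoothed = cv2.GaussianBlur(img, (5, 5), sigma)
    # Steps 2-6: gradient, thresholding, non-maximum suppression, hysteresis.
    return cv2.Canny(smoothed, t1, t2)
```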
The illustration in Fig. 5 depicts a block diagram for grain edge extraction in SEM images. As per this method, the fused image is decomposed using the DWT up to the 3rd decomposition level, and its transformed spectral components are then processed by the Canny edge detector to extract the required edge map of the grains, as shown in Fig. 6(a). Following edge extraction, the edge map is cleaned by applying morphological operators such as erosion and dilation, after which the average grain size can be calculated from the number of grains in the source image of the hybrid composite.
Fig. 5 [Images not available. See PDF.]
Edge detection approach
Fig. 6 [Images not available. See PDF.]
a Detected edge map using canny edge detector b Detected grains with specific edges with morphological operation
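As a final illustration of the pipeline in Fig. 5, the sketch below (assuming Python with opencv-python, numpy, and scipy; the microns-per-pixel factor is a placeholder that depends on the SEM magnification and is not taken from the original work) cleans the edge map with morphological operations and estimates the average grain size from the detected grains.

```python
import cv2
import numpy as np
from scipy import ndimage

def average_grain_size(edge_map, microns_per_pixel=0.1):
    """Clean the edge map and estimate the mean equivalent grain diameter."""
    kernel = np.ones((3, 3), np.uint8)
    # Dilation followed by erosion (closing) bridges small gaps in the edges.
    edges = cv2.morphologyEx(edge_map, cv2.MORPH_CLOSE, kernel)
    # Grains are taken as the connected regions enclosed by the edge network.
    grains, n_grains = ndimage.label(edges == 0)
    areas = ndimage.sum(np.ones_like(grains), grains,
                        index=range(1, n_grains + 1))
    # Equivalent-circle diameter of each grain, averaged over all grains.
    diameters = 2.0 * np.sqrt(areas / np.pi) * microns_per_pixel
    return float(np.mean(diameters))
```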
Comparative analysis
The edge map resulting from the fused source image of the hybrid composite, obtained with the approach suggested in the previous section, is then compared with the edge map produced by the traditional and efficient Canny edge detection operator in the spatial domain. The comparison is based on statistical measures such as the peak signal-to-noise ratio (PSNR), entropy, and the structural similarity index (SSIM).
Peak signal to noise ratio (PSNR)
One way to think of PSNR is as a gauge of how effective a detection method is. It is related to the root mean square error (RMSE), which is

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\bigl(R(x, y) - O(x, y)\bigr)^{2}} \quad (11)$$

where $R(x, y)$ is the accepted image for comparison and $O(x, y)$ is the image that was recognized as the output, and

$$\mathrm{PSNR} = 20 \log_{10}\!\left(\frac{L}{\mathrm{RMSE}}\right) \quad (12)$$

where L is the image's highest possible intensity value. An improved edge detection procedure will have a higher PSNR value.
Entropy
Entropy refers to the predictive value of the information content. The entropy [24] can be used to quantify image similarity. In several significant applications of performance evaluation, Shannon entropy is frequently used and is stated as follows:

$$H(X) = -\sum_{i} p(x_{i}) \log_{2} p(x_{i}) \quad (13)$$

where $H$ denotes the entropy, $X$ denotes a discrete random variable with values $x_{i}$, and $p(x_{i})$ indicates the likelihood derived from the 2D joint histogram of the test and reference images. The higher its value, the more informative and value-added the detection method is.

Structural similarity index (SSIM)
The structural similarity index essentially measures three basic properties of an image: luminance, contrast, and structure. The SSIM is the multiplicative combination of all these three terms [25].
$$\mathrm{SSIM}(x, y) = \bigl[l(x, y)\bigr]^{\alpha} \cdot \bigl[c(x, y)\bigr]^{\beta} \cdot \bigl[s(x, y)\bigr]^{\gamma} \quad (14)$$

where $l(x, y)$, $c(x, y)$, and $s(x, y)$ are the terms for luminance, contrast, and structure, respectively. A higher value of SSIM indicates a better performance of the fusion process.
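A minimal sketch of the three evaluation measures, assuming Python with numpy and scikit-image (whose peak_signal_noise_ratio and structural_similarity routines correspond to Eqs. 11–12 and 14); 8-bit grayscale edge maps are assumed.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def shannon_entropy(img, bins=256):
    """Eq. (13): Shannon entropy of the image's normalized histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def evaluate(reference, detected):
    """PSNR, entropy, and SSIM of a detected edge map against a reference."""
    return {
        'psnr': peak_signal_noise_ratio(reference, detected, data_range=255),
        'entropy': shannon_entropy(detected),
        'ssim': structural_similarity(reference, detected, data_range=255),
    }
```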
Table 1 reports the results of analyzing the edge maps obtained from the two edge detection approaches using the three statistical measures indicated above; the average size of the grains present in the fused image is also determined for the proposed methodology and is found to be 10.887 μm.
Table 1. Performance evaluation based on statistical measures

| Methodology | PSNR | Entropy | SSIM |
|---|---|---|---|
| Canny Edge Operator | 32.3149 | 0.5021 | 0.7126 |
| Proposed DWT-based Edge Detection Approach | 43.5344 | 0.7609 | 0.8139 |
Conclusion
The algorithm used in this research is a straightforward but efficient edge detection method that incorporates additional information from the original images. Although wavelet analysis is well established in other fields of engineering, it is an emerging tool in materials science. The technique has proved very useful for calculating the average grain size, validated with statistical measures such as PSNR and entropy. According to the findings of the statistical analysis, the suggested DWT-based fusion approach produces enhanced images with better visibility even from low-quality source images. Furthermore, the results of this work demonstrate the potential of image processing techniques for microstructural analysis of self-lubricating hybrid composites. To further demonstrate and support the benefits of the suggested strategy, the mechanical and tribological properties of self-lubricating composites made with other kinds of self-lubricating materials can also be examined.
Authors contributions
T.S. wrote the initial draft of the manuscript. S.R.B. and K.D. reviewed the initial draft and revised it into a presentable form. T.S. prepared the figures depicted in the manuscript. The required analysis was performed by S.R.B. and K.D.
Funding
Open access funding provided by Siksha 'O' Anusandhan (Deemed To Be University). No funding was received to assist with the preparation of this manuscript.
Data availability
Data supporting this study are openly available from (SCOPUS) at ().
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. X-bo, L; Gang, YU; Jian, GUO; Shang, Q-y; Zhen-guo, Z; Yi-jie, Gu. Analysis of laser surface hardened layers of automobile engine cylinder liner. J Iron Steel Res Int; 2007; 14,
2. Majumdar, P; Xia, H. A Green’s function model for the analysis of laser heating of materials. Appl Math Model; 2007; 31,
3. Sabat, RK; Sahoo, SK. An ‘ex situ’ electron backscattered diffraction study of nucleation and grain growth in pure magnesium. Mater Des; 2017; 116, pp. 65-76. [DOI: https://dx.doi.org/10.1016/j.matdes.2016.11.091]
4. Sabat, RK; Panda, D; Sahoo, SK. Growth mechanism of extension twin variants during annealing of pure magnesium: an ‘ex situ’ electron backscattered diffraction investigation. Mater Charact; 2017; 126, pp. 10-16. [DOI: https://dx.doi.org/10.1016/j.matchar.2017.02.008]
5. Li, WZ; Ma, HX; Huang, L. Heritability and damage of reticulate structure of high carbon chromium bearing steel. Heat Treat Met; 2012; 37,
6. Wei, W; Xin, Y. Rapid, man-made object morphological segmentation for aerial images using a multi-scaled, geometric image analysis. Image Vis Comput; 2010; 28,
7. Sharma, P. A review on metal matrix hybrid composite Al/SiC/Gr. Int J Electro Mech Mech Behav; 2019; 5,
8. Sahoo, S; Samal, S; Bhoi, B. Fabrication and characterization of novel Al-SiC-hBN self-lubricating hybrid composites. Mater Today Commun; 2020; 25, [DOI: https://dx.doi.org/10.1016/j.mtcomm.2020.101402] 101402.
9. Biswal, SR; Sahoo, S. Fabrication of WS2 dispersed Al-based hybrid composites processed by powder metallurgy: effect of compaction pressure and sintering temperature. J Inorg Organometallic Polym Mater; 2020; 30, pp. 1-8.
10. Smagorinski, ME; Tsantrizos, PG; Grenier, S; Cavasin, A; Brzezinski, T; Kim, G. The properties and microstructure of Al-based composites reinforced with ceramic particles. Mater Sci Eng A; 1998; 244,
11. Radha, A; Vijayakumar, KR. An investigation of mechanical and wear properties of AA6061 reinforced with silicon carbide and graphene nano particles-particulate composites. Mater Today Proc; 2016; 3,
12. Mehedi, MA; Bhadhon, KMH; Haque, MN. Improved wear resistance of Al-Mg alloy with SiC and Al2O3 particle reinforcement. JOM; 2016; 68,
13. Macke, A; Schultz, BF; Rohatgi, P. Metal matrix composites. Adv Mater Process; 2012; 170,
14. Goshtasby, AA; Nikolov, S. Image fusion: advances in the state of the art. Information Fusion; 2007; 8,
15. Mitianoudis, N; Stathaki, T. Pixel-based and region-based image fusion schemes using ICA bases. Information Fusion; 2007; 8,
16. Li, H; Manjunath, B; Mitra, S. Multisensor image fusion using the wavelet transform. Graph Models Image Process; 1995; 57,
17. Pajares, G; Cruz, J. A wavelet-based image fusion tutorial. Pattern Recogn; 2004; 37,
18. Flipon, B; Grand, V; Murgas, B; Gaillac, A; Nicolaÿ, A; Bozzolo, N; Bernacki, M. Grain size characterization in metallic alloys using different microscopy and post-processing techniques materials characterization; 2021; Amsterdam, Elsevier:
19. Patrick, MJ; Eckstein, JK; Lopez, JR; Toderas, S; Asher, SA; Whang, SI; Levine, S. Automated grain boundary detection for bright-field transmission electron microscopy images via U-net. Microsc Microanal; 2023; [DOI: https://dx.doi.org/10.1093/micmic/ozad115]
20. Podor, R; Le Goff, X; Lautru, J; Brau, HP; Massonnet, M; Clavier, N. A semi-automatic method for the segmentation of grain boundaries. J Eur Ceramic Soc; 2021; 41,
21. Wu, X; Zhou, R; Xu, Y. A method of wavelet-based edge detection with data fusion for multiple images. Proceedings of the 3rd World Congress on Intelligent Control and Automation; 2000.
22. Canny, J. A computational approach to edge detection. IEEE Trans Pattern Analysis Machine Intell; 1986; 6, pp. 679-698. [DOI: https://dx.doi.org/10.1109/TPAMI.1986.4767851]
23. Xishan, T. A novel image edge detection algorithm based on Prewitt operator and wavelet transform. Int J Adv Comput Technol; 2012; 4,
24. Yang, DW; Li, HW; Peng, HM. Reduced-reference image quality assessment based on roberts derivative statistic model of natural image. J Comput Inform Syst; 2012; 8, pp. 1837-1844.
25. Wang, Z; Bovik, AC; Sheikh, HR; Simoncelli, EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process; 2004; 13,