Abstract

Wavelet-based edge detection methods have evolved significantly over the years, contributing to advances in image processing, computer vision, and pattern recognition. This paper proposes a new local optimal spline wavelet (LOSW) and the dual wavelet of the LOSW. A pair of dual filters can then be obtained, which provide distortion-free signal decomposition and reconstruction while offering stronger denoising and feature-capture capabilities. The coefficients of this pair of dual filters are calculated for image edge detection. We propose a new LOSW-based edge detection algorithm (LOSW-ED), which introduces a structural uncertainty–aware modulus maxima (SUAMM) to detect highly uncertain edge samples, ensuring robustness in complex and noisy environments. Additionally, LOSW-ED unifies multi-structure morphology and modulus maxima to fully exploit the complementary properties of the low-frequency (LF) and high-frequency (HF) components, enabling multi-stage differential edge refinement. The experimental results show that the proposed LOSW and the LOSW-ED algorithm achieve better performance in noise suppression and edge structure preservation.

1. Introduction

In recent years, the theoretical and applied research on spline wavelets has garnered widespread attention, particularly in areas such as image denoising, texture analysis, and signal interpolation, achieving significant advancements. Blu and Unser [1] developed an orthogonalization method for parameterized B-splines, creating orthogonal spline wavelet bases, while Unser and Blu [2] generalized B-splines to fractional orders, optimizing localization in time and frequency. Cho and Lai [3] constructed compactly supported orthogonal B-spline wavelets for signal and image processing. Tavakoli and Esmaeili [4] introduced dual-knot B-spline wavelets for non-uniform nodes in MRA. Pellegrino et al. [5] addressed multi-scale wavelet integral evaluation. Wang et al. [6] proposed wavelet-based solutions for elliptic PDEs. The article by Chui and Quak [7] extended spline wavelet theory by constructing wavelet basis functions on finite intervals, addressing key challenges in applying wavelets to finite domains, particularly for solving PDEs. This work demonstrated wavelets’ potential in engineering, physics, and computational mathematics, offering innovative methods for practical applications. Unser [8] highlighted B-spline wavelets in image segmentation and texture classification, introducing a method based on spline wavelet frames. Blu and Unser [9] combined wavelet, fractal, and RBF theories, showing how spline wavelets process complex, non-stationary signals. Their work advanced wavelet applications in signal processing, fractal modeling, and multidimensional interpolation. Levina and Ryaskin [10] designed a robust coding method using Bent functions and spline wavelets, offering strong noise resistance with low complexity. Khan and Ahmad [11] detailed B-spline wavelets and wavelet packets, analyzing their properties and applications. Shi et al. [12] introduced a fractional wavelet transform (FWT) and its sampling theorem for fractional-order signal processing. Liu et al. [13] proposed a semi-orthogonal B-spline wavelet method for fractional equations, enhancing accuracy and efficiency. Zhou et al. [14] developed a BCSW-KLN network for underwater image enhancement, surpassing traditional methods.

Edge detection is a fundamental vision task to extract clear and continuous contour information from the target scene and provides simplified scene representation for advanced vision tasks, including target detection [15,16,17,18], image segmentation [19,20], and 3D reconstruction [21,22]. However, edge detection results are often affected by background noise, which can degrade edge clarity. Therefore, it is crucial to develop an effective method that not only provides thin and continuous edges but also preserves the underlying structures more faithfully.

In general, edge detection methods can be categorized into two groups: traditional methods (such as gradient-based, wavelet-based, and morphology-based approaches) and learning-based methods. The Canny [23] operator combines the gradient and non-maximum suppression methods to obtain refined edges. The Prewitt [24], Sobel [25], and Laplacian [26] operators use convolution kernels to filter the image and identify edges along gradients. Isar et al. [27] proposed optimizing the Canny operator by introducing a two-stage denoising system in the wavelet transform domain to improve its robustness to noise. Despite their simplicity and effectiveness, these traditional methods often suffer from limitations in noisy or complex scenes. To address these challenges, wavelet-based methods [28,29,30,31,32,33,34,35,36] have shown significant promise in overcoming the limitations of traditional operators. Gu et al. [37] and You et al. [30] explored the differentiated processing of LF and HF components to improve edge detection performance. Wang et al. [38] emphasized balancing noise suppression in LF regions and preserving edge details in HF regions. However, these methods often lack a unified approach to fully leverage LF and HF properties, leading to suboptimal performance in scenarios with complex noise or structural ambiguity. Although morphology-based edge detection [39,40] is effective in reducing noise, particularly edge noise, owing to its rational structural element design, it still struggles to restore high-frequency edge details.

Recently, learning-based methods [41,42,43,44,45] have exhibited advanced performance in detection accuracy. Liu et al. [46] advanced feature integration in convolutional frameworks, while Yousaf et al. [47] reviewed state-of-the-art methods and their role in edge detection. Wang et al. [48] optimized edge detection in satellite images with dense networks and weighted loss functions. Cui and Tian [49] applied AdaBoost and decision trees to ultrasound edge detection. Fan et al. [50] integrated CNNs with gradient operators and attention mechanisms, achieving superior dataset performance. However, these methods fail to account for structure preservation that aligns with the human visual system, leading to edges that are either thick or discontinuous. Additionally, J. Chen et al. [51] proposed a method that addresses highlight reflections in industrial metal imaging by using a pretrained model to construct adaptive dynamic masks, preserving texture details. D. Cheng et al. [52] proposed the Light-guided and Cross-fusion U-Net (LCUN), achieving state-of-the-art performance in texture detail recovery and illumination adaptation. J. Xia et al. [53] proposed a novel blind super-resolution framework that integrates meta-learning with Markov Chain Monte Carlo simulation, effectively addressing variations in image degradation for improved reconstruction quality. S. Yu et al. [54] introduced a method using a time-coding metasurface (TCM) to flexibly modulate radar target characteristics in high-resolution range profiles (HRRPs) by generating and controlling false-target scattering phases, offering enhanced capabilities for radar electronic countermeasures through both theoretical and experimental validation. S. Meng et al. [18] presented MINP-Net, a robust infrared small target detection (IRSTD) method combining multiscale gradient and contextual features, noise prediction, and coarse target localization, achieving superior detection accuracy and a better balance between detection probability and false alarms, validated on the new NCHU-Seg dataset, the largest real-world infrared segmentation benchmark. These methods motivate us to focus on more effective edge preservation consistent with the human visual system, not just accuracy. While uncertainty modeling has been successfully applied in neural detectors [55,56], its potential in wavelet-based edge detection methods remains underexplored. A. Khmag [57,58,59] proposed advanced image processing techniques, including a GAN-based semi-soft thresholding method for Gaussian noise removal, a Perona–Malik model with pulse-coupled neural networks for mixed noise reduction, and an R-DbCNN combined with second-generation wavelets for natural image deblurring, all aimed at enhancing image quality and preserving details.

Spline wavelets, due to their smoothness and regularity, can be very effectively applied to tasks such as image denoising, edge detection, and image compression. This provides an important inspiration for us to explore a new local optimal spline wavelet, and propose a new LOSW-ED algorithm. We aim to explore the integration of spline wavelet modulus maxima and morphological processing, along with the novel SUAMM algorithm, to address noise suppression and edge preservation challenges. At the same time, it also provides a new idea for image edge detection by combining wavelet analysis with a deep neural network. Our contributions to this work can be summarized as follows:

  • We propose a new local optimal spline wavelet (LOSW) according to the method of constructing a spline wavelet proposed by Chui [60]. Then, according to the construction method of the dual wavelet, we propose the dual wavelet of LOSW. At the same time, a pair of dual filters can be obtained, which can provide distortion-free signal decomposition and reconstruction, while having stronger denoising and feature capture capabilities. Finally, the coefficients of the pair of dual filters are calculated for image edge detection.

  • We propose LOSW-ED, a unified edge detection algorithm that integrates spline wavelet modulus maxima, morphological processing, and uncertainty modeling to achieve a trade-off between edge preservation and noise suppression. Specifically, we introduce a novel structural uncertainty–aware modulus maxima (SUAMM) algorithm, which targets fuzzy regions in low-frequency components via mean guidance and extracts high-uncertainty edge features using the standard deviation of the modulus of the high-frequency components. Finally, morphological reconstruction fuses the edge maps from wavelet reconstruction and morphology to generate smooth and consistent edges.

  • We devise qualitative and quantitative experiments on multiple image datasets, showcasing the effectiveness of the LOSW-ED in image edge detection. Our LOSW-ED algorithm provides an effective alternative for scenarios with limited data, noisy environments, or computational limitations.

2. Methodology

In this section, we propose a new local optimal spline wavelet (LOSW) for image edge detection. Then, we introduce our proposed LO-spline wavelet edge detector (LOSW-ED).

2.1. A New Local Optimal Spline Wavelet (LOSW)

2.1.1. Local Optimal Spline Algorithm

In our work, the LOSW is based on the local optimal spline algorithm proposed by Chen and Cai [61], as shown in Equation (1); building on this local optimal spline, we propose a new LOSW following Chui's [60] construction method.

(1) L(t) = \left[\frac{9}{5}\,\delta_0 - \frac{19}{15}\,\delta_1 + \frac{8}{15}\,\delta_{3/2} - \frac{1}{15}\,\delta_2\right] * \beta^3(t),

β3(t) is the cubic B-spline basis function with central symmetry, given by the following:

\beta^3(t) = \sum_{i=0}^{n}\frac{(-1)^i}{(n-1)!}\binom{n}{i}\left(t+\frac{n}{2}-i\right)^{n-1}\mu\!\left(t+\frac{n}{2}-i\right), \quad n = 4,\ t \in \mathbb{R},

with μ(t) being the unit step function:

\mu(t) = \begin{cases} 0, & t < 0, \\ 1, & t \ge 0, \end{cases}

and δ_i the average shift operator:

\delta_i * \beta^3(t) = \frac{\beta^3(t-i) + \beta^3(t+i)}{2}.

So, we can obtain the equivalent expression of Equation (1) as follows:

(2) L(t) = \frac{9}{5}\beta^3(t) - \frac{19}{30}\beta^3(t+1) + \frac{4}{15}\beta^3\!\left(t+\frac{3}{2}\right) - \frac{1}{30}\beta^3(t+2) - \frac{19}{30}\beta^3(t-1) + \frac{4}{15}\beta^3\!\left(t-\frac{3}{2}\right) - \frac{1}{30}\beta^3(t-2),

L(t) can inherit nearly all the favorable properties of β3(t), including analyticity, central symmetry, local support, and high-order smoothness.
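For concreteness, the following Python sketch (ours; NumPy assumed, function names hypothetical) evaluates β³(t) through its standard truncated-power expansion and assembles L(t) directly from Equation (2):

```python
import numpy as np
from math import comb, factorial

def beta3(t):
    # Centered cubic B-spline via its truncated-power expansion (n = 4):
    # sum_i (-1)^i / 3! * C(4, i) * (t + 2 - i)^3 * mu(t + 2 - i)
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for i in range(5):
        x = t + 2.0 - i
        out += (-1) ** i / factorial(3) * comb(4, i) * np.where(x >= 0.0, x ** 3, 0.0)
    return out

def L(t):
    # Local optimal spline L(t) from Equation (2)
    t = np.asarray(t, dtype=float)
    return (9/5 * beta3(t)
            - 19/30 * (beta3(t + 1) + beta3(t - 1))
            + 4/15 * (beta3(t + 1.5) + beta3(t - 1.5))
            - 1/30 * (beta3(t + 2) + beta3(t - 2)))
```

Since β³ is supported on [−2, 2], the shifts by ±2 give L(t) compact support on [−4, 4], consistent with the local support property noted above.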

The Fourier transform of L(t) is as follows:

(3) \hat{L}(\omega) = \left[\frac{9}{5} - \frac{19}{15}\cos\omega + \frac{8}{15}\cos\frac{3\omega}{2} - \frac{1}{15}\cos 2\omega\right]\left(\frac{\sin(\omega/2)}{\omega/2}\right)^4.

2.1.2. Constructing Local Optimal Spline Wavelet (LOSW)

Chui [60] and Graps [62] proved that the B-spline βm(t) is the scaling function of the corresponding multi-resolution analysis. Zhou et al. [14] introduced L(t), which is formed by a linear combination of translations and dilations of the B-spline β3(t) and generates a general multi-resolution analysis (GMRA) in L2(R); as a scaling function, L(t) can be used to construct a new local optimal wavelet ψ(t). Let L*(t) be the dual scaling function of L(t) and ψ*(t) be the dual wavelet of ψ(t).

Their two-scale equations are as follows:

L(t) = \sum_{n\in\mathbb{Z}} p_n L(2t-n), \qquad \psi(t) = \sum_{n\in\mathbb{Z}} q_n L(2t-n),
L^*(t) = \sum_{n\in\mathbb{Z}} h_n L^*(2t-n), \qquad \psi^*(t) = \sum_{n\in\mathbb{Z}} g_n L^*(2t-n).

Their representation in the frequency domain is as follows:

(4) \hat{L}(\omega) = P(e^{-i\omega/2})\,\hat{L}\!\left(\frac{\omega}{2}\right), \quad \hat{\psi}(\omega) = Q(e^{-i\omega/2})\,\hat{L}\!\left(\frac{\omega}{2}\right), \quad \hat{L}^*(\omega) = H(e^{-i\omega/2})\,\hat{L}^*\!\left(\frac{\omega}{2}\right), \quad \hat{\psi}^*(\omega) = G(e^{-i\omega/2})\,\hat{L}^*\!\left(\frac{\omega}{2}\right).

The two-scale symbols are introduced by the following:

(5) P(z) = \frac{1}{2}\sum_{n\in\mathbb{Z}} p_n z^n, \quad Q(z) = \frac{1}{2}\sum_{n\in\mathbb{Z}} q_n z^n, \quad H(z) = \frac{1}{2}\sum_{n\in\mathbb{Z}} h_n z^n, \quad G(z) = \frac{1}{2}\sum_{n\in\mathbb{Z}} g_n z^n,

where z = e^{-i\omega/2}.

The matrix

(6) M_{P,Q}(z) = \begin{pmatrix} P(z) & P(-z) \\ Q(z) & Q(-z) \end{pmatrix}

plays an essential role. Hence, we consider its determinant:

\Delta_{P,Q}(z) := \det M_{P,Q}(z).

Provided \Delta_{P,Q}(z) \neq 0, the dual symbols satisfy the following relations on |z| = 1:

(7) H(z) = \frac{Q(-z)}{\Delta_{P,Q}(z)}, \qquad G(z) = -\frac{P(-z)}{\Delta_{P,Q}(z)}.

From Equation (4), the low-pass two-scale symbol of L(t) can be obtained as follows:

(8) P(\omega) = \frac{\hat{L}(2\omega)}{\hat{L}(\omega)} = \frac{27 - 19\cos 2\omega + 8\cos 3\omega - \cos 4\omega}{27 - 19\cos\omega + 8\cos\frac{3\omega}{2} - \cos 2\omega}\,\cos^4\frac{\omega}{2}.

We can find a suitable Q(z) in the frequency domain, satisfying the condition \Delta_{P,Q}(z) \neq 0, as follows:

(9) Q(\omega) = e^{-i\omega}\sin^2\frac{\omega}{2}.

From Equation (7), we can obtain the representations of H(ω),G(ω).

Taking the inverse Fourier transform of Equation (5) yields the following:

(10) p_k = \frac{1}{2\pi}\int_0^{2\pi} P(\omega)\, e^{ik\omega}\, d\omega,

(11) q_k = \frac{1}{2\pi}\int_0^{2\pi} Q(\omega)\, e^{ik\omega}\, d\omega,

where k ∈ ℤ and the indices k are symmetric about the origin. The reconstruction low-pass filter coefficients p_k of the LOSW used in image processing can then be obtained from Equation (10). From Equation (11), we can calculate the values of q_k given in Table 1 and Figure 1, which are the reconstruction high-pass filter coefficients. Meanwhile, the decomposition filter coefficients are obtained from Equations (12) and (13):

(12) h_k = \frac{1}{2\pi}\int_0^{2\pi} H(\omega)\, e^{ik\omega}\, d\omega,

and

(13) g_k = \frac{1}{2\pi}\int_0^{2\pi} G(\omega)\, e^{ik\omega}\, d\omega,

where h_k are the decomposition low-pass filter coefficients and g_k are the decomposition high-pass filter coefficients. Their partial values are shown in Table 2 and Figure 2.

The decomposition and reconstruction processes thus use two different filter pairs: decomposition employs h_n and g_n, while reconstruction employs p_n and q_n.
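As a sanity check on Equations (10)–(13), the integrals can be approximated with a simple rectangle rule. The sketch below is ours (NumPy assumed) and is shown for the high-pass symbol Q(ω) of Equation (9); the same routine applies to P, H, and G once they are coded, and the resulting values depend on the exact symbol normalization, so they need not reproduce Tables 1 and 2 digit for digit.

```python
import numpy as np

def symbol_coeffs(S, ks, n=8192):
    # Rectangle-rule approximation of c_k = (1/2*pi) * integral over [0, 2*pi)
    # of S(w) * exp(i*k*w) dw, as in Equations (10)-(13). Exact for
    # trigonometric polynomials once n exceeds their degree.
    w = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    Sw = S(w)
    return {k: complex(np.mean(Sw * np.exp(1j * k * w))) for k in ks}

# High-pass symbol Q(w) = exp(-i*w) * sin^2(w/2) from Equation (9):
Q = lambda w: np.exp(-1j * w) * np.sin(w / 2.0) ** 2
qk = symbol_coeffs(Q, range(-3, 4))
# Analytically, Q(w) = -1/4 + exp(-i*w)/2 - exp(-2i*w)/4, so the quadrature
# returns q_0 = q_2 = -1/4, q_1 = 1/2, and zero elsewhere.
```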

2.2. Overview of Proposed LOSW-ED

The diagram of the proposed new LOSW-ED algorithm is shown in Figure 3. Specifically, the input image is decomposed using the LOSW to obtain its LF component (cA) and HF components (cH, cV, cD). We apply MSANM to cA to obtain the LF feature map Ed with structural details, followed by SUAMM on the HF components (cH, cV, cD) to detect the finer HF edge map Mf with rich semantic structure. Then, we use Mf to perform the LOSW reconstruction and produce Er. Finally, we obtain the final output Fr by performing morphological reconstruction (MRec) on Er and Ed. Section 2.3, Section 2.4 and Section 2.5 describe the specific algorithmic implementation of each operation in detail.
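A minimal sketch of this data flow follows, using PyWavelets' built-in bior3.5 filter bank as a stand-in for the LOSW filters of Tables 1 and 2 (our assumption; the MSANM and SUAMM steps are elided here and sketched in Sections 2.3 and 2.4):

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)  # placeholder input image

# LOSW decomposition (bior3.5 used as a stand-in filter bank)
cA, (cH, cV, cD) = pywt.dwt2(img, 'bior3.5')

# ... MSANM on cA -> Ed (Section 2.3), SUAMM on cH, cV, cD -> Mf (Section 2.4) ...

# Wavelet reconstruction Er from the (filtered) HF components only
Er = pywt.idwt2((np.zeros_like(cA), (cH, cV, cD)), 'bior3.5')

# ... morphological reconstruction MRec(Ed, Er) -> Fr (Section 2.5) ...
```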

2.3. Multi-Structure Anti-Noise Morphology (MSANM)

Inspired by [29,40], we simplify the design of multi-structure anti-noise morphological operators and combine structural elements in three different directions to comprehensively capture texture information in each direction of the detected object. We denote g(x) as the input gray-scale image and the structural elements as λ(x). Considering the uniformity of the edge response intensity in different directions and the orientation of the wavelet components, which allows edges to be detected efficiently in all directions, we introduce three structural elements from [29] tailored to the LF sub-band, as follows:

(14) \lambda_1 = \mu\begin{pmatrix} 0.5 & 1 & 0.5 \\ 1 & 2 & 1 \\ 0.5 & 1 & 0.5 \end{pmatrix}, \quad \lambda_2 = \mu\begin{pmatrix} 0 & 0.5 & 0 \\ 0.5 & 0.5 & 0.5 \\ 0 & 0.5 & 0 \end{pmatrix}, \quad \lambda_3 = \mu\begin{pmatrix} 0.5 & 0 & 0.5 \\ 0 & 0.5 & 0 \\ 0.5 & 0 & 0.5 \end{pmatrix},

where the weight μ amplifies the regional brightness intensity; we set μ = 2.

Finally, the simplified multi-structure anti-noise operator is expressed as follows:

(15) E_d = O_{\lambda_3}\!\big(E_{\lambda_2}(D_{\lambda_1}(g))\big) - E_{\lambda_3}\!\big(E_{\lambda_2}(D_{\lambda_1}(g))\big),

where Dλ(x) denotes the dilation operation with λ, Eλ(x) denotes the erosion operation with λ, Oλ(x) denotes the opening operation with λ, and Ed denotes the edge image obtained by the multi-structure anti-noise morphological operator. This operator effectively suppresses edge noise, smooths the boundaries of larger objects, and retains important structural information by utilizing the difference produced by the secondary erosion.
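A possible NumPy/SciPy rendering of Equation (15) is sketched below. SciPy's gray-scale morphology accepts non-flat structuring elements through the `structure` argument, where the weights act additively; since the paper does not spell out how μ enters the operators, treating λ1–λ3 this way is our assumption.

```python
import numpy as np
from scipy import ndimage as ndi

mu = 2.0
lam1 = mu * np.array([[0.5, 1.0, 0.5], [1.0, 2.0, 1.0], [0.5, 1.0, 0.5]])
lam2 = mu * np.array([[0.0, 0.5, 0.0], [0.5, 0.5, 0.5], [0.0, 0.5, 0.0]])
lam3 = mu * np.array([[0.5, 0.0, 0.5], [0.0, 0.5, 0.0], [0.5, 0.0, 0.5]])

def msanm(g):
    # Equation (15): dilate with lam1, erode with lam2, then take the
    # difference between an opening and a (secondary) erosion with lam3.
    d1 = ndi.grey_dilation(g, structure=lam1)
    e2 = ndi.grey_erosion(d1, structure=lam2)
    return ndi.grey_opening(e2, structure=lam3) - ndi.grey_erosion(e2, structure=lam3)
```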

2.4. Structural Uncertainty–Aware Modulus Maxima (SUAMM)

In this section, we introduce the novel structural uncertainty–aware modulus maxima (SUAMM), which is more computationally efficient and, owing to a larger receptive field, perceives long-range pixel changes rather than only neighboring pixel differences. Unlike the traditional modulus maxima (MM) [28], SUAMM is a global feature filter with local structure perception, improving the structural uncertainty awareness of the modulus maxima. Specifically, the algorithm consists of three parts: structural uncertainty–aware feature selection (UAFS), the adaptive threshold filter (ATF), and uncertainty–aware modulus maxima. The pseudo-code for SUAMM is given in Algorithm 1. We first calculate the modulus and gradient direction (angle) of the detected wavelet component region, which can be defined as follows:

(16) C_x = \frac{\partial C_1(x,y)}{\partial x},

(17) C_y = \frac{\partial C_1(x,y)}{\partial y},

where C1(x,y) is the LOSW component at the first level of decomposition, and Cx and Cy are the gradients of the different wavelet components in the horizontal and vertical directions. The neighborhood coordinates of each pixel are then determined from the angle. The modulus Mu(x,y) of each component can be expressed as follows:

(18) M_u(x,y) = \sqrt{C_x^2 + C_y^2},

where |Cx| and |Cy| are the modulus components corresponding to the x and y directions. The modulus Ms(x,y) of each high-frequency component is computed in the same way.

The direction of the separate wavelet components Au can be expressed as follows:

(19) A_u = \arctan\frac{C_y}{C_x}.
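In NumPy terms, the modulus and direction fields of Equations (16)–(19) can be sketched as follows (np.gradient and arctan2 are our choices for the discrete derivatives and for a direction mapped to [0, 2π)):

```python
import numpy as np

def modulus_and_angle(C1):
    # Equations (16)-(19): discrete gradients of a wavelet component,
    # their modulus, and the gradient direction mapped to [0, 2*pi).
    Cy, Cx = np.gradient(C1)               # row axis -> y, column axis -> x
    Mu = np.hypot(Cx, Cy)                  # Equation (18)
    Au = np.arctan2(Cy, Cx) % (2 * np.pi)  # Equation (19)
    return Mu, Au
```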

Following [28], we choose ΘcH, ΘcV, and ΘcD to determine whether there is an approximate edge gradient direction and to obtain the corresponding neighbor coordinates.

(20) \Theta_{cH} = \left\{\theta \,\middle|\, \theta \in \left[0, \tfrac{\pi}{8}\right) \cup \left[\tfrac{15\pi}{8}, 2\pi\right) \cup \left[\tfrac{7\pi}{8}, \tfrac{9\pi}{8}\right)\right\}

(21) \Theta_{cV} = \left\{\theta \,\middle|\, \theta \in \left[\tfrac{3\pi}{8}, \tfrac{5\pi}{8}\right) \cup \left[\tfrac{11\pi}{8}, \tfrac{13\pi}{8}\right)\right\}

(22) \Theta_{cD} = \left\{\theta \,\middle|\, \theta \in \left[\tfrac{\pi}{8}, \tfrac{3\pi}{8}\right) \cup \left[\tfrac{9\pi}{8}, \tfrac{11\pi}{8}\right) \cup \left[\tfrac{5\pi}{8}, \tfrac{7\pi}{8}\right) \cup \left[\tfrac{13\pi}{8}, \tfrac{15\pi}{8}\right)\right\}

For adaptive threshold filtering (ATF), the threshold value Tδ is computed as the average of the maximum and minimum standard deviation of modulus values across all pixels (δmax,δmin). This approach adapts to differences in edge strengths across gradient directions, enabling consistent edge detection performance under varying conditions.

(23) T_\delta = \frac{\delta_{\max} + \delta_{\min}}{2},

To detect highly uncertain edges, we conduct uncertainty–aware modulus maxima detection as expressed in Equation (24). A pixel (m,n) is retained as part of the edge structure if it satisfies two conditions: (1) the gradient direction θm,n aligns with a valid direction in the set Θ, i.e., θm,n ∈ Θ; (2) the standard deviation δm,n of the pixel exceeds the threshold Tδ, i.e., δm,n > Tδ. This approach ensures that local maxima in terms of standard deviation, constrained by the gradient direction, are retained while other edge pixels are suppressed. By replacing the traditional gradient modulus comparison with a comparison based on the pixel-level standard deviation and the ATF, the method achieves sparsification of the edge representation while highlighting highly uncertain areas. Non-zero values in the resulting matrices CHm,n, CVm,n, CDm,n form the final set of edge coefficients, representing highly uncertain edge locations in the image.

(24) \left(CH_{m,n}, CV_{m,n}, CD_{m,n}\right) = \begin{cases} \left(CH_{m,n}, CV_{m,n}, CD_{m,n}\right), & \text{if } \theta_{m,n} \in \Theta \text{ and } \delta_{m,n} > T_\delta, \\ 0, & \text{otherwise}, \end{cases}

where θm,n denotes the gradient direction at pixel (m,n), and δm,n represents the standard deviation of the modulus at (m,n). This ensures that only local maxima satisfying the gradient direction constraint and exceeding the threshold Tδ are retained, effectively sparsifying the edges. Non-zero values in CHm,n, CVm,n, CDm,n form the final matrix of edge coefficients. Therefore, the whole SUAMM can be expressed as follows:

(25) M_f\left[C_H, C_V, C_D\right] = \mathcal{F}_{(M_u, A_u)}\left(E_d, c_H, c_V, c_D\right),

where F denotes the SUAMM operator F(Mu,Au)(·), which generates the high-uncertainty HF edge samples Mf. We leverage Mf and the coarse samples of Ed for wavelet reconstruction.
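The selection rule of Equations (20), (23), and (24) can be sketched as follows; only the horizontal direction set ΘcH is shown (the other sets are analogous), and the vectorized masking is our reading of the per-pixel rule:

```python
import numpy as np

def atf_threshold(delta):
    # Equation (23): T_delta = (delta_max + delta_min) / 2
    return 0.5 * (delta.max() + delta.min())

def in_theta_cH(theta):
    # Equation (20): horizontal direction set, theta assumed in [0, 2*pi)
    t = np.mod(theta, 2 * np.pi)
    return ((t < np.pi / 8) | (t >= 15 * np.pi / 8)
            | ((t >= 7 * np.pi / 8) & (t < 9 * np.pi / 8)))

def uamm(coef, theta, delta, in_set=in_theta_cH):
    # Equation (24): keep a coefficient only where the gradient direction
    # is valid and the modulus standard deviation exceeds T_delta.
    keep = in_set(theta) & (delta > atf_threshold(delta))
    return np.where(keep, coef, 0.0)
```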

Algorithm 1 analyzes the structural statistics of the low-frequency Ed, including the mean (μA) and standard deviation (δA). We follow the one-σ principle of the normal distribution. The standard deviation of each high-frequency component modulus, δH(m,n), δV(m,n), δD(m,n), is calculated. The low-frequency component distribution N(μA, δA) defines the candidate edge information. The deviation of the standard deviation is then compared against the error range Tδ. This is equivalent to converting the detection deviation of the edge into a global threshold, which determines the final selected region of the true detected edge.

Algorithm 1 Structural Uncertainty–Aware Modulus Maxima (SUAMM)
  • Input: Ed, cH, cV, cD

  • Output: Modulus maxima maps CH, CV, CD of size (H×W)

  • B ← [7, 7]                      ▹ Block size

  • ST ← B                      ▹ Stride

  • μA ← mean(Ed)

  • δA ← std(Ed)

  • for i ← 1 to H − B[1] + 1 step ST[1] do

  •     for j ← 1 to W − B[2] + 1 step ST[2] do

  •         Compute μA(i,j) for block B(i,j)

  •         if μA(i,j) ∈ (μA − δA, μA + δA), i.e., within one σ of N(μA, δA) then        ▹ UAFS

  •            for each pixel (m,n) in block B(i,j) do

  •                Extract θ(m,n) and δH(m,n), δV(m,n), δD(m,n)

  •                Compute maxima via Equation (24).

  •            end for

  •         else

  •            Set CH(i,j), CV(i,j), CD(i,j) ← 0

  •         end if

  •     end for

  • end for
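Read this way, the UAFS gate of Algorithm 1 amounts to a block scan that keeps only blocks whose mean lies in the one-σ band of N(μA, δA); a minimal sketch under that reading (our interpretation, NumPy assumed):

```python
import numpy as np

def uafs_mask(Ed, B=7):
    # Blocks of Ed whose mean falls within one standard deviation of the
    # global low-frequency statistics are marked as candidate edge regions
    # (our one-sigma reading of the UAFS condition in Algorithm 1).
    H, W = Ed.shape
    muA, dA = Ed.mean(), Ed.std()
    mask = np.zeros((H, W), dtype=bool)
    for i in range(0, H - B + 1, B):       # stride ST = B
        for j in range(0, W - B + 1, B):
            block = Ed[i:i + B, j:j + B]
            if abs(block.mean() - muA) < dA:
                mask[i:i + B, j:j + B] = True
    return mask
```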

2.5. Morphological Reconstruction Strategy

Morphological reconstructive fusion ensures that the final edge map has more consistent and refined edges, leading to more accurate detection of the scene's semantic contours and making them easier to distinguish from the background. The adaptive fusion stream of the LOSW decomposition preserves critical texture information in the image, but unnatural structural textures remain. The wavelet-reconstructed stream captures the highly uncertain structural representation of the image more accurately but with limited texture retention. Therefore, we use Er as a mask aligned with Ed for thin and smooth edge detection while suppressing noise and undesired textures. Fr obtained by morphological reconstruction is as follows:

(26) F_r = \mathrm{imreconstruct}\left(E(E_d), E_r\right),

where imreconstruct denotes morphological reconstruction and E(·) denotes erosion, which supplies the marker image (Er serves as the mask).
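Outside MATLAB, the same operation can be sketched with scikit-image's reconstruction (the seed must not exceed the mask, hence the clipping; using the eroded Ed as the marker follows our reading of Equation (26)):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import reconstruction

def morphological_fusion(Ed, Er):
    # Equation (26): gray-scale reconstruction of (an erosion of) Ed under
    # the mask Er. reconstruction(seed, mask) requires seed <= mask.
    seed = np.minimum(ndi.grey_erosion(Ed, size=(3, 3)), Er)
    return reconstruction(seed, Er, method='dilation')
```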

3. Experiments

To demonstrate the effectiveness of the proposed LO-spline wavelet (LOSW) and our LOSW-ED, we evaluate the performance on BSDS-500 [63] with 200 test images for edge detection, both qualitatively and quantitatively. We compare the results with six traditional methods (Canny, Sobel, Prewitt, WTMM [28], WTMM-f [29], WTMM-e [30]) and five learning-based methods (SE [41], HED [42], BDCN [43], PiDiNet [44], and UAED [45]). Furthermore, we evaluate our proposed algorithm using different wavelets, including Haar, DB2, Coif1, Rbio3.5, Sym4, and Bior3.5. In addition, we consider Gaussian (G), salt and pepper (SP), and speckle (S) noise with a ratio of 0.05 to demonstrate the noise robustness of LOSW-ED. We implemented our algorithm and experiments in MATLAB R2022b and conducted tests on a 12th Gen Intel(R) Core(TM) i5-12490F CPU. We directly tested the methods using their officially provided code. For methods without publicly available code, we reimplemented them based on the descriptions in the respective papers to ensure a fair comparison.

3.1. Evaluation Metrics

We consider RMSE, PSNR, and Figure of Merit (FOM) to evaluate the accuracy and noise robustness of the proposed LOSW-ED. Additionally, we incorporate Visual Information Fidelity (VIF) [64], and BRISQUE [65] scores to assess the fidelity of the detected edge information. Lower RMSE or higher PSNR values, along with higher FOM, indicate better alignment between the detected edges and the ground truth. Higher VIF values and lower BRISQUE scores collectively suggest that the detected edges exhibit greater consistency and naturalness in terms of visual information compared to the ground truth edges.
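For reference, the pixel-wise metrics admit compact definitions; the sketch below uses the standard formulas for RMSE, PSNR, and Pratt's FOM on binary edge maps (α = 1/9 is the conventional scaling, and the paper's exact evaluation protocol may differ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def rmse(pred, gt):
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def psnr(pred, gt, peak=1.0):
    # PSNR in dB for images scaled to [0, peak]
    return float(10.0 * np.log10(peak ** 2 / np.mean((pred - gt) ** 2)))

def fom(pred, gt, alpha=1.0 / 9.0):
    # Pratt's Figure of Merit: d holds, for every pixel, the distance to
    # the nearest ground-truth edge pixel.
    gt, pred = gt.astype(bool), pred.astype(bool)
    d = distance_transform_edt(~gt)
    return float(np.sum(1.0 / (1.0 + alpha * d[pred] ** 2))
                 / max(gt.sum(), pred.sum()))
```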

3.2. Comparison Results

Figure 4 shows the comparison results with other competitive edge detection methods on the BSDS-500 dataset. We can observe that traditional operators tend to detect a significant number of false, noisy, and discontinuous edges; they are easily affected by noise and less robust. In contrast, our proposed method produces smoother contours and removes irrelevant edges from the background. By examining the local edge details, it is evident that the edges reconstructed by the LOSW are finer and preserve the main object structures maximally. From Table 3, the proposed LOSW-ED consistently outperforms traditional edge detection methods (Canny, Sobel, Prewitt) across all noise conditions, achieving the lowest RMSE, highest PSNR, and best FOM. LOSW-ED obtains PSNR gains of 35.2%, 77.4%, and 42.3% over Prewitt under G, SP, and S noise, respectively, and demonstrates remarkable robustness, maintaining high detection accuracy with FOM values of 57.77, 58.00, and 55.64 under the same conditions. Figure 5 and Figure 6 clearly show the improvements in robustness and detection accuracy over the compared methods.

Table 4 clearly shows that LOSW-ED significantly outperforms all WTMM variants in terms of RMSE and PSNR (22.9373 dB). LOSW-ED achieves the highest FOM (57.08), which is a substantial improvement over WTMM-f and WTMM-e, highlighting its superior edge preservation and noise suppression capabilities. LOSW-ED also achieves lower BRISQUE compared to WTMM methods, indicating better perceptual quality. In addition, LOSW-ED demonstrates competitive advantages over advanced learning-based methods, achieving a second-best FOM. Although UAED achieves the highest FOM, LOSW-ED excels in PSNR, indicating a better overall noise suppression and signal fidelity trade-off. LOSW-ED also outperforms learning-based methods in BRISQUE, suggesting a more natural and visually appealing edge detection result. From Figure 7 and Figure 8, it can be seen that our method effectively preserves the edge information of primary semantic structures while detecting more visually consistent and continuous textures. In contrast, other methods, while capable of capturing major edge contours, tend to produce thicker cues and blurred edges.

Table 5, Table 6 and Table 7 demonstrate that LOSW-ED consistently outperforms state-of-the-art learning-based methods in most metrics and surpasses wavelet-based methods in terms of RMSE, PSNR, and FOM while maintaining competitive VIF and BRISQUE scores. For instance, LOSW-ED achieves the lowest RMSE (0.0896 under G noise, 0.0792 under SP noise) and the highest FOM (57.77 under G noise, 58.00 under SP noise), significantly outperforming other methods in edge localization and noise suppression. Although SE achieves the lowest RMSE in some cases, its low FOM (17.07 under G noise) indicates an unbalanced performance due to excessive suppression of edge structures. LOSW-ED leverages a dual-stage reconstruction process (wavelet reconstruction and morphological reconstruction) and the SUAMM mechanism to effectively balance noise suppression and edge preservation, a challenge for traditional wavelet methods. Unlike learning-based approaches, LOSW-ED achieves robust and precise edge detection without relying on large-scale training data, highlighting its adaptability and efficiency. As shown in Figure 9, the runtime increased by 4% under SP noise, while the increases for G and S noises were less than 2%. This demonstrates the strong robustness of the proposed method against noise and blur. Figure 10 and Figure 11 further illustrate that LOSW-ED preserves thinner edges and detailed contours under various noise conditions without introducing blurring, unlike other methods that exhibit significant blurring and incomplete structures. Specifically, SE overly suppresses edge strength, making it difficult to capture complete structures, while HED fails to detect comprehensive structural edges. LOSW-ED’s superior performance stems from its ability to focus on significantly changing edges, reduce noise interference, and refine edge structures.

To evaluate the effectiveness and performance of the LO-spline wavelet (LOSW) for edge detection, Table 8 shows that LOSW achieves the highest FOM under the clean and SP conditions (57.08 and 58.00, respectively) and remains highly competitive under G and S noise, along with competitive PSNR and VIF values, which demonstrates its consistent robustness for edge detection. These results also highlight LOSW's exceptional balance between edge localization accuracy, noise suppression, and structural preservation. Despite slight limitations in perceptual quality metrics, LOSW excels in edge detection accuracy and continuity, making it a highly effective and reliable wavelet for edge detection tasks. Figure 12 shows that our LOSW yields smoother contours, while other wavelets produce more abrupt edges, which verifies the leading performance of LOSW in smoothness and flexibility.

(1) Ablation experiments for SUAMM: To validate the effectiveness of the proposed SUAMM, we conducted ablation studies, including the block size (B) and the impact of the uncertainty–aware modulus maxima. Table 9 shows that B = 7 achieves the highest FOM score, indicating optimal edge retention with minimal inclusion of irrelevant information. As illustrated in Figure 13, smaller block sizes (B = 7) capture more precise semantic structures, while larger blocks introduce unnecessary noise and degrade performance. From Table 10, SUAMM achieves the best FOM and BRISQUE scores, validating its superior performance over the traditional MM and UAFS+MM. Compared to MM, SUAMM significantly improves edge detection quality while incurring a reasonable increase in computational time. When compared to UAFS+MM, SUAMM reduces computation time by more than half, demonstrating its efficiency.

(2) Ablation experiments for unified LOSW-ED design: From Figure 14, incorporating morphological reconstruction (MRec) effectively suppresses irrelevant background edges, while the introduction of mean-based UAFS selection further filters out background textures and enhances edge retention, resulting in a significant improvement in FOM. As shown in Table 11, LOSW-ED achieves the highest FOM and optimal BRISQUE, where the results preserve structural textures with clear and complete edges while maintaining a clean background. These findings validate the robustness of LOSW-ED in balancing structural preservation and noise suppression, showcasing its superiority and efficiency in edge detection tasks.

4. Discussion

The LOSW-ED algorithm demonstrates better performance in edge detection under noisy conditions, excelling in RMSE, PSNR, and FOM. Its robust noise suppression, effective edge preservation, and independence from large-scale training data make it a superior choice compared with both traditional wavelet-based and modern learning-based methods. While there is room for improvement in perceptual quality (BRISQUE) and visual fidelity (VIF), LOSW-ED represents a significant advancement in wavelet-based edge detection. Meanwhile, the LOSW-ED algorithm may perform poorly in the face of other types of noise, such as compound noise, and the SUAMM module may place too much emphasis on detecting "high uncertainty" samples and ignore certain clear but less salient edges, leading to overfitting to a particular uncertainty distribution.

5. Conclusions

In this paper, we propose a local optimal spline wavelet (LOSW) and a new LO-spline wavelet edge detection algorithm (LOSW-ED), which unifies morphology and modulus maxima to differentially process low-frequency and high-frequency edge features and achieve a trade-off between noise suppression and edge structure preservation. We introduce an uncertainty strategy that prioritizes preserving critical edges in high-frequency regions while mitigating irrelevant noise. With the favorable smoothness and flexibility of LOSW, our results exhibit more noise-resistant and consistent contours. Extensive experiments demonstrate the superiority of LOSW-ED with LOSW in detecting smooth and fine edges. However, our method may fail to detect comprehensive edges and has limited generalization. Therefore, our future work will introduce a general LOSW-based detector to extend the applications of LOSW and uncertainty guidance to construct a multi-task framework, covering tasks such as salient object detection, semantic segmentation, and depth estimation.

Author Contributions

Conceptualization, D.Z. (Dujuan Zhou) and Z.C.; methodology, D.Z. (Dujuan Zhou) and Z.Y.; software, Z.Y.; validation, D.Z. (Dujuan Zhou), Z.Y. and D.Z. (Defu Zhu); formal analysis, X.S.; investigation, D.Z. (Dujuan Zhou) and Z.Y.; resources, D.Z. (Defu Zhu); data curation, Z.Y.; writing—original draft preparation, D.Z. (Dujuan Zhou) and Z.Y.; writing—review and editing, D.Z. (Dujuan Zhou) and Z.Y.; visualization, X.S.; supervision, D.Z. (Defu Zhu); project administration, Z.C.; funding acquisition, D.Z. (Defu Zhu). All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

BSDS-500: https://github.com/BIDS/BSDS500 (accessed on 4 May 2012).

Acknowledgments

The authors thank Dan He and Fangli Sun for their careful review and advice.

Conflicts of Interest

Author Zhu Defu was employed by the company Galuminium Group Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The authors declare that this study received funding from the 2024 Guangdong Province Education Sciences Planning Project (Higher Education Special) "Research on the construction of first-class curriculum and teaching, innovation of Probability Theory and Mathematical Statistics" with grant number 2024GXJK703. The funder had the following involvement with the study: Conceptualization, Methodology, Validation, Formal analysis, Data curation, Writing – original draft, Writing – review and editing, Visualization. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures and Tables
Figure 1. Reconstruction lowpass and highpass filter coefficients.

Figure 2. Decomposition lowpass and highpass filter coefficients.

Figure 3. The diagram of the proposed LOSW-ED algorithm.

Figure 4. Visual comparison results on the BSDS-500 dataset with different traditional methods. From top to bottom: (a) Clean Inputs. (b) G(0.05). (c) SP(0.05). (d) S(0.05).

Figure 5. Line chart results of RMSE, PSNR, VIF, BRISQUE, and FOM for wavelet-transform modulus maxima methods under different noise conditions.

Figure 6. Line chart results of RMSE, PSNR, VIF, BRISQUE, and FOM for learning-based methods under different noise conditions.

Figure 7. Visual comparison results on the BSDS-500 dataset with learning-based methods.

Figure 8. Comparison results of the previous wavelet transform modulus maxima methods on the BSDS500 dataset.

Figure 9. Comparison of average processing time under different noise conditions.

Figure 10. Visual comparison results of different detection methods in different noise conditions, including Gaussian (G), salt and pepper (SP), and speckle (S) with a variance ratio of 0.05. From top to bottom: (a) G(0.05). (b) SP(0.05). (c) S(0.05). The parts of the red box line in the figures can better show the differences between the different detection methods in different noise conditions.

Figure 11. Visual comparison results of previous wavelet transform modulus maxima methods in different noise conditions, including Gaussian (G), salt & pepper (SP), and speckle (S) with a variance ratio of 0.05. From top to bottom: (a) G(0.05). (b) SP(0.05). (c) S(0.05). The parts of the red box line in the figures can better show the differences between the previous wavelet transform modulus maxima methods in different noise conditions.

Figure 12. Visual comparison results of different wavelet baselines in different noise conditions, including Gaussian (G), salt and pepper (SP), and speckle (S) with the variance ratio of 0.05. From left to right: (a) Clean images. (b) G(0.05). (c) SP(0.05). (d) S(0.05). The parts of the red box line in the figures can better show the differences between the different wavelets transform.

Figure 13. Filtering results of UAFS under varying block sizes (B). (a) Input; (b) ground truth; (c) B = 7; (d) B = 15; (e) B = 31. When B = 7, the selected regions include more precise semantic structures, whereas larger block sizes introduce more irrelevant information.

Figure 14. Comparison results of the proposed method under different ablations. (a) The clean inputs from the BSDS500 datasets. (b): The corresponding ground truth for the BSDS500 datasets. (c) w/o UAFS and MRec represents selecting modulus maxima based solely on the high-frequency component’s standard deviation in each block. (d) w/o MRec indicates post-processing without using morphological reconstruction. (e) The full algorithm. (d) The introduction of MRec effectively suppresses irrelevant background edges. Considering the trade-off between structural preservation and background noise suppression, (e) achieves a cleaner background and edges compared to (c).

Reconstructed lowpass and highpass filter coefficients.

n 0 1 2 3 4
p_n 1.01312 0.56824 −0.00899 −0.07093 0.00249
q_n 0.08991 −0.01313 −0.13715 −0.01313 0.08991

Decomposed lowpass and highpass filter coefficients.

n 0 1 2 3 4
h_n 0.27711 0.13990 0.01908 −0.02222 −0.01817
g_n 0.64664 −0.30374 −0.11063 0.07094 0.04859

Comparative results of different detection methods under various noise conditions with a variance ratio of 0.05, including Gaussian (G), salt and pepper (SP), and speckle (S) noise, and clean images.

Noise Metrics Canny Sobel Prewitt LOSW-ED
Clean RMSE 0.3407 0.1890 0.1882 0.0732
PSNR(dB) 9.4267 14.5355 14.5749 22.9373
FOM 23.39 50.77 50.94 57.08
G(0.05) RMSE 0.4414 0.1678 0.1677 0.1427
PSNR(dB) 7.1469 15.5641 15.5662 21.0517
FOM 14.90 59.57 59.55 57.77
SP(0.05) RMSE 0.4389 0.2404 0.2384 0.0792
PSNR(dB) 7.1711 12.4133 12.4920 22.1706
FOM 14.56 36.76 37.47 58.00
S(0.05) RMSE 0.4281 0.1822 0.1814 0.0886
PSNR(dB) 7.4185 14.8364 14.8727 21.1666
FOM 15.73 52.57 52.91 55.64

Performance comparison of different methods in clean images.

Methods RMSE PSNR(dB) VIF BRISQUE FOM
WTMM (1992) 0.0970 20.4938 0.1567 41.6000 39.95
WTMM-f (2014) 0.2919 10.8182 0.1965 33.2225 46.35
WTMM-e (2023) 0.0954 20.4710 0.1130 42.6737 34.61
SE (TPAMI’14) 0.0809 22.0406 0.2347 40.7760 52.87
HED (ICCV’15) 0.2652 11.7022 0.1878 47.1500 50.48
BDCN (CVPR’19) 0.3287 9.7496 0.1967 50.2418 43.15
PidiNet (ICCV’21) 0.3104 10.2406 0.1930 48.4802 46.78
UAED (CVPR’23) 0.3030 10.4733 0.1834 49.3506 59.79
LOSW-ED 0.0732 22.9373 0.1909 39.3736 57.08

Performance comparison of different methods under the G(0.05) condition.

Methods RMSE PSNR(dB) VIF BRISQUE FOM
WTMM (1992) 0.1110 19.2015 0.1351 43.4665 42.75
WTMM-f (2014) 0.3035 10.4482 0.1715 39.9838 49.71
WTMM-e (2023) 0.1186 18.5234 0.0606 43.4655 39.39
SE (TPAMI’14) 0.0437 27.5614 0.1615 40.2760 17.07
HED (ICCV’15) 0.1231 18.9511 0.0958 46.5754 15.32
BDCN (CVPR’19) 0.2920 10.8566 0.1947 51.2844 41.78
PidiNet (ICCV’21) 0.3009 10.5214 0.1865 45.6956 46.33
UAED (CVPR’23) 0.2908 10.8314 0.1807 49.7186 58.50
LOSW-ED 0.0896 21.0517 0.1460 43.4695 57.77

Performance comparison of different methods under the SP(0.05) condition.

Methods RMSE PSNR(dB) VIF BRISQUE FOM
WTMM (1992) 0.1157 18.8286 0.1176 43.4712 43.85
WTMM-f (2014) 0.3188 9.9935 0.1512 42.3216 50.99
WTMM-e (2023) 0.1103 19.1621 0.0770 43.4800 35.94
SE (TPAMI’14) 0.0506 26.1554 0.1490 40.0534 14.71
HED (ICCV’15) 0.1865 14.9289 0.1467 45.5593 34.67
BDCN (CVPR’19) 0.2922 10.7729 0.1810 52.2578 34.84
PidiNet (ICCV’21) 0.2902 10.8371 0.1746 43.1546 42.99
UAED (CVPR’23) 0.2833 11.0622 0.1742 49.7773 54.57
LOSW-ED 0.0792 22.1706 0.1621 43.3100 58.00

Performance comparison of different methods under the S(0.05) condition.

Methods RMSE PSNR(dB) VIF BRISQUE FOM
WTMM (1992) 0.1087 19.4230 0.1351 43.4671 43.52
WTMM-f (2014) 0.3054 10.3919 0.1740 37.7041 50.46
WTMM-e (2023) 0.1051 19.5975 0.0877 43.4686 37.07
SE (TPAMI’14) 0.0457 27.1708 0.1727 40.1201 18.63
HED (ICCV’15) 0.2417 12.5634 0.1687 46.2769 44.49
BDCN (CVPR’19) 0.3008 10.4980 0.1602 51.9128 27.40
PidiNet (ICCV’21) 0.2994 10.5607 0.1860 46.4257 45.59
UAED (CVPR’23) 0.2911 10.8217 0.1781 49.8845 57.06
LOSW-ED 0.0886 21.1666 0.1496 43.4630 55.64

Comparison results of LOSW-ED using different wavelets on the BSDS-500 dataset under different noise conditions.

Noise Metrics DB2 Rbio3.5 Coif1 Sym4 Bior3.5 Haar LOSW
Clean Input RMSE 0.0688 0.0747 0.0676 0.0739 0.0721 0.0722 0.0732
PSNR(dB) 23.4730 22.7550 23.6423 22.8308 23.0612 23.0442 22.9373
VIF 0.1894 0.1971 0.1910 0.1895 0.1840 0.1929 0.1909
BRISQUE 39.1703 37.7604 39.6490 38.6668 40.7234 39.3707 39.3736
FOM 51.12 54.52 50.80 56.49 56.11 56.97 57.08
G(0.05) RMSE 0.0777 0.0850 0.0760 0.0867 0.0971 0.0861 0.0896
PSNR(dB) 22.3015 21.5199 22.5201 21.3403 20.3190 21.3921 21.0517
VIF 0.1504 0.1598 0.1503 0.1494 0.1253 0.1494 0.1460
BRISQUE 43.4602 43.4601 43.4612 43.4644 43.5025 43.4664 43.4695
FOM 49.63 58.14 47.40 59.41 50.06 59.43 57.77
SP(0.05) RMSE 0.0793 0.0829 0.0719 0.0827 0.0591 0.0738 0.0792
PSNR(dB) 22.1663 21.7668 23.0533 21.7672 24.8240 22.8320 22.1706
VIF 0.1591 0.1676 0.1693 0.1589 0.1646 0.1782 0.1621
BRISQUE 41.3525 43.3472 41.4242 42.5220 41.3149 42.1397 43.3100
FOM 47.04 57.69 48.11 55.53 49.16 56.87 58.00
S(0.05) RMSE 0.0793 0.0858 0.0772 0.0870 0.0919 0.0859 0.0886
PSNR(dB) 22.1403 21.4686 22.3962 21.3194 20.8386 21.4321 21.1666
VIF 0.1526 0.1603 0.1526 0.1515 0.1367 0.1528 0.1496
BRISQUE 43.4602 43.4593 43.4598 43.4611 43.4669 43.4605 43.4630
FOM 50.14 54.99 48.2000 57.4 49.32 57.23 55.64

Performance comparison for varying block sizes (B).

Metrics B = 7 B = 15 B = 31
RMSE 0.0732 0.0732 0.0732
PSNR(dB) 22.9373 22.9344 22.9372
VIF 0.1909 0.1909 0.1910
BRISQUE 39.3736 39.3719 39.3732
FOM 57.08 57.07 57.07

Ablation results of the traditional modulus maxima (MM) [28] and the proposed SUAMM.

Methods BRISQUE FOM Time (s)
MM 39.3810 48.62 0.0766
UAFS+MM 41.0236 48.65 0.2708
SUAMM 39.3736 57.08 0.1274

Comparison of LOSW-ED variants on RMSE, PSNR, VIF, BRISQUE, and FOM metrics.

Baselines RMSE PSNR(dB) VIF BRISQUE FOM
LOSW-ED(w/o UAFS & MRec) 0.1227 18.3812 0.1892 31.4711 48.94
LOSW-ED(w/o MRec) 0.1227 18.3812 0.1892 31.4715 50.81
LOSW-ED 0.0732 22.9373 0.1909 39.3736 57.08

References

1. Blu, T.; Unser, M. A Complete Family of Scaling Functions: The (α,β)-Splines. IEEE Trans. Signal Process.; 1999; 47, pp. 2839-2848.

2. Unser, M.A.; Blu, T. Construction of fractional spline wavelet bases. Proceedings of the Wavelet Applications in Signal and Image Processing VII; Denver, CO, USA, 19–23 July 1999; Volume 3813, pp. 422-431.

3. Cho, O.; Lai, M.J. A class of compactly supported orthonormal B-spline wavelets. Splines Wavelets; 2005; pp. 123-151.

4. Tavakoli, A.; Esmaeili, M. Construction of Dual Multiple Knot B-Spline Wavelets on Interval. Bull. Iran. Math. Soc.; 2019; 45, pp. 843-864. [DOI: https://dx.doi.org/10.1007/s41980-018-0169-8]

5. Pellegrino, E.; Sorgentone, C.; Pitolli, F. On the Exact Evaluation of Integrals of Wavelets. Mathematics; 2023; 11, 983. [DOI: https://dx.doi.org/10.3390/math11040983]

6. Wang, J.; Shi, W.; Hu, L. Wavelet Numerical Solutions for a Class of Elliptic Equations with Homogeneous Boundary Conditions. Mathematics; 2021; 9, 1381. [DOI: https://dx.doi.org/10.3390/math9121381]

7. Chui, C.K.; Quak, E. Wavelets on a Bounded Interval. Numer. Methods Partial. Differ. Equations; 1994; 10, pp. 225-236.

8. Unser, M. Texture Classification and Segmentation Using Wavelet Frames. IEEE Trans. Image Process.; 1995; 4, pp. 1549-1560. [DOI: https://dx.doi.org/10.1109/83.469936]

9. Blu, T.; Unser, M. Wavelets, Fractals, and Radial Basis Functions. IEEE Trans. Signal Process.; 2000; 48, pp. 3026-3036. [DOI: https://dx.doi.org/10.1109/78.984733]

10. Levina, A.; Ryaskin, G. Robust Code Constructions Based on Bent Functions and Spline Wavelet Decomposition. Mathematics; 2022; 10, 3305. [DOI: https://dx.doi.org/10.3390/math10183305]

11. Khan, S.; Ahmad, M.K. A Study on B-Spline Wavelets and Wavelet Packets. Appl. Math.; 2014; 5, pp. 3001-3010. [DOI: https://dx.doi.org/10.4236/am.2014.519287]

12. Shi, J.; Liu, X.; Sha, X.; Zhang, Q.; Zhang, N. A Sampling Theorem for Fractional Wavelet Transform with Error Estimates. IEEE Trans. Signal Process.; 2017; 65, pp. 4797-4811. [DOI: https://dx.doi.org/10.1109/TSP.2017.2715009]

13. Liu, C.; Zhang, X.; Wu, B. Quasilinearized Semi-Orthogonal B-Spline Wavelet Method for Solving Multi-Term Non-Linear Fractional Order Equations. Mathematics; 2020; 8, 1549. [DOI: https://dx.doi.org/10.3390/math8091549]

14. Zhou, D.; Cai, Z.; He, D. A New Biorthogonal Spline Wavelet-Based K-Layer Network for Underwater Image Enhancement. Mathematics; 2024; 12, 1366. [DOI: https://dx.doi.org/10.3390/math12091366]

15. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell.; 2020; 42, pp. 386-397. [DOI: https://dx.doi.org/10.1109/TPAMI.2018.2844175] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29994331]

16. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 213-229.

17. Guo, Y.; Zou, B.; Ren, J.; Liu, Q.; Zhang, D.; Zhang, Y. Distributed and Efficient Object Detection via Interactions Among Devices, Edge, and Cloud. IEEE Trans. Multimed.; 2019; 21, pp. 2903-2915. [DOI: https://dx.doi.org/10.1109/TMM.2019.2912703]

18. Meng, S.; Zhang, C.; Shi, Q.; Chen, Z.; Hu, W.; Lu, F. A Robust Infrared Small Target Detection Method Jointing Multiple Information and Noise Prediction: Algorithm and Benchmark. IEEE Trans. Geosci. Remote. Sens.; 2023; 61, 5517117. [DOI: https://dx.doi.org/10.1109/TGRS.2023.3295932]

19. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y. et al. Segment anything. Proceedings of the IEEE/CVF International Conference on Computer Vision; Paris, France, 2–3 October 2023; pp. 4015-4026.

20. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference; Munich, Germany, 5–9 October 2015; Proceedings, Part III 18 Springer: Berlin/Heidelberg, Germany, 2015; pp. 234-241.

21. Lv, C.; Lin, W.; Zhao, B. Voxel Structure-Based Mesh Reconstruction From a 3D Point Cloud. IEEE Trans. Multimed.; 2022; 24, pp. 1815-1829. [DOI: https://dx.doi.org/10.1109/TMM.2021.3073265]

22. Fan, H.; Su, H.; Guibas, L.J. A point set generation network for 3d object reconstruction from a single image. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA, 21–26 July 2017; pp. 605-613.

23. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell.; 1986; PAMI-8, pp. 679-698. [DOI: https://dx.doi.org/10.1109/TPAMI.1986.4767851]

24. Prewitt, J.M. Object enhancement and extraction. Pict. Process. Psychopictorics; 1970; 10, pp. 15-19.

25. Sobel, I.E. Camera Models and Machine Perception; Stanford University: Stanford, CA, USA, 1970.

26. Wang, X. Laplacian Operator-Based Edge Detectors. IEEE Trans. Pattern Anal. Mach. Intell.; 2007; 29, pp. 886-890. [DOI: https://dx.doi.org/10.1109/TPAMI.2007.1027]

27. Isar, A.; Nafornita, C.; Magu, G. Hyperanalytic wavelet-based robust edge detection. Remote. Sens.; 2021; 13, 2888. [DOI: https://dx.doi.org/10.3390/rs13152888]

28. Mallat, S.; Hwang, W.L. Singularity detection and processing with wavelets. IEEE Trans. Inf. Theory; 1992; 38, pp. 617-643. [DOI: https://dx.doi.org/10.1109/18.119727]

29. Shao, K.; Zou, Y. An image edge detection algorithm based on wavelet transform and mathematical morphology. Foundations of Intelligent Systems: Proceedings of the Eighth International Conference on Intelligent Systems and Knowledge Engineering, Shenzhen, China, Nov 2013 (ISKE 2013); Springer: Berlin/Heidelberg, Germany, 2014; pp. 485-495.

30. You, N.; Han, L.; Liu, Y.; Zhu, D.; Zuo, X.; Song, W. Research on Wavelet Transform Modulus Maxima and OTSU in Edge Detection. Appl. Sci.; 2023; 13, 4454. [DOI: https://dx.doi.org/10.3390/app13074454]

31. Mittal, M.; Verma, A.; Kaur, I.; Kaur, B.; Sharma, M.; Goyal, L.M.; Roy, S.; Kim, T.H. An efficient edge detection approach to provide better edge connectivity for image analysis. IEEE Access; 2019; 7, pp. 33240-33255. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2902579]

32. Tian, R.; Sun, G.; Liu, X.; Zheng, B. Sobel edge detection based on weighted nuclear norm minimization image denoising. Electronics; 2021; 10, 655. [DOI: https://dx.doi.org/10.3390/electronics10060655]

33. Lu, Q.H.; Zhang, X.M. Multiresolution edge detection in noisy images using wavelet transform. Proceedings of the 2005 International Conference on Machine Learning and Cybernetics; Guangzhou, China, 18–21 August 2005; Volume 8, pp. 5235-5240. [DOI: https://dx.doi.org/10.1109/ICMLC.2005.1527868]

34. Vikas, P.; Lakshmi, M.S.; Rajkumar, M.S.; Prasad, P. Edge detection in noisy images using wavelet transform. Proceedings of the 2015 National Conference on Recent Advances in Electronics & Computer Engineering (RAECE); Roorkee, India, 13–15 February 2015; pp. 36-39. [DOI: https://dx.doi.org/10.1109/RAECE.2015.7510222]

35. Dun, L.; Dong, Y. A Multi-scale Edge Detection Algorithm Based on Wavelet Transform. Proceedings of the 2012 Fifth International Conference on Intelligent Networks and Intelligent Systems; Tianjin, China, 1–3 November 2012; pp. 21-24. [DOI: https://dx.doi.org/10.1109/ICINIS.2012.35]

36. Mhamed, I.B.; Abid, S.; Fnaiech, F. Edge detection using wavelet transform and Neural Networks. Wulfenia J.; 2013; 20, pp. 196-205.

37. Gu, Y.; Lv, J.; Bo, J.; Zhao, B.; Zheng, K.; Zhao, Y.; Tao, J.; Qin, Y.; Wang, W.; Liang, J. An improved wavelet modulus algorithm based on fusion of light intensity and degree of polarization. Appl. Sci.; 2022; 12, 3558. [DOI: https://dx.doi.org/10.3390/app12073558]

38. Wang, T.; Xu, J. Balancing noise suppression and edge detail preservation in wavelet-based edge detection. Signal Process.; 2022; 190, 108281.

39. Shui, P.L.; Wang, F.P. Anti-impulse-noise edge detection via anisotropic morphological directional derivatives. IEEE Trans. Image Process.; 2017; 26, pp. 4962-4977. [DOI: https://dx.doi.org/10.1109/TIP.2017.2726190] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28715330]

40. Yin, Z.; Liu, Z.; Huang, M. An improved morphological edge detection algorithm. Proceedings of the 2022 3rd International Conference on Geology, Mapping and Remote Sensing (ICGMRS); Zhoushan, China, 22–24 April 2022; pp. 144-149.

41. Dollár, P.; Zitnick, C.L. Fast edge detection using structured forests. IEEE Trans. Pattern Anal. Mach. Intell.; 2014; 37, pp. 1558-1570. [DOI: https://dx.doi.org/10.1109/TPAMI.2014.2377715] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26352995]

42. Xie, S.; Tu, Z. Holistically-nested edge detection. Proceedings of the IEEE International Conference on Computer Vision; Santiago, Chile, 7–13 December 2015; pp. 1395-1403.

43. He, J.; Zhang, S.; Yang, M.; Shan, Y.; Huang, T. Bi-directional cascade network for perceptual edge detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Long Beach, CA, USA, 15–20 June 2019; pp. 3828-3837.

44. Su, Z.; Liu, W.; Yu, Z.; Hu, D.; Liao, Q.; Tian, Q.; Pietikäinen, M.; Liu, L. Pixel difference networks for efficient edge detection. Proceedings of the IEEE/CVF International Conference on Computer Vision; Montreal, BC, Canada, 11–17 October 2021; pp. 5117-5127.

45. Zhou, C.; Huang, Y.; Pu, M.; Guan, Q.; Huang, L.; Ling, H. The treasure beneath multiple annotations: An uncertainty–aware edge detector. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Vancouver, BC, Canada, 17–24 June 2023; pp. 15507-15517.

46. Liu, Y.; Cheng, M.M.; Hu, X.; Wang, K.; Bai, X. Richer Convolutional Features for Edge Detection. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Honolulu, HI, USA, 21–26 July 2017; pp. 5872-5881. [DOI: https://dx.doi.org/10.1109/CVPR.2017.622]

47. Yousaf, R.M.; Habib, H.A.; Dawood, H.; Shafiq, S. A Comparative Study of Various Edge Detection Methods. Proceedings of the 2018 14th International Conference on Computational Intelligence and Security (CIS); Hangzhou, China, 16–19 November 2018; pp. 96-99. [DOI: https://dx.doi.org/10.1109/CIS2018.2018.00029]

48. Wang, S.; Zhou, L.; He, P.; Quan, D.; Zhao, Q.; Liang, X.; Hou, B. An Improved Fully Convolutional Network for Learning Rich Building Features. Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium; Yokohama, Japan, 28 July–2 August 2019; pp. 6444-6447. [DOI: https://dx.doi.org/10.1109/IGARSS.2019.8898460]

49. Cui, J.; Tian, K. Edge Detection Algorithm Optimization and Simulation Based on Machine Learning Method and Image Depth Information. IEEE Sensors J.; 2020; 20, pp. 11770-11777. [DOI: https://dx.doi.org/10.1109/JSEN.2019.2936117]

50. Fan, C.; Wang, X.; Qiu, T. Edge Detection via Fusion Difference Convolution. Sensors; 2023; 23, 6883. [DOI: https://dx.doi.org/10.3390/s23156883] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37571663]

51. Chen, J.; Song, Y.; Li, D.; Lin, X.; Zhou, S.; Xu, W. Specular Removal of Industrial Metal Objects Without Changing Lighting Configuration. IEEE Trans. Ind. Inform.; 2024; 20, pp. 3144-3153. [DOI: https://dx.doi.org/10.1109/TII.2023.3297613]

52. Cheng, D.; Chen, L.; Lv, C.; Guo, L.; Kou, Q. Light-Guided and Cross-Fusion U-Net for Anti-Illumination Image Super-Resolution. IEEE Trans. Circuits Syst. Video Technol.; 2022; 32, pp. 8436-8449. [DOI: https://dx.doi.org/10.1109/TCSVT.2022.3194169]

53. Xia, J.; Yang, Z.; Li, S.; Zhang, S.; Fu, Y.; Gündüz, D.; Li, X. Blind Super-Resolution via Meta-Learning and Markov Chain Monte Carlo Simulation. IEEE Trans. Pattern Anal. Mach. Intell.; 2024; 46, pp. 8139-8156. [DOI: https://dx.doi.org/10.1109/TPAMI.2024.3400041]

54. Yu, S.; Guan, D.; Gu, Z.; Guo, J.; Liu, Z.; Liu, Y. Radar Target Complex High-Resolution Range Profile Modulation by External Time Coding Metasurface. IEEE Trans. Microw. Theory Tech.; 2024; 72, pp. 6083-6093. [DOI: https://dx.doi.org/10.1109/TMTT.2024.3385421]

55. Zhou, H.; Li, X.; Zhao, S. Uncertainty–aware edge detection: Modeling ambiguity in multiple annotations. IEEE Trans. Image Process.; 2022; 31, pp. 4457-4470.

56. Cetinkaya, M.; Dogan, A. RankED: A ranking-based edge detection framework addressing class imbalance and label uncertainty. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); Vancouver, BC, Canada, 17–24 June 2023; pp. 2190-2198.

57. Khmag, A.; Kamarudin, N. Natural image deblurring using recursive deep convolutional neural network (R-DbCNN) and second-generation wavelets. Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA); Kuala Lumpur, Malaysia, 17–19 September 2019.

58. Khmag, A. Additive Gaussian noise removal based on generative adversarial network model and semi-soft thresholding approach. Multimed. Tools Appl.; 2023; 82, pp. 7757-7777. [DOI: https://dx.doi.org/10.1007/s11042-022-13569-6]

59. Khmag, A. Natural digital image mixed noise removal using regularization Perona–Malik model and pulse coupled neural networks. Soft Comput.; 2023; 27, pp. 15523-15532. [DOI: https://dx.doi.org/10.1007/s00500-023-09148-y]

60. Chui, C.K. An Introduction to Wavelets; Elsevier: Amsterdam, The Netherlands, 2016.

61. Chen, J.; Cai, Z. A new class of explicit interpolatory splines and related measurement estimation. IEEE Trans. Signal Process.; 2020; 68, pp. 2799-2813. [DOI: https://dx.doi.org/10.1109/TSP.2020.2984477]

62. Graps, A. An introduction to wavelets. IEEE Comput. Sci. Eng.; 1995; 2, pp. 50-61. [DOI: https://dx.doi.org/10.1109/99.388960]

63. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell.; 2010; 33, pp. 898-916. [DOI: https://dx.doi.org/10.1109/TPAMI.2010.161]

64. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process.; 2006; 15, pp. 430-444. [DOI: https://dx.doi.org/10.1109/TIP.2005.859378]

65. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process.; 2012; 21, pp. 4695-4708. [DOI: https://dx.doi.org/10.1109/TIP.2012.2214050] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22910118]

© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).