
Abstract

In light of the computational efficiency bottleneck and inadequate regional feature representation in traditional global data approximation methods, this paper introduces the concept of non-uniform partition to transform global continuous approximation into multi-region piecewise approximation. Additionally, we propose an image representation algorithm based on linear canonical transformation and non-uniform partitioning, which enables the regional representation of sub-signal features while reducing computational complexity. The algorithm first demonstrates that the two-dimensional linear canonical transformation series has a least squares solution within each region. Then, it adopts the maximum likelihood estimation method and the scale transformation characteristics to achieve conversion between the nonlinear and linear expressions of the two-dimensional linear canonical transformation series. It then uses the least squares method and the recursive method to convert the image information into mathematical expressions, realize image vectorization, and solve the approximation coefficients in each region more quickly. The proposed algorithm better represents complex image texture areas while reducing image quality loss, effectively retains high-frequency details, and improves the quality of reconstructed images.


1. Introduction

Theoretical tools such as transforms play a crucial role in engineering applications, particularly in signal processing. Key transforms include the Fourier transform, the fractional Fourier transform, and the linear canonical transform [1,2]. These tools are widely utilized in signal processing and are essential in various fields, including optics, image processing, and pattern recognition. However, with the continuous improvement and development of technology and theory, the analysis and processing of two-dimensional signals have gradually increased, and simple one-dimensional signal processing tools are not fully applicable in some cases. For instance, in image processing, a two-dimensional signal must be decomposed into multiple independent one-dimensional components for processing, resulting in the destruction of spatial correlation and the loss of feature information. In addition, the degrees of freedom of the one-dimensional linear canonical transform are insufficient: it contains only four parameters (three of them free), which makes it difficult to characterize the nonlinear modulation characteristics of complex multidimensional signals. Consequently, more and more generalized two-dimensional forms of these transforms have emerged [3,4]. The two-dimensional linear canonical transform (2D LCT) is a generalized linear canonical transform with six degrees of freedom, offering greater flexibility, and it has become one of the core tools of modern signal processing.

A theoretical framework for the two-dimensional linear canonical transform has been developed in recent years. Higher-order transform forms have emerged by integrating the two-dimensional linear canonical transform with quaternions, octonions, and hyperbolic geometric structures, and these have been applied to image processing, multidimensional signal analysis, and other fields, demonstrating powerful effects and potential. The two-dimensional linear canonical transform has garnered significant scholarly attention as a more generalized form of the two-dimensional Fourier transform and the two-dimensional fractional Fourier transform. In terms of the theoretical extension and mathematical structure of the LCT under high-order algebraic structures, the quaternion linear canonical transform (QLCT) [5] and the octonion linear canonical transform (OLCT) expand the application of the LCT to the processing of multidimensional signals (such as color images and three-dimensional signals) by introducing hypercomplex structures. For example, Jiang [6] derived the differential properties and convolution theorem of the left-hand octonion linear canonical transform (LOCLCT) and used these properties and the corresponding convolution theorem to discuss and analyze three-dimensional linear time-invariant systems, highlighting its enhanced flexibility and multi-scale analysis capabilities. Dar et al. [7] proposed the Wigner distribution in the OLCT domain (WDOL), realized the efficient characterization of a high-dimensional signal phase space by combining octonion algebra, and formulated a generalized form of the Heisenberg uncertainty principle. The QLCT performs well in quaternion signal processing. Prasad et al. [8] studied the characterization range, reproducing kernel, one-to-one mapping, Donoho–Stark inequality, and Pitt inequality of the quaternion windowed linear canonical transform, and discussed some useful uncertainty principles, such as the Heisenberg, Riebre, and local uncertainty principles. Hu et al. [9] implemented the QLCT of the convolution of two quaternion functions and derived related operators and theorems, which were used in the design of multiplicative filters through the product of their QLCTs, or the sum of the products of their QLCTs, and solved the problem of the Fredholm integral equation with special kernels. Kou [10] studied the principles of hypercomplex signals in the linear canonical transform domain and proposed the Plancherel theorem of the quaternion Hilbert transform related to the linear canonical transform. To detect single-component and two-component linear frequency modulation signals, Bhat et al. [11] introduced the scaled Wigner distribution in the offset linear canonical domain (SWDOLC), which expanded the application range of the Wigner distribution. By introducing the offset linear canonical transform (OLCT) and the scaling factor k, it has greater flexibility on the frequency axis and can effectively reduce cross terms; this provides a new tool for the detection of linear frequency modulation signals and improves the accuracy and reliability of signal detection. For graph signal processing and data compression, Chen et al. [12] proposed the graph linear canonical transform (GLCT), which overcomes the limitation of capturing local information through parameter decomposition of the fractional Fourier transform, scale transform, and chirp modulation, and verified the effectiveness of its filter design in image classification tasks.
Ravi et al. [13] implemented nonlinear dual-image encryption using the non-separable two-dimensional linear canonical transform. Li et al. [14] proposed a graph linear canonical transform based on the chirp multiplication–chirp convolution–chirp multiplication decomposition (CM-CC-CM-GLCT), which provides a new tool for the field of graph signal processing. Through the CM-CC-CM decomposition, it achieves lower computational complexity, similar additivity, and better reversibility, and its effectiveness and superiority in data compression were verified through theoretical analysis and simulation. In order to improve the performance of graph signal processing, Zhang et al. [15] proposed an uncertainty principle based on the GLCT and used this principle to establish conditions for recovering GLCT band-limited signals from sample subsets; subsequently, they constructed a sampling theory for the GLCT and explored the relationship between the uncertainty principle and sampling. Li et al. [16] introduced a two-dimensional quaternion linear canonical series to describe color images, analyzed its ability in image representation and reconstruction, and studied in detail the relationship between the two-dimensional quaternion linear canonical series and the two-dimensional linear canonical transform series. Rakheja et al. [17] proposed a dual-image encryption mechanism that combines a three-dimensional Lorenz chaotic system and QR decomposition in the two-dimensional non-separable linear canonical transform domain; the simulation results show that the scheme has strong robustness against occlusion and special attacks. Wang et al. [18] proposed an innovative zero-watermarking method specifically designed for stereo images, utilizing an accurate ternary polar linear canonical transform; it introduces an accurate polar linear canonical transform to solve the numerical integration issues associated with the polar linear canonical transform, and experiments show that this method offers improved performance and greater robustness. Liu et al. [19] proposed the discrete quaternion offset linear canonical transform using quaternion algebra and presented a new image encryption scheme that combines double random phase encoding with the generalized Arnold transform.

From the existing literature, the two-dimensional linear canonical transform and its extended forms integrate high-order algebra, geometric structure, and modern signal processing requirements; they continuously drive technological innovation in time–frequency analysis, image processing, and graph signal processing and have important academic value and application prospects. However, the development of two-dimensional linear canonical transform theory faces two difficulties. The first is the obstacle of computational complexity: existing linear canonical transform algorithms reconstruct image pixels by global approximation, which requires an iterative solution, and the amount of data and the computational time increase superlinearly. The second is that fundamental frameworks, such as a two-dimensional linear canonical transform series theory, have not been systematically established, leaving a gap in the theoretical system. To resolve this twofold dilemma, fill the structural gap in multidimensional transform theory, construct a theoretical system for the two-dimensional LCT series, develop a new fast algorithm framework, and reconcile the trade-off between computational complexity and accuracy, this paper introduces the concept of non-uniform partition to convert global data approximation into approximation operations under different partitions. First, we conduct mathematical derivation and logical reasoning on the two-dimensional linear canonical transform series, along with a proof by contradiction, to demonstrate that it has a unique least squares solution within each signal sub-region. To realize the conversion between the nonlinear and linear expressions of the two-dimensional linear canonical transform series, we utilize mathematical methods such as maximum likelihood estimation and the properties of scale transformation, and a fast linear canonical transform algorithm is constructed to improve computational efficiency. Second, for complex texture areas and edge features of images, the non-uniform partition method and the two-dimensional linear canonical transform series are used to represent the image, and the least squares method is employed to convert the image information into a mathematical expression, achieving image vectorization. Finally, an adaptive two-dimensional linear canonical transform partition algorithm is constructed to represent the complex texture areas of the image more effectively, retain texture details, and improve the quality of the reconstructed image. The proposed algorithm can enhance the quality of the non-uniform partition while reducing the program execution time and image quality loss.

In summary, the contributions of this paper are highlighted as follows:

To solve the computational efficiency bottleneck and insufficient regional feature representation in traditional global data approximation methods, this paper proposes an adaptive non-uniform partition algorithm based on the two-dimensional linear canonical transformation series. Its mathematical essence can be extended to the approximation problem of linear canonical transformation series on partition sub-regions. Under this framework, the existence and uniqueness of the least squares solution for the linear canonical transformation series in each sub-region is proved, which is not only the theoretical basis for ensuring the mathematical completeness of the image representation model but also the core scientific proposition for achieving stable convergence of the algorithm.

This paper introduces the concept of non-uniform partition to convert global data approximation into an approximation operation under different partitions. In order to speed up its operation, different transformation coefficients are employed to represent the sub-signals in different regions. Concurrently, the least-squares and maximum likelihood estimation methods are applied to swiftly determine the approximation coefficients for each area. The proposed algorithm aims to improve the quality of reconstructed images while minimizing program execution time and image quality loss.

The rest of this paper is organized as follows: Section 2 reviews some preliminaries of the 2D linear canonical transform and adaptive non-uniform image partition. Section 3 formulates an adaptive non-uniform partition algorithm based on 2D-LCT and presents some examples to illustrate how to partition the image flexibly using this algorithm. Section 4 demonstrates the effectiveness of the proposed 2D-LCT-based partition algorithm for image representation through simulation experiments. Finally, conclusions are presented in Section 5.

2. Preliminaries

In this section, we briefly review the background knowledge required in the remaining parts of this paper.

2.1. The Mathematical Form of the 2D LCT

First, we introduce the definition of the 2-D linear canonical transform. The two-dimensional linear canonical transform is specified by two parameter matrices containing eight parameters in total. Since each parameter matrix has unit determinant, the two-dimensional linear canonical transform has six free parameters. The two-dimensional Fourier transform, the two-dimensional fractional Fourier transform, and the two-dimensional Fresnel transform are all special cases of the two-dimensional linear canonical transform. When $A=B=\begin{pmatrix}\cos\theta&\sin\theta\\-\sin\theta&\cos\theta\end{pmatrix}$, the two-dimensional fractional Fourier transform is obtained from the two-dimensional linear canonical transform, and when $\theta=\pi/2$, the two-dimensional Fourier transform is recovered from the two-dimensional fractional Fourier transform.

Theorem 1.

Assume that the parameter matrices are $A=\begin{pmatrix}a_1&b_1\\c_1&d_1\end{pmatrix}$ with $a_1,b_1,c_1,d_1\in\mathbb{R}$, $b_1\neq 0$, $a_1d_1-b_1c_1=1$, and $B=\begin{pmatrix}a_2&b_2\\c_2&d_2\end{pmatrix}$ with $a_2,b_2,c_2,d_2\in\mathbb{R}$, $b_2\neq 0$, $a_2d_2-b_2c_2=1$. Then, the two-dimensional linear canonical transform of an image or function $f(x,y)$ with parameter matrices $A$ and $B$ is defined as

(1) $F^{A,B}[f(x,y)](u,v)=F_f^{A,B}(u,v)=\begin{cases}\displaystyle\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}f(x,y)\,K_{A,B}(u,v,x,y)\,dx\,dy,& b_1b_2\neq 0,\\ \sqrt{d_1d_2}\,e^{\,i\left(c_1d_1u^{2}+c_2d_2v^{2}\right)/2}\,f(d_1u,d_2v),& b_1^{2}+b_2^{2}=0,\end{cases}$

where

$K_{A,B}(u,v,x,y)=K_A(u,x)\,K_B(v,y),$
$K_A(u,x)=C_A\exp\!\left(\frac{ia_1x^{2}}{2b_1}-\frac{iux}{b_1}+\frac{id_1u^{2}}{2b_1}\right),\qquad C_A=\frac{1}{\sqrt{i2\pi b_1}},$
$K_B(v,y)=C_B\exp\!\left(\frac{ia_2y^{2}}{2b_2}-\frac{ivy}{b_2}+\frac{id_2v^{2}}{2b_2}\right),\qquad C_B=\frac{1}{\sqrt{i2\pi b_2}}.$

When the parameter matrices are $A=B=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$, the above equation reduces to the two-dimensional Fourier transform as follows:

(2) $F(u,v)=\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}f(x,y)\exp\bigl(-i2\pi(ux+vy)\bigr)\,dx\,dy.$

When the parameter matrices are $A=\begin{pmatrix}\cos\alpha&\sin\alpha\\-\sin\alpha&\cos\alpha\end{pmatrix}$ and $B=\begin{pmatrix}\cos\beta&\sin\beta\\-\sin\beta&\cos\beta\end{pmatrix}$, Equation (1) becomes the two-dimensional fractional Fourier transform:

(3) $F^{A,B}[f(x,y)](u,v)=F_f^{A,B}(u,v)=\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}f(x,y)\,K_{A,B}(u,v,x,y)\,dx\,dy,$

with

$K_{A,B}(u,v,x,y)=\frac{\sqrt{(1-i\cot\alpha)(1-i\cot\beta)}}{2\pi}\exp\!\left[\frac{i(x^{2}+u^{2})}{2\tan\alpha}-\frac{ixu}{\sin\alpha}\right]\exp\!\left[\frac{i(y^{2}+v^{2})}{2\tan\beta}-\frac{iyv}{\sin\beta}\right],$

where $\alpha$ and $\beta$ denote the rotation angles of the two-dimensional fractional Fourier transform.
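To make Theorem 1 concrete, the short sketch below evaluates Eq. (1) numerically by a plain Riemann sum over the separable kernel (the $b_1b_2\neq 0$ branch). This is only an illustrative discretization in Python; the function names, grids, and parameter values are our own choices and not the paper's fast algorithm.

```python
import numpy as np

def lct_kernel_1d(u, x, a, b, d):
    """Separable 1-D kernel K_A(u, x) of Theorem 1 (b != 0 case); c does not appear."""
    C = 1.0 / np.sqrt(1j * 2.0 * np.pi * b)
    return C * np.exp(1j * a * x**2 / (2.0 * b)
                      - 1j * u * x / b
                      + 1j * d * u**2 / (2.0 * b))

def lct_2d(f, x, y, u, v, A, B):
    """Brute-force Riemann-sum approximation of Eq. (1).

    f    : 2-D array of samples f(x_i, y_j)
    x, y : input sample grids (1-D);  u, v : output grids (1-D)
    A, B : parameter tuples (a, b, c, d) with b != 0
    """
    a1, b1, _, d1 = A
    a2, b2, _, d2 = B
    dx, dy = x[1] - x[0], y[1] - y[0]
    KA = lct_kernel_1d(u[:, None], x[None, :], a1, b1, d1)   # |u| x |x|
    KB = lct_kernel_1d(v[:, None], y[None, :], a2, b2, d2)   # |v| x |y|
    # F(u, v) = sum_{i,j} K_A(u, x_i) f(x_i, y_j) K_B(v, y_j) dx dy
    return KA @ f @ KB.T * dx * dy

# Example: transform a 2-D Gaussian with arbitrary parameters satisfying a*d - b*c = 1.
x = np.linspace(-4.0, 4.0, 128)
y = np.linspace(-4.0, 4.0, 128)
X, Y = np.meshgrid(x, y, indexing="ij")
f = np.exp(-(X**2 + Y**2))
A = (1.0, 0.5, (1.0 * 1.5 - 1.0) / 0.5, 1.5)   # c = (a*d - 1)/b
B = (0.8, 1.0, (0.8 * 1.25 - 1.0) / 1.0, 1.25)
F = lct_2d(f, x, y, x, y, A, B)
print(F.shape)                                  # (128, 128), complex spectrum
```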

2.2. Properties of the 2D LCT

The application of two-dimensional linear canonical transform in image processing requires corresponding research on some vital properties of two-dimensional linear canonical transform, such as time-shift modulation, scale feature, superposition, reversibility, etc. In this subsection, we derive and prove the scale feature and time–frequency shift properties of the two-dimensional linear canonical transform based on the inferential proofs related to the one-dimensional linear canonical transform.

Property 1

(Time-shift modulation). Let $g(x,y)=f(x-\tau_1,y-\tau_2)$, $\tau_1,\tau_2\in\mathbb{R}$. Then $F_g^{A,B}(u,v)=e^{\,i(uc_1\tau_1+vc_2\tau_2)-(i/2)\left(a_1c_1\tau_1^{2}+a_2c_2\tau_2^{2}\right)}\,F^{A,B}(u-a_1\tau_1,\,v-a_2\tau_2)$.

Proof of Property 1.

According to (1),

$F_g^{A,B}(u,v)=\frac{1}{\sqrt{i2\pi b_1}}\frac{1}{\sqrt{i2\pi b_2}}\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}f(x-\tau_1,y-\tau_2)\exp\!\left(\frac{ia_1x^{2}}{2b_1}-\frac{iux}{b_1}+\frac{id_1u^{2}}{2b_1}\right)\exp\!\left(\frac{ia_2y^{2}}{2b_2}-\frac{ivy}{b_2}+\frac{id_2v^{2}}{2b_2}\right)dx\,dy.$

Substituting $s=x-\tau_1$, $t=y-\tau_2$ and completing the square in the exponents gives

$F_g^{A,B}(u,v)=\exp\!\left(iuc_1\tau_1-\frac{ia_1c_1\tau_1^{2}}{2}\right)\exp\!\left(ivc_2\tau_2-\frac{ia_2c_2\tau_2^{2}}{2}\right)\frac{1}{\sqrt{i2\pi b_1}}\frac{1}{\sqrt{i2\pi b_2}}\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}f(s,t)\exp\!\left(\frac{ia_1s^{2}}{2b_1}-\frac{i(u-a_1\tau_1)s}{b_1}+\frac{id_1(u-a_1\tau_1)^{2}}{2b_1}\right)\exp\!\left(\frac{ia_2t^{2}}{2b_2}-\frac{i(v-a_2\tau_2)t}{b_2}+\frac{id_2(v-a_2\tau_2)^{2}}{2b_2}\right)ds\,dt$
$=\exp\!\left(i(uc_1\tau_1+vc_2\tau_2)-\frac{i}{2}\left(a_1c_1\tau_1^{2}+a_2c_2\tau_2^{2}\right)\right)F^{A,B}(u-a_1\tau_1,\,v-a_2\tau_2).$
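As a quick sanity check on this formula, the one-dimensional analogue $L_A[f(t-\tau)](u)=e^{ic\tau u-(i/2)ac\tau^{2}}L_A[f](u-a\tau)$ can be verified numerically with a brute-force quadrature. The script below is our own illustrative code, with arbitrarily chosen parameters satisfying $ad-bc=1$ and a Gaussian test signal; the two sides agree up to the quadrature error.

```python
import numpy as np

def lct_1d(f, t, u, a, b, d):
    """Riemann-sum 1-D LCT with parameter matrix (a, b; c, d), b != 0."""
    dt = t[1] - t[0]
    C = 1.0 / np.sqrt(1j * 2.0 * np.pi * b)
    K = C * np.exp(1j * a * t[None, :]**2 / (2.0 * b)
                   - 1j * u[:, None] * t[None, :] / b
                   + 1j * d * u[:, None]**2 / (2.0 * b))
    return K @ f * dt

a, b, d = 1.2, 0.7, 1.1
c = (a * d - 1.0) / b                        # enforce a*d - b*c = 1
tau = 0.6
t = np.linspace(-10.0, 10.0, 2048)
u = np.linspace(-3.0, 3.0, 200)
f = np.exp(-t**2)                            # well-decaying test signal

lhs = lct_1d(np.exp(-(t - tau)**2), t, u, a, b, d)          # LCT of f(t - tau)
rhs = (np.exp(1j * c * tau * u - 0.5j * a * c * tau**2)
       * lct_1d(f, t, u - a * tau, a, b, d))                # shifted-argument form
print(np.max(np.abs(lhs - rhs)))             # at the level of the quadrature error
```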

Property 2

(Time–frequency shift). Let $g(x,y)=e^{i(xu_0+yv_0)}f(x-\tau_1,y-\tau_2)$, $\tau_1,\tau_2,u_0,v_0\in\mathbb{R}$. Then $F_g^{A,B}(u,v)=e^{\,i(c_1\tau_1+d_1u_0)u+i(c_2\tau_2+d_2v_0)v-i\left(b_1c_1\tau_1u_0+b_2c_2\tau_2v_0\right)}\,e^{-(i/2)\left(a_1c_1\tau_1^{2}+a_2c_2\tau_2^{2}+b_1d_1u_0^{2}+b_2d_2v_0^{2}\right)}\,F^{A,B}(u-u_0b_1-a_1\tau_1,\,v-v_0b_2-a_2\tau_2)$.

Property 2 can be obtained by the same method as above, and will not be proved here.

Property 3

(Scale feature). Let $g(x,y)=f(\alpha x,\beta y)$ with $\alpha,\beta>0$. Then $F_g^{A,B}(u,v)=\frac{1}{\sqrt{\alpha\beta}}\,F_f^{A',B'}(u,v)$, where $A'=\left(a_1/\alpha,\ \alpha b_1,\ c_1/\alpha,\ \alpha d_1\right)$ and $B'=\left(a_2/\beta,\ \beta b_2,\ c_2/\beta,\ \beta d_2\right)$.

Proof of Property 3.

We substitute $g(x,y)=f(\alpha x,\beta y)$ into the definition of the two-dimensional linear canonical transform in Theorem 1 and change variables $s=\alpha x$, $t=\beta y$ to obtain

$F_g^{A,B}(u,v)=\frac{1}{\sqrt{i2\pi b_1}}\frac{1}{\sqrt{i2\pi b_2}}\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}f(\alpha x,\beta y)\exp\!\left(\frac{ia_1x^{2}}{2b_1}-\frac{iux}{b_1}+\frac{id_1u^{2}}{2b_1}\right)\exp\!\left(\frac{ia_2y^{2}}{2b_2}-\frac{ivy}{b_2}+\frac{id_2v^{2}}{2b_2}\right)dx\,dy$
$=\frac{1}{\alpha\beta}\frac{1}{\sqrt{i2\pi b_1}}\frac{1}{\sqrt{i2\pi b_2}}\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}f(s,t)\exp\!\left(\frac{i(a_1/\alpha)s^{2}}{2\alpha b_1}-\frac{ius}{\alpha b_1}+\frac{i(\alpha d_1)u^{2}}{2\alpha b_1}\right)\exp\!\left(\frac{i(a_2/\beta)t^{2}}{2\beta b_2}-\frac{ivt}{\beta b_2}+\frac{i(\beta d_2)v^{2}}{2\beta b_2}\right)ds\,dt$
$=\frac{1}{\sqrt{\alpha\beta}}\frac{1}{\sqrt{i2\pi\alpha b_1}}\frac{1}{\sqrt{i2\pi\beta b_2}}\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}f(s,t)\,K_{A'}(u,s)\,K_{B'}(v,t)\,ds\,dt=\frac{1}{\sqrt{\alpha\beta}}\,F_f^{A',B'}(u,v).$

2.3. 2D LCT Series

The Fourier series is the basis of modern Fourier analysis and transformation and one of the core concepts in modern signal processing theory. In this subsection, the concept of a one-dimensional linear canonical transform series, based on the characteristics of the Fourier series and the fractional Fourier series, is put forward.

Theorem 2.

Assume that the parameter matrix is $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ with $a,b,c,d\in\mathbb{R}$, $b\neq 0$, $ad-bc=1$. If $f(t)$, $t\in\left[-\frac{T}{2},\frac{T}{2}\right]$, is a finite-length signal, then $f(t)$ can be expanded into the linear canonical transform series below:

(4) $f(t)=\sum_{n=-\infty}^{\infty}C_{A,n}\,\sqrt{\frac{i}{T}}\exp\!\left(\frac{id}{2b}\left(\frac{2n\pi b}{T}\right)^{2}+\frac{it}{b}\,\frac{2n\pi b}{T}-\frac{ia}{2b}t^{2}\right),$

where $C_{A,n}$ is the expansion coefficient of the linear canonical series,

$C_{A,n}=\int_{-\frac{T}{2}}^{\frac{T}{2}}f(t)\,\sqrt{\frac{-i}{T}}\exp\!\left(-\frac{id}{2b}\left(\frac{2n\pi b}{T}\right)^{2}-\frac{it}{b}\,\frac{2n\pi b}{T}+\frac{ia}{2b}t^{2}\right)dt.$

Starting from the Fourier transform and the fractional Fourier transform, and in combination with the one-dimensional linear canonical transform series above, this paper proposes a two-dimensional linear canonical transform series; its basis functions are orthogonal, just as in the one-dimensional case. Next, we introduce the concept of the 2-D linear canonical transform series.

Theorem 3.

Assume that the parameter matrices are $A=\begin{pmatrix}a_1&b_1\\c_1&d_1\end{pmatrix}$ with $a_1,b_1,c_1,d_1\in\mathbb{R}$, $b_1\neq 0$, $a_1d_1-b_1c_1=1$, and $B=\begin{pmatrix}a_2&b_2\\c_2&d_2\end{pmatrix}$ with $a_2,b_2,c_2,d_2\in\mathbb{R}$, $b_2\neq 0$, $a_2d_2-b_2c_2=1$. Then, the two-dimensional linear canonical transform series is expressed as

(5) $f(x,y)=\sum_{m,n=-\infty}^{\infty}C_{m,n}^{A,B}\frac{1}{\sqrt{T_1T_2}}\exp\!\left\{\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}-2x\frac{2m\pi b_1}{T_1}+a_1x^{2}\right]\right\}\exp\!\left\{\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}-2y\frac{2n\pi b_2}{T_2}+a_2y^{2}\right]\right\},$

where $C_{m,n}^{A,B}$ are the expansion coefficients of the linear canonical series,

$C_{m,n}^{A,B}=\int_{-\frac{T_1}{2}}^{\frac{T_1}{2}}\!\int_{-\frac{T_2}{2}}^{\frac{T_2}{2}}f(x,y)\frac{1}{\sqrt{T_1T_2}}\exp\!\left\{-\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}-2x\frac{2m\pi b_1}{T_1}+a_1x^{2}\right]\right\}\exp\!\left\{-\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}-2y\frac{2n\pi b_2}{T_2}+a_2y^{2}\right]\right\}dx\,dy.$
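The following sketch illustrates Theorem 3 numerically: the coefficients $C_{m,n}^{A,B}$ of a smooth test patch are computed by midpoint-rule quadrature of the coefficient integral, and a truncated series is summed to reconstruct the patch. The normalization and signs follow the expressions as reconstructed above; the function names, parameter values, and test patch are illustrative assumptions, not part of the paper.

```python
import numpy as np

def basis_2d(x, y, m, n, A, B, T1, T2):
    """2-D LCT series basis psi_{m,n}(x, y) as written in Eq. (5) (our reconstruction)."""
    a1, b1, d1 = A
    a2, b2, d2 = B
    w1 = 2.0 * np.pi * m * b1 / T1
    w2 = 2.0 * np.pi * n * b2 / T2
    phase = (d1 * w1**2 - 2.0 * x * w1 + a1 * x**2) / (2.0 * b1) \
          + (d2 * w2**2 - 2.0 * y * w2 + a2 * y**2) / (2.0 * b2)
    return np.exp(1j * phase) / np.sqrt(T1 * T2)

T1 = T2 = 2.0
N = 256
x = (np.arange(N) + 0.5) * T1 / N - T1 / 2.0      # midpoint grid on [-T1/2, T1/2]
y = (np.arange(N) + 0.5) * T2 / N - T2 / 2.0
X, Y = np.meshgrid(x, y, indexing="ij")
dxdy = (T1 / N) * (T2 / N)

A = (1.0, 0.5, 1.5)        # (a1, b1, d1); c1 is fixed by a1*d1 - b1*c1 = 1
B = (0.8, 1.0, 1.25)       # (a2, b2, d2)
f = np.exp(-16.0 * (X**2 + Y**2))                 # smooth test patch

M = 15                                            # keep harmonics m, n in [-M, M]
rec = np.zeros_like(f, dtype=complex)
for m in range(-M, M + 1):
    for n in range(-M, M + 1):
        psi = basis_2d(X, Y, m, n, A, B, T1, T2)
        C_mn = np.sum(f * np.conj(psi)) * dxdy    # coefficient integral of Theorem 3
        rec += C_mn * psi

print(np.max(np.abs(rec - f)))                    # small truncation/quadrature error
```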

2.4. Adaptive Non-Uniform Partition Algorithm of Image

The adaptive non-uniform partition of the image is performed by least squares fitting based on the image's gray values. Non-uniform partition algorithms mainly include the rectangular partition [20] and the triangular partition [21], both of which are classic methods. The adaptive triangular partition algorithm [22] is an improvement on the non-uniform triangular partition: it establishes a separation partition model, solves it with a fitting function, and can reduce redundant calculations and preserve image quality by removing shared edges and vertices between adjacent triangles in the partition grid. However, compared with the triangular partition, the rectangular partition has certain advantages in running time and in the number of sub-regions.

For a given image, the non-uniform partition proceeds as follows. First, we set a sub-region, denoted $G_m$, starting from the initial partition. $G_m$ contains a number of pixels with known data, labeled $Z_i$ (where $i\in[0,n-1]$), $n$ is the number of pixels in $G_m$, $P$ is the set of undetermined coefficients of the bivariate polynomial $f_m(P)$, and $Q_i$ is the set of coefficients determined by the least squares approximation [23]. $f_m(Q_i)$ is the reconstructed pixel set in the current region, and the acceptance test is

(6) $e=\sum_{i=0}^{n-1}\left(f_m(Q_i)-Z_i\right)^{2}<\varepsilon.$

The partition process stops when (6) holds, i.e., when the reconstruction error $e$ is less than the preset control error $\varepsilon$. Otherwise, $G_m$ is unqualified and is divided further, and the sub-regions $G_{mn}$ ($n=0,1,2,3$) are checked one by one in the same way. The qualified $f_{mn}(P)$ are recorded whenever $e<\varepsilon$, until the whole process finishes. The non-uniform partition of an image is a kind of image vectorization [24] that approximates the pixel values in each sub-region by bivariate polynomials. Different partition accuracies have a direct impact on the reconstruction quality. Figure 1 shows the reconstruction effect of the image after non-uniform partition under different error accuracies.
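A minimal sketch of the acceptance test in Eq. (6), assuming the classic linear bivariate polynomial model $p_0+p_1x+p_2y$ for a rectangular block (the function name, block contents, and threshold are illustrative):

```python
import numpy as np

def fit_block(block):
    """Least-squares fit of p0 + p1*x + p2*y to one sub-region; returns the
    coefficients, the reconstruction, and the residual e of Eq. (6)."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coef, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    recon = (A @ coef).reshape(h, w)
    e = float(np.sum((recon - block) ** 2))
    return coef, recon, e

# A smooth ramp passes the test; a noisy, textured block exceeds the threshold.
eps = 100.0
ramp = np.add.outer(np.arange(16), np.arange(16)).astype(float)
noisy = ramp + 10.0 * np.random.default_rng(0).standard_normal((16, 16))
print(fit_block(ramp)[2] < eps, fit_block(noisy)[2] < eps)   # True False
```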

Most non-uniform partition algorithms currently rely on bivariate polynomials to approximate the pixels in a region using least-squares fitting. However, the original non-uniform partition algorithm uses linear bivariate polynomials, which may not effectively approximate complex textures.

3. Method and Algorithm Analysis

In this section, we first prove that the two-dimensional linear canonical transform series has a unique least squares solution in the selected region, i.e., a best-matching function of the data in the sense of the minimum sum of squared errors. Next, we use the least squares method to determine the functional relationship between the independent variables and the dependent variable. Finally, a fast two-dimensional linear canonical transform algorithm is constructed to reconstruct the image.

3.1. Theory and Design

Based on the theory of two-dimensional linear canonical transform series, this paper investigates fast algorithms of two-dimensional linear canonical transform and image representation, reduces the computational complexity of image pixel approximation and reconstruction by linear canonical transform, and improves the adaptability of image representation in complex texture regions. As an extension of the two-dimensional Fourier transform, the two-dimensional linear canonical transform also shows many advantages in image processing due to its six free parameters. Inspired by the signal reconstruction method in [25], this paper uses two-dimensional LCT series as nonuniform decomposition polynomials to reconstruct the image.

Theorem 4.

For any function $f(x,y)$ defined on a finite region, its 2-D LCTS expansion with parameters $A=\begin{pmatrix}a_1&b_1\\c_1&d_1\end{pmatrix}$ and $B=\begin{pmatrix}a_2&b_2\\c_2&d_2\end{pmatrix}$ is expressed as

(7) $f(x,y)=\sum_{m,n=-\infty}^{\infty}C_{m,n}^{A,B}\frac{1}{\sqrt{T_1T_2}}\exp\!\left\{\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}-2x\frac{2m\pi b_1}{T_1}+a_1x^{2}\right]\right\}\exp\!\left\{\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}-2y\frac{2n\pi b_2}{T_2}+a_2y^{2}\right]\right\},$

with the 2-D LCT expansion coefficients $C_{m,n}^{A,B}$ defined as

$C_{m,n}^{A,B}=\int_{-\frac{T_1}{2}}^{\frac{T_1}{2}}\!\int_{-\frac{T_2}{2}}^{\frac{T_2}{2}}f(x,y)\frac{1}{\sqrt{T_1T_2}}\exp\!\left\{-\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}-2x\frac{2m\pi b_1}{T_1}+a_1x^{2}\right]\right\}\exp\!\left\{-\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}-2y\frac{2n\pi b_2}{T_2}+a_2y^{2}\right]\right\}dx\,dy,$

where $(x,y)\in\left[-\frac{T_1}{2},\frac{T_1}{2}\right]\times\left[-\frac{T_2}{2},\frac{T_2}{2}\right]$.

Proof of Theorem 4.

The orthogonality of the 2-D LCT series basis has been proven in the literature [26]; we now derive the expression for the coefficients $C_{m,n}^{A,B}$. Write

$\psi_{m,n}(x,y)=\frac{1}{\sqrt{T_1T_2}}\exp\!\left\{\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}-2x\frac{2m\pi b_1}{T_1}+a_1x^{2}\right]\right\}\exp\!\left\{\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}-2y\frac{2n\pi b_2}{T_2}+a_2y^{2}\right]\right\}$

for the basis function in (7). Multiplying both sides of (7) by the complex conjugate $\psi_{m',n'}^{*}(x,y)$ and integrating over the region, we have

$\int_{-\frac{T_1}{2}}^{\frac{T_1}{2}}\!\int_{-\frac{T_2}{2}}^{\frac{T_2}{2}}f(x,y)\,\psi_{m',n'}^{*}(x,y)\,dx\,dy=\int_{-\frac{T_1}{2}}^{\frac{T_1}{2}}\!\int_{-\frac{T_2}{2}}^{\frac{T_2}{2}}\sum_{m,n}C_{m,n}^{A,B}\,\psi_{m,n}(x,y)\,\psi_{m',n'}^{*}(x,y)\,dx\,dy.$

When $m=m'$ and $n=n'$, the product $\psi_{m,n}\psi_{m,n}^{*}$ equals $\frac{1}{T_1T_2}$, so the corresponding term on the right-hand side gives

(8) $\int_{-\frac{T_1}{2}}^{\frac{T_1}{2}}\!\int_{-\frac{T_2}{2}}^{\frac{T_2}{2}}f(x,y)\,\psi_{m,n}^{*}(x,y)\,dx\,dy=\int_{-\frac{T_1}{2}}^{\frac{T_1}{2}}\!\int_{-\frac{T_2}{2}}^{\frac{T_2}{2}}C_{m,n}^{A,B}\,\frac{1}{T_1T_2}\,dx\,dy;$

from (8), the solution is

(9) $C_{m,n}^{A,B}=\int_{-\frac{T_1}{2}}^{\frac{T_1}{2}}\!\int_{-\frac{T_2}{2}}^{\frac{T_2}{2}}f(x,y)\,\frac{1}{\sqrt{T_1T_2}}\exp\!\left\{-\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}-2x\frac{2m\pi b_1}{T_1}+a_1x^{2}\right]\right\}\exp\!\left\{-\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}-2y\frac{2n\pi b_2}{T_2}+a_2y^{2}\right]\right\}dx\,dy.$

When $m\neq m'$ or $n\neq n'$, the orthogonality of the basis functions makes the corresponding terms on the right-hand side vanish:

(10) $\int_{-\frac{T_1}{2}}^{\frac{T_1}{2}}\!\int_{-\frac{T_2}{2}}^{\frac{T_2}{2}}\psi_{m,n}(x,y)\,\psi_{m',n'}^{*}(x,y)\,dx\,dy=0.$
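The orthogonality used in the case analysis above can also be checked numerically. The short script below is illustrative only: it uses the one-dimensional form of the basis with the signs reconstructed here, builds the Gram matrix of the basis functions over one period, and verifies that it is close to the identity.

```python
import numpy as np

# Gram matrix of the 1-D series basis over one period: should be (numerically) identity.
a, b, d, T = 1.0, 0.5, 1.5, 2.0
N = 4096
x = (np.arange(N) + 0.5) * T / N - T / 2.0      # midpoint grid on [-T/2, T/2]
dx = T / N
ms = np.arange(-6, 7)
w = 2.0 * np.pi * ms[:, None] * b / T           # one row per harmonic m
phi = np.exp(1j * (d * w**2 - 2.0 * x[None, :] * w + a * x[None, :]**2)
             / (2.0 * b)) / np.sqrt(T)
gram = phi @ phi.conj().T * dx                  # entries <phi_m, phi_m'>
print(np.max(np.abs(gram - np.eye(len(ms)))))   # close to machine precision
```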

For the one-dimensional linear canonical transform, we have demonstrated that the approximation of a regional signal with the 1-D LCT series has a unique least squares solution. Similarly, it can be inferred that the approximation of regional image pixels with the 2-D LCT series also has a unique least squares solution.

Likewise, suppose $L$ and $P$ are the numbers of terms retained in the 2-D LCT series expansion of the image $f(x,y)$; the truncated expansion is called the finite-length LCT series of the image and is expressed as

(11) $f_{L,P}(x,y)=\sum_{m=1}^{L}\sum_{n=1}^{P}C_{m,n}^{A,B}\frac{1}{\sqrt{T_1T_2}}\exp\!\left\{\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}-2x\frac{2m\pi b_1}{T_1}+a_1x^{2}\right]\right\}\exp\!\left\{\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}-2y\frac{2n\pi b_2}{T_2}+a_2y^{2}\right]\right\}.$

Assuming that the image takes the value $f(x_i,y_j)$ at the pixel location $(x_i,y_j)$, Equation (11) becomes

(12) $f_{L,P}(x_i,y_j)=\sum_{m=1}^{L}\sum_{n=1}^{P}C_{m,n}^{A,B}\frac{1}{\sqrt{T_1T_2}}\exp\!\left\{\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}-2x_i\frac{2m\pi b_1}{T_1}+a_1x_i^{2}\right]\right\}\exp\!\left\{\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}-2y_j\frac{2n\pi b_2}{T_2}+a_2y_j^{2}\right]\right\},$

where $i=1,2,\ldots,M$ and $j=1,2,\ldots,N$. We decompose the right-hand side into the product of three independent parts and define the matrices $U$, $V$, and $O$. $U$ is an $M\times L$ matrix whose elements are

(13) $U_{im}=\exp\!\left(\frac{i}{2b_1}\!\left(-2x_i\frac{2m\pi b_1}{T_1}+a_1x_i^{2}\right)\right),$

$V$ is an $N\times P$ matrix whose elements are

(14) $V_{jn}=\exp\!\left(\frac{i}{2b_2}\!\left(-2y_j\frac{2n\pi b_2}{T_2}+a_2y_j^{2}\right)\right),$

and $O\in\mathbb{C}^{L\times P}$ with elements

(15) $O_{mn}=C_{m,n}^{A,B}\frac{1}{\sqrt{T_1T_2}}\exp\!\left(\frac{id_1}{2b_1}\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}\right)\exp\!\left(\frac{id_2}{2b_2}\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}\right).$

Performing the Kronecker product of $U$ and $V$, stacking the pixel values $f(x_i,y_j)$ into a column vector $f$ of length $M\cdot N$ (indexed by the pixel pair $(i,j)$), and stacking the coefficients $O_{mn}$ into a column vector of length $L\cdot P$ (indexed by the harmonic pair $(m,n)$), the element-by-element products of $U$, $V$, and $O$ combine into the matrix expression

(16) $f=(U\otimes V)\,O.$

Here, $\otimes$ represents the Kronecker product; $U\otimes V$ is an $(M\cdot N)\times(L\cdot P)$ matrix whose element in row $(i,j)$ and column $(m,n)$ is

(17) $(U\otimes V)_{(i,j),(m,n)}=\exp\!\left(\frac{i}{2b_1}\!\left(-2x_i\frac{2m\pi b_1}{T_1}+a_1x_i^{2}\right)\right)\exp\!\left(\frac{i}{2b_2}\!\left(-2y_j\frac{2n\pi b_2}{T_2}+a_2y_j^{2}\right)\right).$

Written out element by element, the matrix representation of $f(x,y)$ is

(18) $f(x_i,y_j)=\sum_{m=1}^{L}\sum_{n=1}^{P}\exp\!\left(\frac{i}{2b_1}\!\left(-2x_i\frac{2m\pi b_1}{T_1}+a_1x_i^{2}\right)\right)\exp\!\left(\frac{i}{2b_2}\!\left(-2y_j\frac{2n\pi b_2}{T_2}+a_2y_j^{2}\right)\right)C_{m,n}^{A,B}\frac{1}{\sqrt{T_1T_2}}\exp\!\left(\frac{id_1}{2b_1}\!\left(\frac{2m\pi b_1}{T_1}\right)^{2}\right)\exp\!\left(\frac{id_2}{2b_2}\!\left(\frac{2n\pi b_2}{T_2}\right)^{2}\right).$

Rewrite (18) as the form

(19) $\left[\begin{pmatrix}X_1&0&\cdots&0\\0&X_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&X_M\end{pmatrix}\begin{pmatrix}Y_{11}&\cdots&Y_{1L}\\\vdots&&\vdots\\Y_{M1}&\cdots&Y_{ML}\end{pmatrix}\right]\otimes\left[\begin{pmatrix}X_1&0&\cdots&0\\0&X_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&X_N\end{pmatrix}\begin{pmatrix}Y_{11}&\cdots&Y_{1P}\\\vdots&&\vdots\\Y_{N1}&\cdots&Y_{NP}\end{pmatrix}\right]\begin{pmatrix}\vdots\\C_{m,n}^{A,B}Z_{m,n}\\\vdots\end{pmatrix}=f_{M\cdot N},$

with

$X_i=\exp\!\left(\frac{ia_1}{2b_1}x_i^{2}\right),\quad Y_{im}=\exp\!\left(-\frac{2m\pi i\,x_i}{T_1}\right),\quad X_j=\exp\!\left(\frac{ia_2}{2b_2}y_j^{2}\right),\quad Y_{jn}=\exp\!\left(-\frac{2n\pi i\,y_j}{T_2}\right),$
$O_{mn}=C_{m,n}^{A,B}Z_{m,n}=C_{m,n}^{A,B}\frac{1}{\sqrt{T_1T_2}}\exp\!\left(\frac{2d_1b_1i\,m^{2}\pi^{2}}{T_1^{2}}+\frac{2d_2b_2i\,n^{2}\pi^{2}}{T_2^{2}}\right).$

The above matrix form can be abbreviated as

(20) $\left(XY\otimes XY\right)O=f.$

Then, the problem is transformed into the following optimization model based on the least squares idea:

(21) $\min_{O}\;\left\|f-\left(U\otimes V\right)O\right\|^{2}.$

Theorem 5.

When $M\times N\geq L\times P$, the column vectors of the matrix $B$ are linearly independent.

Proof of Theorem 5.

The matrix $B$ has the form

(22) $B=XY\otimes XY=\begin{pmatrix}X_1Y_{11}X_1Y_{11}&\cdots&X_1Y_{11}X_1Y_{1P}&\cdots&X_1Y_{1L}X_1Y_{11}&\cdots&X_1Y_{1L}X_1Y_{1P}\\X_1Y_{11}X_2Y_{21}&\cdots&X_1Y_{11}X_2Y_{2P}&\cdots&X_1Y_{1L}X_2Y_{21}&\cdots&X_1Y_{1L}X_2Y_{2P}\\\vdots&&\vdots&&\vdots&&\vdots\\X_1Y_{11}X_NY_{N1}&\cdots&X_1Y_{11}X_NY_{NP}&\cdots&X_1Y_{1L}X_NY_{N1}&\cdots&X_1Y_{1L}X_NY_{NP}\\\vdots&&\vdots&&\vdots&&\vdots\\X_MY_{M1}X_1Y_{11}&\cdots&X_MY_{M1}X_1Y_{1P}&\cdots&X_MY_{ML}X_1Y_{11}&\cdots&X_MY_{ML}X_1Y_{1P}\\\vdots&&\vdots&&\vdots&&\vdots\\X_MY_{M1}X_NY_{N1}&\cdots&X_MY_{M1}X_NY_{NP}&\cdots&X_MY_{ML}X_NY_{N1}&\cdots&X_MY_{ML}X_NY_{NP}\end{pmatrix}.$

According to (22), the column vectors of $B$ are

$\beta_0=\begin{pmatrix}X_1Y_{11}X_1Y_{11}\\X_1Y_{11}X_2Y_{21}\\\vdots\\X_1Y_{11}X_NY_{N1}\\\vdots\\X_MY_{M1}X_1Y_{11}\\X_MY_{M1}X_2Y_{21}\\\vdots\\X_MY_{M1}X_NY_{N1}\end{pmatrix},\quad\beta_1=\begin{pmatrix}X_1Y_{11}X_1Y_{12}\\X_1Y_{11}X_2Y_{22}\\\vdots\\X_1Y_{11}X_NY_{N2}\\\vdots\\X_MY_{M1}X_1Y_{12}\\X_MY_{M1}X_2Y_{22}\\\vdots\\X_MY_{M1}X_NY_{N2}\end{pmatrix},\;\ldots,\;\beta_{L\times P}=\begin{pmatrix}X_1Y_{1L}X_1Y_{1P}\\X_1Y_{1L}X_2Y_{2P}\\\vdots\\X_1Y_{1L}X_NY_{NP}\\\vdots\\X_MY_{ML}X_1Y_{1P}\\X_MY_{ML}X_2Y_{2P}\\\vdots\\X_MY_{ML}X_NY_{NP}\end{pmatrix}.$

Every vector $b$ in the column space of $B$ has a representation

(23) $b=s_0\beta_0+s_1\beta_1+\cdots+s_{L\times P}\beta_{L\times P}.$

Assume that $\beta_0,\beta_1,\ldots,\beta_{L\times P}$ are linearly dependent. Then there exist scalars $k_0,k_1,\ldots,k_{L\times P}$, not all zero, with $k_0\beta_0+k_1\beta_1+\cdots+k_{L\times P}\beta_{L\times P}=0$; suppose that the first non-zero value among $k_0,k_1,\ldots,k_{L\times P}$ is $k_q$, that is, $k_0=k_1=\cdots=k_{q-1}=0$, $k_q\neq 0$, $q\geq 1$. Adding this zero combination to (23) transforms it into

(24) $b=s_0\beta_0+s_1\beta_1+\cdots+s_q\beta_q+\cdots+s_{L\times P}\beta_{L\times P}+k_q\beta_q+k_{q+1}\beta_{q+1}+\cdots+k_{L\times P}\beta_{L\times P}=s_0\beta_0+s_1\beta_1+\cdots+\left(s_q+k_q\right)\beta_q+\left(s_{q+1}+k_{q+1}\right)\beta_{q+1}+\cdots+\left(s_{L\times P}+k_{L\times P}\right)\beta_{L\times P}.$

According to (24), since $s_q+k_q\neq s_q$, the vector $b$ would have two different representations in terms of $\beta_0,\beta_1,\ldots,\beta_{L\times P}$, which contradicts the uniqueness of the representation in (23). Therefore, the assumption is false, and the column vectors of the matrix $B$ are linearly independent.

Since the columns of $B$ are linearly independent, when $M\times N\geq L\times P$ the system $\left(XY\otimes XY\right)O=f$ is overdetermined, and the matrix $B=XY\otimes XY$ has a unique Moore–Penrose generalized inverse [27] $B^{+}$, so that the system has a unique least squares solution

(25) $O=B^{+}f,$

where $B^{+}=\left(B^{H}B\right)^{-1}B^{H}$ and $B^{H}$ is the conjugate transpose of $B$.
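A compact numerical sketch of Eqs. (19)–(25): the design matrix $B=XY\otimes XY$ is assembled from the pixel coordinates of a region, and the coefficient vector $O$ is obtained as the least squares solution, which coincides with the explicit Moore–Penrose form $O=B^{+}f$. The function and variable names, region size, harmonic counts $L$, $P$, and parameter values are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def lct_design_matrix(xs, ys, L, P, A, B, T1, T2):
    """Matrix B = (XY) kron (XY) of Eqs. (19)-(22); rows <-> pixels (i, j),
    columns <-> harmonic pairs (m, n).  A = (a1, b1), B = (a2, b2); the d-dependent
    phase sits inside O (Eq. (15)), so it is absorbed by the solved coefficients."""
    a1, b1 = A
    a2, b2 = B
    ms = np.arange(1, L + 1)
    ns = np.arange(1, P + 1)
    Bx = np.exp(1j * a1 * xs[:, None]**2 / (2.0 * b1)           # X_i * Y_{im}
                - 2j * np.pi * ms[None, :] * xs[:, None] / T1)  # M x L
    By = np.exp(1j * a2 * ys[:, None]**2 / (2.0 * b2)           # X_j * Y_{jn}
                - 2j * np.pi * ns[None, :] * ys[:, None] / T2)  # N x P
    return np.kron(Bx, By)                                      # (M*N) x (L*P)

rng = np.random.default_rng(1)
M = N = 16
xs = np.linspace(-0.5, 0.5, M)
ys = np.linspace(-0.5, 0.5, N)
Z = np.add.outer(np.sin(4.0 * xs), np.cos(3.0 * ys)) + 0.01 * rng.standard_normal((M, N))

Bmat = lct_design_matrix(xs, ys, L=5, P=5, A=(1.0, 0.5), B=(0.8, 1.0), T1=1.0, T2=1.0)
O, *_ = np.linalg.lstsq(Bmat, Z.ravel(), rcond=None)    # least squares solution
O_pinv = np.linalg.pinv(Bmat) @ Z.ravel()               # explicit O = B^+ f of Eq. (25)
print(np.allclose(O, O_pinv))                           # True: the two forms agree
Zhat = (Bmat @ O).reshape(M, N).real
print(float(np.sum((Zhat - Z) ** 2)))                   # residual e of Eq. (34)
```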

Next, the approximation function is transformed into an equivalent linear form using the idea of maximum likelihood estimation. Since the logarithm is monotonic, taking logarithms does not change the locations of the extreme points; it can also reduce or eliminate the influence of extreme values in the data and improve the numerical stability of the computation. The log-likelihood function is usually chosen in maximum likelihood estimation (MLE) [28]; therefore, the logarithm is applied here to the two-dimensional linear canonical transform series.

When $L=1$ and $P=1$, the 2-D LCT-based adaptive non-uniform algorithm has a unique least squares solution in the selected region. Taking logarithms on both sides of Equation (11) with $L=P=1$ simplifies it to

(26) $\ln f(x,y)=\ln\!\left(\frac{1}{\sqrt{T_1T_2}}C_{1,1}^{A,B}\right)+\frac{i}{2b_1}\!\left[d_1\!\left(\frac{2\pi b_1}{T_1}\right)^{2}-2x\frac{2\pi b_1}{T_1}+a_1x^{2}\right]+\frac{i}{2b_2}\!\left[d_2\!\left(\frac{2\pi b_2}{T_2}\right)^{2}-2y\frac{2\pi b_2}{T_2}+a_2y^{2}\right].$

According to the scaling invariance of the 2-D LCT [29], introducing scale factors $k$ and $q$ for $x$ and $y$, we obtain

(27) $\ln f=\ln\!\left(\frac{1}{\sqrt{T_1T_2}}C_{1,1}^{A,B}\right)+\frac{i2\pi^{2}b_1d_1}{T_1^{2}}-\frac{i2\pi k}{T_1}x+\frac{ia_1k^{2}}{2b_1}x^{2}+\frac{i2\pi^{2}b_2d_2}{T_2^{2}}-\frac{i2\pi q}{T_2}y+\frac{ia_2q^{2}}{2b_2}y^{2}.$

Equation (27) is converted into

(28) $\ln f=Ax^{2}+By^{2}+Cx+Dy+E,$

with

(29) $A=\frac{ia_1k^{2}}{2b_1},\quad B=\frac{ia_2q^{2}}{2b_2},\quad C=-\frac{i2\pi k}{T_1},\quad D=-\frac{i2\pi q}{T_2},\quad E=\ln\!\left(\frac{1}{\sqrt{T_1T_2}}C_{1,1}^{A,B}\right)+\frac{i2\pi^{2}b_1d_1}{T_1^{2}}+\frac{i2\pi^{2}b_2d_2}{T_2^{2}},$

where, with a slight abuse of notation, $A$, $B$, $C$, $D$, and $E$ here denote scalar coefficients rather than the parameter matrices.

In this paper, $S$ denotes the sum of squared residuals to be minimized by the least squares fit:

(30) $S=\min\sum\left(Ax^{2}+By^{2}+Cx+Dy+E-\ln f\right)^{2},$

where the sum runs over the pixels $(x,y)$ of the region.

Taking the partial derivative of $S$ with respect to each of the coefficients and setting it to zero, the system of equations can be represented as

(31) $\sum 2\left(Ax^{2}+By^{2}+Cx+Dy+E-\ln f\right)x^{2}=0,\;\;\sum 2\left(Ax^{2}+By^{2}+Cx+Dy+E-\ln f\right)y^{2}=0,\;\;\sum 2\left(Ax^{2}+By^{2}+Cx+Dy+E-\ln f\right)x=0,\;\;\sum 2\left(Ax^{2}+By^{2}+Cx+Dy+E-\ln f\right)y=0,\;\;\sum 2\left(Ax^{2}+By^{2}+Cx+Dy+E-\ln f\right)=0.$

Solving (31) is equivalent to the linear system $\mathbf{M}\mathbf{c}=\mathbf{b}$, where

(32) $\mathbf{M}=\begin{pmatrix}\sum x^{4}&\sum x^{2}y^{2}&\sum x^{3}&\sum x^{2}y&\sum x^{2}\\ \sum x^{2}y^{2}&\sum y^{4}&\sum xy^{2}&\sum y^{3}&\sum y^{2}\\ \sum x^{3}&\sum xy^{2}&\sum x^{2}&\sum xy&\sum x\\ \sum x^{2}y&\sum y^{3}&\sum xy&\sum y^{2}&\sum y\\ \sum x^{2}&\sum y^{2}&\sum x&\sum y&\sum 1\end{pmatrix},$

(33) $\mathbf{b}=\begin{pmatrix}\sum x^{2}\ln f\\ \sum y^{2}\ln f\\ \sum x\ln f\\ \sum y\ln f\\ \sum\ln f\end{pmatrix},\qquad\mathbf{c}=\begin{pmatrix}A\\B\\C\\D\\E\end{pmatrix}.$

The coefficients $A$, $B$, $C$, $D$, and $E$ are obtained by solving this linear system.
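The log-linearized fit of Eqs. (28)–(33) can be sketched as follows, assuming strictly positive pixel values so that the logarithm is defined. The 5×5 matrix assembled below is exactly the matrix of sums in (32); the helper name and the test patch are illustrative assumptions.

```python
import numpy as np

def fit_log_quadratic(block):
    """Solve the normal equations (31)-(33) for ln f ~ A x^2 + B y^2 + C x + D y + E."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    lf = np.log(block.ravel().astype(float))            # requires strictly positive pixels
    G = np.column_stack([x**2, y**2, x, y, np.ones_like(x)])
    M_mat = G.T @ G                                     # 5x5 matrix of sums, Eq. (32)
    b_vec = G.T @ lf                                    # right-hand side, Eq. (33)
    coef = np.linalg.solve(M_mat, b_vec)                # (A, B, C, D, E)
    recon = np.exp(G @ coef).reshape(h, w)
    return coef, recon

rng = np.random.default_rng(2)
r2 = np.add.outer((np.arange(16) - 8.0)**2, (np.arange(16) - 8.0)**2)
patch = 100.0 * np.exp(-0.01 * r2)                      # exactly of the model form
coef, recon = fit_log_quadratic(patch + rng.uniform(0.0, 0.1, (16, 16)))
print(np.max(np.abs(recon - patch)))                    # small for this smooth patch
```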

The 2-D LCT series is used to approximate the pixels in each region according to the least squares method. Suppose $Z_i$ ($i\in[0,n-1]$) denotes the gray value of the $i$-th pixel in the region and $n$ is the number of pixels in the region. Equation (21) can then be rewritten as

(34) $e=\sum_{i=0}^{n-1}\left(U_iV_iO_i-Z_i\right)^{2}.$

Denoting the reconstructed value $U_iV_iO_i$ by $Z_i'$ and setting the global control error to $\varepsilon$, (34) leads to the acceptance condition

(35) $e=\sum_{i=0}^{n-1}\left(Z_i'-Z_i\right)^{2}<\varepsilon.$

3.2. Algorithm Description

In this section, we present the detailed steps of the 2-D LCT-based adaptive non-uniform algorithm, which recursively partitions the initial region according to the predetermined condition ($e<\varepsilon$). Initially, we take an image region, denoted $G$, and divide it into four rectangular sub-regions. If a rectangular sub-region meets the preset control error, its partition stops; otherwise, it is further divided into four smaller rectangular sub-regions following the self-similarity rule. This process continues until the reconstructed gray values in the current area reach the termination condition, i.e., when the sum of squared residuals of the current region computed by least squares fitting satisfies $e=\sum_{i=0}^{n-1}\left(Z_i'-Z_i\right)^{2}<\varepsilon$; the region then stops dividing and its encoded data are saved, and the whole process terminates after a finite number of steps.

To explain its process, the algorithm is described in detail below.

1. For the initial image, $G$ is the rectangular region and $G_m$ is a sub-region. Taking $G$ as the initial region, it is divided into four rectangular sub-regions $G_0$, $G_1$, $G_2$, $G_3$. The pixels of a region are represented by the two-dimensional LCT series $Z=f(x,y)$, $(x,y)\in G$, where $x$ and $y$ are the row and column coordinates of the image pixel matrix and $Z$ is the set of gray values of the regional pixels. The pixels contained in $G_m$ are the known data $Z_i$ (where $i\in[0,n-1]$), $n$ is the number of pixels in $G_m$, and $Y_i$ is the set of coefficients determined by the least squares approximation. $f_m(X_i,Y_i)$ is the reconstructed pixel set in the current region.

2. If the error accuracy of the reconstructed region meets the preset error accuracy, i.e., the reconstructed region approximates the original region within the control error, then the partition stops. For $m=0,1,2,3$, the least squares method is used to obtain and record $m$ and $f_m(P)$ of $G_m$ satisfying $e=\sum_{i=0}^{n-1}\left(f_m(X_i,Y_i)-Z_i\right)^{2}<\varepsilon$.

3. A rectangular sub-region $G_m$ that does not satisfy the condition $e<\varepsilon$ is further divided into four smaller sub-regions in the same manner. As the process continues, $G_{mn}$ ($n=0,1,2,3$) are examined one by one, and the $m$, $n$, and $f_{mn}(P)$ that satisfy $e<\varepsilon$ are recorded, and so on. A sketch of this recursive procedure is given below.
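In the sketch below, the block-fitting routine is passed in as a parameter, so the 2-D LCT least squares fit of Section 3.1 (or any other model; a trivial mean fit is used here purely to exercise the recursion) can be plugged in. Names, the minimal block size, and the threshold are illustrative assumptions.

```python
import numpy as np

def partition(region, fit_fn, eps, origin=(0, 0), leaves=None):
    """Recursive non-uniform (quadtree) partition following steps 1-3.

    region : 2-D array of gray values of the current rectangle G_m
    fit_fn : returns (coefficients, reconstruction, squared error e) for a block
    eps    : control error; a block is accepted when e < eps
    """
    if leaves is None:
        leaves = []
    coef, recon, e = fit_fn(region)
    h, w = region.shape
    if e < eps or min(h, w) <= 2:          # accept, or stop at a minimal block size
        leaves.append((origin, region.shape, coef))
        return leaves
    r, c = h // 2, w // 2                  # split into four rectangular sub-regions
    for dr, dc, sub in [(0, 0, region[:r, :c]), (0, c, region[:r, c:]),
                        (r, 0, region[r:, :c]), (r, c, region[r:, c:])]:
        partition(sub, fit_fn, eps, (origin[0] + dr, origin[1] + dc), leaves)
    return leaves

def mean_fit(block):                        # placeholder model, not the paper's fit
    mu = float(block.mean())
    return (mu,), np.full_like(block, mu, dtype=float), float(((block - mu) ** 2).sum())

img = np.add.outer(np.arange(64.0), np.arange(64.0))
leaves = partition(img, mean_fit, eps=500.0)
print(len(leaves))                          # number of accepted sub-regions (TPcnt)
```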

Although the LCT shows many advantages in image processing, whether image reconstruction based on the combination of the LCT series and the adaptive non-uniform partition algorithm is superior to the traditional Fourier transform is a question that still needs to be examined.

4. Simulation Results and Performance Analyses

In this section, we evaluate our method using two objective metrics: the Peak Signal-to-Noise Ratio (PSNR) [30] and the Structural Similarity Index (SSIM) [31], and we compare the proposed method with other methods under the same ε. All simulation experiments were conducted on a 64-bit Windows 11 PC with a 2.10 GHz Intel Core i7 CPU and 32 GB of RAM, using MATLAB R2022b.

4.1. Objective Evaluation Standard

The objective evaluation of images is based on the characteristics of human visual perception. It employs mathematical models to quantify the differences between a reference image and the image being tested, and the quality of the image is assessed based on the calculated numerical results. This method has become the standard technology for image quality assessment due to its automated processing abilities and scoring consistency, which is unaffected by individual differences among observers. Typical objective evaluation indicators include the peak signal-to-noise ratio and the structural similarity index, among others.

PSNR is a full-reference image quality evaluation metric based on pixel-level error. It is suitable for rapid assessment of pixel-level distortion. The unit of measurement is the decibel (dB). Generally, a higher PSNR value indicates better image quality, with PSNR values greater than 30 dB considered acceptable and those above 40 dB indicating high quality. However, PSNR is based solely on pixel-level errors and cannot reflect structural information. When it comes to tasks involving human eye perception optimization, it is advisable to combine more advanced metrics such as SSIM and Learned Perceptual Image Patch Similarity (LPIPS).

(36) $\mathrm{PSNR}=10\log_{10}\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}},$

where $\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{i,j}-K_{i,j}\right)^{2}$, $I_{i,j}$ is the value of the reference image at pixel $(i,j)$, $K_{i,j}$ is the value of the distorted image at pixel $(i,j)$, and $\mathrm{MAX}_I$ is the maximum possible pixel value.

SSIM is a full-reference image quality assessment method based on human visual characteristics. It measures the similarity between the reference image and the distorted image from three dimensions: Luminance, Contrast, and Structure. The SSIM values range from 0 to 1, where a higher value indicates less image distortion and greater similarity between the two images.

(37) $\mathrm{SSIM}(x,y)=\frac{\left(2\mu_x\mu_y+C_1\right)\left(2\sigma_{xy}+C_2\right)}{\left(\mu_x^{2}+\mu_y^{2}+C_1\right)\left(\sigma_x^{2}+\sigma_y^{2}+C_2\right)},$

where $\mu_x$ and $\mu_y$ are the mean values, $\sigma_x$ and $\sigma_y$ are the standard deviations, $\sigma_{xy}$ is the covariance of $x$ and $y$, and $C_1$ and $C_2$ are constants.
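For reference, Eqs. (36) and (37) can be computed as in the sketch below. Note that the SSIM here is evaluated with global image statistics exactly as written; practical SSIM implementations average the index over local windows, and the constants $C_1=(0.01\,\mathrm{MAX}_I)^2$ and $C_2=(0.03\,\mathrm{MAX}_I)^2$ used below are the conventional choices rather than values specified in this paper.

```python
import numpy as np

def psnr(ref, test, max_i=255.0):
    """Eq. (36): peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_i**2 / mse)

def ssim_global(x, y, max_i=255.0):
    """Eq. (37) with global statistics (conventional constants assumed)."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * max_i) ** 2, (0.03 * max_i) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0.0, 255.0)
print(round(psnr(ref, noisy), 2), round(ssim_global(ref, noisy), 4))
```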

4.2. Objective Quantitative Analysis

In this subsection, a series of experiments are conducted to verify the feasibility of the algorithm. We use PSNR and SSIM as indicators to evaluate image quality. We set ε=10,100,300. In terms of images, we use six 512 × 512 8-bit grayscale images from Set 14. The number of image partitions reflects time complexity to a certain extent. That is to say, the fewer the number of image partitions, the higher the efficiency of image reconstruction. Therefore, we still need to analyze the number of image partitions to evaluate the efficiency of image reconstruction.

We now conduct several experiments to demonstrate the advantages of the proposed method. First, we analyze the PSNR, SSIM, and number of partitions of the proposed algorithm under different values of ε, which helps evaluate the visual quality of the image reconstruction. Figure 2 shows, from left to right, the original images and the reconstructed images for ε=10, ε=100, and ε=300, respectively. Table 1 presents the PSNR, SSIM, and the number of image partitions (TPcnt) under the different error accuracy thresholds.

It can be seen from the experimental data in Figure 2 and Table 1 that, as ε decreases, the number of partitions generated by the proposed algorithm and the corresponding PSNR and SSIM values all increase monotonically. It is particularly noteworthy that when ε=10, the objective evaluation indicators of most reconstructed images approach the high-quality standard (PSNR > 40 dB, SSIM > 0.99). These images also exhibit clear edge details and smooth texture transitions, as confirmed by subjective visual evaluation. This sensitivity to the parameter indicates that the refined regional partition strategy effectively captures high-frequency components in the image, especially along contour boundaries and in complex texture areas.

Next, under the same conditions, we compare and analyze the partition numbers, PSNR, and SSIM of the Fourier transform algorithm, the adaptive non-uniform rectangular partition algorithm, and the proposed algorithm to demonstrate the superiority of the proposed algorithm. Figure 3 illustrates the original image, the reconstructed image by the proposed algorithm, the reconstructed image by the Fourier algorithm, and the reconstructed image by the adaptive non-uniform rectangular partition algorithm when the error accuracy is 100. Table 2 and Table 3 show the corresponding PSNR, SSIM, and the number of image partitions.

It can be seen from Figure 3 and Tables 2 and 3 that when the error accuracy is 100, the PSNR of the proposed algorithm is almost the same as that of the Fourier algorithm and the adaptive non-uniform rectangular partition algorithm on the same image. Nevertheless, the proposed algorithm requires significantly fewer partitions than the other two methods, which indicates that it can achieve the same level of reconstructed image quality with lower computational complexity. In addition, the comparison of SSIM values shows that the proposed algorithm performs better in terms of subjective visual quality. These advantages originate from the dynamic partitioning strategy and adaptive sampling mechanism, which greatly reduce the consumption of computing resources while maintaining accuracy, demonstrating the algorithm's efficiency and practicality.

5. Conclusions

This paper proposes a fast adaptive non-uniform two-dimensional linear canonical transformation partition algorithm based on the theory of two-dimensional linear canonical transformation series. The goal is to solve the issue of repeated superposition when the original linear canonical transformation approximates the image pixel globally and to apply this algorithm in image representation research. The experimental results indicate that the proposed algorithm can effectively reduce the computational complexity of linear canonical transformation when approximating and reconstructing image pixels. Additionally, it enhances the adaptability of image representation in complex image texture areas. The research results will provide theoretical support for time–frequency joint processing in fields such as remote sensing, image encryption, and medical tomography reconstruction, while also promoting the evolution of high-dimensional time–frequency analysis technology towards efficiency and intelligence.

Author Contributions

W.Z.: Conceptualization, Methodology, Writing—review and editing, Writing—original draft. H.L.: Software, Investigation, Data curation, Writing—original draft. K.U.: Conceptualization, Methodology, Supervision, Writing—review and editing. G.Z.: Software, Data curation. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures and Tables

Figure 1 Mesh images at different precisions. (a) ε=10; (b) ε=30; (c) ε=60.


Figure 2 Image reconstruction under different parameters. (a) Original images, (b) image reconstruction when ε=10, (c) image reconstruction when ε=100, and (d) image reconstruction when ε=300.


Figure 3 When ϵ=100, the image from top to bottom is the original image, the reconstructed image of the proposed algorithm, and the reconstructed image of the Fourier algorithm, respectively. (a) Coastguard, (b) Barbara, (c) Comic, (d) Zebra, (e) Foreman, (f) Flowers.


The PSNRs, SSIMs, and TPcnt values of the image under different ϵ conditions.

Image ϵ PSNRs (dB) SSIMs TPcnt
coastguard 10 42.29 0.9914 27,874
100 31.08 0.8158 5837
300 26.43 0.5638 2021
barbara 10 37.34 0.9925 36,074
100 32.63 0.9526 19,280
300 26.67 0.8232 7421
zebra 10 39.91 0.9971 44,083
100 31.27 0.8841 16,004
300 26.69 0.7142 9161
foreman 10 41.62 0.9809 13,318
100 33.17 0.9063 4730
300 28.23 0.8156 2453
flowers 10 40.99 0.9929 30,790
100 32.37 0.9266 10,238
300 26.45 0.7857 4226
comic 10 41.69 0.9965 36,691
100 32.63 0.9549 14,210
300 27.28 0.8654 6122

PSNR (dB), TPcnt, and SSIM comparison between different methods.

Image Coastguard Barbara Comic
PSNRs TPcnt SSIMs PSNRs TPcnt SSIMs PSNRs TPcnt SSIMs
Our proposed 31.08 5837 0.8158 32.63 19,280 0.9526 32.63 14,210 0.9549
Fourier 31.20 9146 0.7818 32.87 24,089 0.8986 32.90 20,057 0.9107
RectPartition 31.14 8390 0.8175 32.86 23,813 0.9511 32.91 19,601 0.9576

PSNR (dB), TPcnt, and SSIM comparison between different methods.

Image Zebra Foreman Flowers
PSNRs TPcnt SSIMs PSNRs TPcnt SSIMs PSNRs TPcnt SSIMs
Our proposed 31.27 16,004 0.8841 33.17 4730 0.9063 32.37 10,238 0.9266
Fourier 31.38 20,357 0.8220 33.39 6491 0.8986 32.41 13,916 0.8729
RectPartition 31.47 18,656 0.8865 33.21 5576 0.9072 32.36 13,541 0.9229

References

1. Wei, D.; Shen, Y. Discrete Complex Linear Canonical Transform Based on Super-differential Operators. Optik; 2021; 230, 166343. [DOI: https://dx.doi.org/10.1016/j.ijleo.2021.166343]

2. Firdous, A.; Azhar, Y. Lattice-based multi-channel sampling theorem for linear canonical transform. Digit. Signal Process.; 2021; 117, 103168. [DOI: https://dx.doi.org/10.1016/j.dsp.2021.103168]

3. Wei, D.; Shen, Y. Fast numerical computation of two-dimensional non-separable linear canonical transform based on matrix decomposition. IEEE Trans. Signal Process.; 2021; 69, pp. 5259-5272. [DOI: https://dx.doi.org/10.1109/TSP.2021.3107974]

4. Morais, J.; Ferreira, M. Hyperbolic linear canonical transforms of quaternion signals and uncertainty. Appl. Math. Comput.; 2023; 450, 127971. [DOI: https://dx.doi.org/10.1016/j.amc.2023.127971]

5. Prasad, A.; Kundu, M. Spectrum of quaternion signals associated with quaternion linear canonical transform. J. Frankl. Inst.; 2024; 361, pp. 764-775. [DOI: https://dx.doi.org/10.1016/j.jfranklin.2023.12.023]

6. Jiang, N.; Feng, Q.; Yang, X.; He, J.R.; Li, B.Z. The octonion linear canonical transform: Properties and applications. Chaos Solitons Fractals; 2025; 192, 116039. [DOI: https://dx.doi.org/10.1016/j.chaos.2025.116039]

7. Dar, A.; Bhat, M. Wigner distribution and associated uncertainty principles in the framework of octonion linear canonical transform. Optik; 2023; 272, 170213. [DOI: https://dx.doi.org/10.1016/j.ijleo.2022.170213]

8. Prasad, A.; Kundu, M. Uncertainty principles and applications of quaternion windowed linear canonical transform. Optik; 2023; 272, 170220. [DOI: https://dx.doi.org/10.1016/j.ijleo.2022.170220]

9. Hu, X.; Cheng, D.; Kou, K.I. Convolution theorems associated with quaternion linear canonical transform and applications. Signal Process.; 2023; 202, 108743. [DOI: https://dx.doi.org/10.1016/j.sigpro.2022.108743]

10. Kou, K.; Liu, M.; Zou, C. Plancherel theorems of quaternion Hilbert transforms associated with linear canonical transforms. Adv. Appl. Clifford Algebr.; 2020; 30, 9. [DOI: https://dx.doi.org/10.1007/s00006-019-1034-4]

11. Bhat, M.; Dar, A. Scaled Wigner distribution in the offset linear canonical domain. Optik; 2022; 262, 169286. [DOI: https://dx.doi.org/10.1016/j.ijleo.2022.169286]

12. Chen, J.; Zhang, Y.; Li, B. Graph Linear Canonical Transform: Definition, Vertex-Frequency Analysis and Filter Design. IEEE Trans. Signal Process.; 2024; 72, pp. 5691-5707. [DOI: https://dx.doi.org/10.1109/TSP.2024.3507787]

13. Ravi, K.; Sheridan, J.; Basanta, B. Nonlinear double image encryption using 2d non-separable linear canonical transform and phase retrieval algorithm. Opt. Laser Technol.; 2018; 107, pp. 353-360. [DOI: https://dx.doi.org/10.1016/j.optlastec.2018.06.014]

14. Li, N.; Zhang, Z.; Han, J.; Chen, Y.; Cao, C. Graph Linear Canonical Transform Based on CM-CC-CM Decomposition. Digit. Signal Process.; 2025; 159, 105015. [DOI: https://dx.doi.org/10.1016/j.dsp.2025.105015]

15. Zhang, Y.; Li, B. Discrete linear canonical transform on graphs: Uncertainty principle and sampling. Signal Process.; 2025; 226, 109668. [DOI: https://dx.doi.org/10.1016/j.sigpro.2024.109668]

16. Li, Z.; Li, B.; Qi, M. Two-dimensional quaternion linear canonical series for color images. Signal Process. Image Commun.; 2022; 12, pp. 1772-1780. [DOI: https://dx.doi.org/10.1016/j.image.2021.116574]

17. Rakheja, P.; Vig, R.; Singh, P. Double image encryption using 3D Lorenz chaotic system, 2D non-separable linear canonical transform and QR decomposition. Opt. Quantum Electron.; 2020; 52, pp. 1811-1820. [DOI: https://dx.doi.org/10.1007/s11082-020-2219-8]

18. Wang, X.; Wang, D.; Tian, J.; Niu, P. Accurate ternary polar linear canonical transform domain stereo image zero-watermarking. Signal Process.; 2026; 239, 110242. [DOI: https://dx.doi.org/10.1016/j.sigpro.2025.110242]

19. Liu, J.; Zhang, F. Discrete Quaternion Offset Linear Canonical Transform and Its Application. Circuits Syst. Signal Processing; 2025; [DOI: https://dx.doi.org/10.1007/s00034-025-03208-4]

20. Zhao, W.; U, K.; Luo, H. Image representation method based on Gaussian function and non-uniform partition. Multimed. Tools Appl.; 2023; 82, pp. 839-861. [DOI: https://dx.doi.org/10.1007/s11042-022-13213-3]

21. Zhang, Y.; Cai, Z.; Xiong, G. A New Image Compression Algorithm Based on Non-Uniform Partition and U-System. IEEE Trans. Multimed.; 2021; 23, pp. 1069-1082. [DOI: https://dx.doi.org/10.1109/TMM.2020.2992940]

22. Yuan, X.; Cai, Z. An Adaptive Triangular Partition Algorithm for Digital Images. IEEE Trans. Multimed.; 2019; 21, pp. 1372-1383. [DOI: https://dx.doi.org/10.1109/TMM.2018.2881069]

23. Gargari, S.F.; Huang, Z.Y.; Dabiri, S. An upwind moving least squares approximation to solve convection-dominated problems: An application in mixed discrete least squares meshfree method. J. Comput. Phys.; 2024; 506, 112931. [DOI: https://dx.doi.org/10.1016/j.jcp.2024.112931]

24. Hong, X.; Kintak, U. A Multi-Focus Image Fusion Algorithm Based on Non-Uniform Rectangular Partition with Morphology Operation. Proceedings of the 2018 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR); Chengdu, China, 15–18 July 2018; pp. 238-243. [DOI: https://dx.doi.org/10.1109/ICWAPR.2018.8521331]

25. Zhao, W.; Kintak, U.; Luo, H. Adaptive non-uniform partition algorithm based on linear canonical transform. Chaos Solitons Fractals; 2022; 163, 112561. [DOI: https://dx.doi.org/10.1016/j.chaos.2022.112561]

26. Qi, M.; Li, B.Z.; Sun, H.F. Image representation by harmonic transforms with parameters in SL(2,R). J. Vis. Commun. Image Represent.; 2016; 35, pp. 184-192. [DOI: https://dx.doi.org/10.1016/j.jvcir.2015.12.010]

27. Cui, C.F.; Qi, L.Q. A genuine extension of the Moore–Penrose inverse to dual matrices. J. Comput. Appl. Math.; 2025; 454, 116185. [DOI: https://dx.doi.org/10.1016/j.cam.2024.116185]

28. Duan, S.B.; Zhou, S.; Li, Z.L.; Liu, X.; Chang, S.; Liu, M.; Huang, C.; Zhang, X.; Shang, G. Improving monthly mean land surface temperature estimation by merging four products using the generalized three-cornered hat method and maximum likelihood estimation. Remote Sens. Environ.; 2024; 302, 113989. [DOI: https://dx.doi.org/10.1016/j.rse.2023.113989]

29. Fazio, R. Scaling invariance theory and numerical transformation method: A unifying framework. Appl. Eng. Sci.; 2020; 4, 100024. [DOI: https://dx.doi.org/10.1016/j.apples.2020.100024]

30. Zhang, B.; Zhang, Y.; Wang, B.; He, X.; Zhang, F.; Zhang, X. Denoising swin transformer and perceptual peak signal-to-noise ratio for low-dose CT image denoising. Measurement; 2024; 227, 114303. [DOI: https://dx.doi.org/10.1016/j.measurement.2024.114303]

31. Corona, G.; Maciel-Castillo, O.; Morales-Castañeda, J.; Gonzalez, A.; Cuevas, E. A new method to solve rotated template matching using metaheuristic algorithms and the structural similarity index. Math. Comput. Simul.; 2023; 206, pp. 130-146. [DOI: https://dx.doi.org/10.1016/j.matcom.2022.11.005]

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).