Purpose
In this study, we present a novel parametric iterative method for computing the polar decomposition and determining the matrix sign function.
Design/methodology/approach
This method demonstrates exceptional efficiency, requiring only two matrix-by-matrix multiplications and one matrix inversion per iteration. Additionally, we establish that the convergence order of the proposed method is three or four, depending on the parameter choice, and confirm that it is asymptotically stable.
Findings
Furthermore, we extend the iterative method to solve the Yang-Baxter-like matrix equation. The efficiency indices of the proposed methods are shown to be superior to those of previous approaches.
Originality/value
The efficiency and accuracy of our proposed methods are demonstrated through various high-dimensional numerical examples, highlighting their superiority over established methods.
1. Introduction
In recent years, much attention has been given in the literature to solving matrix equations (Erfanifar and Hajarian, 2023, 2024a, b, d; Erfanifar et al., 2023). Consider the quadratic matrix equation

$$AXA = XAX, \qquad (1.1)$$

where $A, X \in \mathbb{C}^{n \times n}$; equation (1.1) is called the nonlinear Yang-Baxter-like matrix equation (YBLME). The term “Yang-Baxter equation” stems from the work of the statistical physicists C.N. Yang and R.J. Baxter in the 1970s. The equation they introduced is a fundamental concept in theoretical physics, especially in the context of integrable systems, statistical mechanics, and quantum groups, and it has many applications in mathematics and physics, including statistical mechanics, quantum physics, knot theory, and quantum computation (Tian, 2016; Dong and Ding, 2016; Mansour et al., 2017).
The YBLME possesses significant relevance in the realms of completely integrable quantum and classical systems, as well as in exactly solvable models within statistical physics. In recent years, this equation has garnered substantial attention, undergoing intensive study due to its implications and applications. Additionally, its profound connections with areas of mathematics such as group theory and algebraic geometry have become increasingly evident (Kumar et al., 2022; Baxter, 1972).
Integrable quantum theory is a broad field with applications spanning condensed matter and statistical physics, establishing numerous connections to various branches of mathematics. It presents a distinctive avenue to explore the intricacies of quantum field theory, enabling a more profound understanding of relativistic particle theories by treating them as scaling limits of quantum chains and classical lattice systems. This approach offers deeper insights beyond conventional methods. Acquiring knowledge in integrable quantum theory equips theoretical physicists with valuable tools and contributes to a broader comprehension of subjects that are of interest to mathematicians.
Over the last three decades, the study of YBLME has extended beyond statistical physics and garnered interest from mathematicians due to its profound connections to various branches of pure mathematics. Notably, recent advancements in knot theory have revealed the formulation of knot invariants using statistical mechanics methodologies. While this novel approach to knot invariants can be explored in diverse contexts, a particularly insightful perspective is provided by statistical mechanics, with YBLME at its core. YBLME’s significance manifests in knot invariants, where the cabling approach contributes to constructing new explicit solutions of YBLME derived from existing ones. Solutions to these equations have been uncovered as commuting objects arising from second tensor products of deformations of Lie algebras, specifically quantum groups, in their standard representations. Additionally, YBLME plays a pivotal role in analyzing integrable systems, theoretical physics, quantum and statistical mechanics, knot theory, and the theory of quantum groups. Moreover, the theory of integrable Hamiltonian systems greatly benefits from solutions to the one-parameter form of the Yang-Baxter equation, as coefficients from the power series expansion of such solutions facilitate the computation of integrals of motion. YBLME finds applications in various domains, including differential equations, braid groups, and numerous other disciplines (Bayoumi, 2023; Roberts, 1980; Alhayani and Abdallah, 2021; Ballester-Bolinches et al., 2021; Castelli et al., 2021; Bachiller and Cedó, 2014).
The YBLME possesses at least two trivial solutions: X = 0 and X = A. Both are verified by direct substitution into (1.1): for X = 0 both sides vanish, and for X = A both sides equal A³. The main focus lies in discovering nontrivial solutions to (1.1). This pursuit has proven challenging due to the intricate nature of characterizing the entire set of solutions for a general matrix A (Ding and Rhee, 2012, 2013).
In this paper, we introduce innovative parametric iterative methods aimed at determining the polar decomposition of a matrix. Additionally, we present a new approach for computing the sign function of a matrix. Finally, we extend the iterative methods to compute solutions for the YBLME.
The sign function of a nonsingular square matrix holds significant mathematical value and finds applications across various mathematical domains (Soheili et al., 2015; Nakatsukasa and Freund, 2016; Gomilko et al., 2012; Erfanifar and Hajarian, 2024c).
Suppose that A has no pure imaginary eigenvalues and has the Jordan canonical form

$$A = W \Lambda W^{-1}, \qquad (1.2)$$

where Λ = diag(Λ1, Λ2), the eigenvalues of Λ1 lie in the open left half-plane, and those of Λ2 lie in the open right half-plane. Then, from Roberts (1980), we have

$$S = \operatorname{sign}(A) = W \begin{pmatrix} -I & 0 \\ 0 & I \end{pmatrix} W^{-1}.$$

The matrix S is the matrix sign of A.
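As a quick numerical illustration of this definition (our own sketch, not from the paper, assuming A is diagonalizable so that a plain eigendecomposition can stand in for the Jordan form):

```matlab
% Minimal sketch: sign(A) via an eigendecomposition, assuming A is
% diagonalizable with no eigenvalues on the imaginary axis.
A = [3 1; 0 -2];                      % illustrative matrix; eigenvalues 3, -2
[W, L] = eig(A);                      % A = W*L/W
S = W * diag(sign(real(diag(L)))) / W;
disp(norm(S^2 - eye(2)))              % S^2 = I, up to rounding
disp(norm(S*A - A*S))                 % S commutes with A
```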
In the following, some properties of sign(A) are collected.
- The equation g(x) = 1 has a unique solution in the interval [0, 1), as follows:
- The function g is increasing on (0, T). So, if xk ∈ (0, T), then there is an index k0 ≥ k such that , in fact .
- If xk ∈ (T, 1), then we have xk+1 ∈ (1, λ).
- If xk+1 ∈ (1, λ), then the sequence converges to x = 1.
According to the notes above, the sequence xk+1 = g(xk) converges to x = 1, and since g′(1) = g″(1) = 0, the convergence order is three.
- Since , the critical point of the function g is obtained as follows
- The proof of (iii) follows similarly to the previous parts. □
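The specific map g analyzed above is not reproduced in this extract; the same qualitative behavior (a fixed point at x = 1 with g′(1) = g″(1) = 0, hence cubic convergence) can be observed with the classical Halley/Padé scalar sign map g(x) = x(x² + 3)/(3x² + 1), for which the error satisfies x_{k+1} − 1 = (x_k − 1)³/(3x_k² + 1):

```matlab
% Illustrative scalar demo (not the paper's g): the classical Halley/Pade
% sign map also has g(1) = 1 with g'(1) = g''(1) = 0, and its error obeys
% e_{k+1} = e_k^3 / (3*x_k^2 + 1), i.e. cubic convergence to x = 1.
x = 0.3;                                   % start inside (0, 1)
for k = 1:5
    x = x*(x^2 + 3)/(3*x^2 + 1);           % g(x) = x(x^2+3)/(3x^2+1)
    fprintf('k = %d:  |x - 1| = %.3e\n', k, abs(x - 1));
end
```

The printed errors are roughly cubed at every step, matching the claimed third order.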
In the following, we propose a method for finding the factor U of the polar decomposition of a matrix. By using the sequence (2.5), we obtain the iteration (3.2).
Regarding the iterative method (3.2), the following results are obtained.
Proof. Use the singular value decomposition of A in the form

$$A = P \Sigma Q^{*}, \qquad (3.3)$$

where $P$ and $Q$ are unitary matrices and $\Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_r, 0, \ldots, 0)$ with $\sigma_1 \ge \cdots \ge \sigma_r > 0$. Note that if Range(U*) = Range(H), then U and H are unique. Hence, one can use a result from Ben-Israel and Greville (2003) about the uniqueness of U and H, expressed through Ur, made of the first r columns of P, and Vr, made of the first r columns of Q. Define the transformed iterate as in (3.4). Subsequently, by substituting (3.4) in (3.2), we have (3.5). Since the transformed matrix is diagonal with nonnegative entries, its form is preserved by induction on k (the zero entries may be absent when A has full rank). Accordingly, (3.5) represents r scalar iterations, as in (3.6).
With simple manipulations, and using (3.6), the relationship (3.7) is obtained. Since each σi is positive and (3.7) holds for every i, the iterated singular values converge to 1 for every i; thus Uk → U as k → ∞. The proof is complete. □
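Since the explicit update of (3.2) is not reproduced in this extract, the following MATLAB sketch illustrates the same structure with the classical Newton iteration of Higham (1986) (method R2 in Table 1), which likewise acts only on the singular values of the iterates; the proposed method (3.2) plays the same role with a higher-order update:

```matlab
% Reference sketch, not the proposed method: Higham's (1986) Newton
% iteration (R2 in Table 1) for the unitary polar factor of a
% nonsingular square matrix A.
rng(1);  n = 8;  A = randn(n);
X = A;
for k = 1:100
    Xnew = 0.5*(X + inv(X)');            % X_{k+1} = (X_k + X_k^{-*})/2
    if norm(Xnew - X, 'fro') < 1e-12*norm(Xnew, 'fro'), X = Xnew; break, end
    X = Xnew;
end
U = X;  H = U'*A;                        % polar decomposition A = U*H
fprintf('||U''U - I|| = %.2e\n', norm(U'*U - eye(n), 'fro'));
fprintf('||A - U*H||  = %.2e\n', norm(A - U*H, 'fro'));
```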
In the following, we prove that the method (3.2) is third-order convergent.
Proof. By applying Theorem 3.2, the method (3.2) transforms the singular values of Uk according to (3.8).
From (3.8), we see that the convergence of the singular values is of third order for k ≥ 1, as expressed in (3.9). Then we have (3.10). Thus, the method (3.2) has third-order convergence. □
In the following, we propose two third-order convergent iterative methods: one choice of the parameter yields (3.11), and another choice yields (3.12).
Proof. According to (3.9), for the stated choice of the parameter we obtain (3.13). Therefore, (3.14) follows.
Finally, for this parameter choice, the order of convergence of the method (3.2) is four. □
The proposed method for finding the unitary polar factor of A is given by (3.15), or equivalently by (3.16).
Note that the methods (3.16), (3.11) and (3.12) are denoted by R8, R9 and R10, respectively.
3.1 Extension of the methods for matrix sign
In the following, we explore an application of the iterative method (3.2) to compute the matrix sign function. A connection exists between the polar decomposition and the matrix sign function. Table 2 showcases established methods utilized for determining the matrix sign function.
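One concrete form of this connection, stated here for reference (it is a known identity, see, e.g., Higham, 2008, and is not claimed to be the paper's derivation): if A = UH is the polar decomposition of a nonsingular A, then sign([0 A; A* 0]) = [0 U; U* 0]. The sketch below checks this numerically, using the classical Newton sign iteration as a stand-in for the methods of Table 2:

```matlab
% Numerical check of the polar/sign block identity (cf. Higham, 2008):
% for nonsingular A = U*H, sign([0 A; A' 0]) = [0 U; U' 0].
rng(2);  n = 6;  A = randn(n);
[P, Sig, Q] = svd(A);  U = P*Q';          % exact unitary polar factor
B = [zeros(n) A; A' zeros(n)];
S = B;                                     % Newton iteration for sign(B)
for k = 1:100
    Snew = 0.5*(S + inv(S));
    if norm(Snew - S, 'fro') < 1e-12*norm(Snew, 'fro'), S = Snew; break, end
    S = Snew;
end
disp(norm(S - [zeros(n) U; U' zeros(n)], 'fro'))   % ~ 0
```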
Now, we develop the proposed methods of this study to find the matrix sign function. In fact, the sign-function analogues of the iterative methods (3.16), (3.11) and (3.12) are (3.17), (3.18) and (3.19), respectively.
Note that the methods (3.17), (3.18) and (3.19) are denoted by E6, E7 and E8, respectively.
An equilibrium point is said to be stable if, for every initial value sufficiently close to the equilibrium point, the solution remains close to the equilibrium point.
An equilibrium point is said to be asymptotically stable if it is stable and, for every initial value sufficiently close to the equilibrium point, the solution converges to the equilibrium point.
Proof. Let Δk be a numerical perturbation introduced at the k-th iterate of (3.17), so that the computed iterate is Yk + Δk. Now, according to Soleymani et al. (2015), for any nonsingular matrix E and any matrix F of sufficiently small norm, we can use the first-order approximation $(E + F)^{-1} \approx E^{-1} - E^{-1} F E^{-1}$. Assuming Yk ≈ sign(A) = S for sufficiently large k, using S2 = I and S−1 = S, and neglecting terms in Δki for i ≥ 2, we conclude that the perturbation at the iterate k + 1 is bounded. Therefore, the iterative method (3.17) is asymptotically stable. This ends the proof. □
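As a numerical check of this stability behavior (our own illustration, using the classical Newton sign iteration as a stand-in for (3.17), whose explicit form is not reproduced in this extract), a perturbation injected at one iterate remains bounded in the subsequent iterations:

```matlab
% Illustration of asymptotic stability: a perturbation Delta_k injected
% at one iterate stays bounded afterwards instead of being amplified.
rng(5);  n = 8;  A = randn(n) + 2*eye(n);
Y = A;
for k = 1:10, Y = 0.5*(Y + inv(Y)); end    % run clean iterations first
Yp = Y + 1e-8*randn(n);                    % inject the perturbation
for k = 1:4
    Y  = 0.5*(Y  + inv(Y));
    Yp = 0.5*(Yp + inv(Yp));
    fprintf('step %d: ||perturbation|| = %.2e\n', k, norm(Yp - Y, 'fro'));
end
```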
Proof. Every matrix has a Jordan canonical form A = WΛW−1, where Λ consists of Jordan blocks and W is a nonsingular matrix. Therefore, let A have a Jordan canonical form arranged as Λ = diag(C, N), where C and N are square Jordan blocks corresponding to eigenvalues lying in $\mathbb{C}^{-}$ and $\mathbb{C}^{+}$, the open left-half and open right-half complex planes, respectively.
Since A is invertible, according to Higham (2008), we have (3.20). Then we can get (3.21), where μ1, μ2, …, μq and μq+1, μq+2, …, μn are the eigenvalues lying on the main diagonals of C and N, respectively.
In the following, we define Hk = W−1YkW; then, from the method (3.17), we obtain (3.22). Note that if H0 is a diagonal matrix, then all subsequent Hk are diagonal. Now it is enough to prove from (3.22) that the iterative method (3.17) converges to sign(A).
We can write (3.22) as the scalar iteration (3.23) for solving h(y) = y2 − 1 = 0, applied to the diagonal entries of Hk, 1 ≤ i ≤ n. By using (3.22) and (3.23), it suffices to prove that each diagonal sequence converges to sign(μi) = si = ±1. Since no μi is purely imaginary, each scalar sequence converges, so limk→∞Hk = sign(Λ); and from Hk = W−1YkW, we can get limk→∞Yk = W sign(Λ)W−1 = sign(A).
Finally, the proof is completed.□
In the following, we give some theorems for the relationship between the matrix sign function and the solutions of YBLME.
(1) If S1 is the sign function of G = A + βI, then the two matrices determined by S1 are solutions of (1.1).
(2) If S2 is the sign function of G = βA, then the two matrices determined by S2 are solutions of (1.1).
Algorithm 1 applies the method (3.19) to solve the YBLME.
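The explicit solution formulas of items (1) and (2) above are given in the original paper; purely for illustration, the sketch below uses one standard sign-based construction of the same type: if S² = I and S commutes with A, then P = (I + S)/2 is a projector commuting with A, so X = AP satisfies AXA = A³P = XAX. Here sign(A + βI) is computed by the classical Newton iteration as a stand-in for method (3.19):

```matlab
% Illustration only: a standard sign-based construction for the YBLME,
% assuming the projector form X = A*(I + S)/2 (the paper's exact formulas
% differ). Newton's iteration computes S = sign(A + beta*I) here.
rng(3);  n = 10;  A = randn(n);  beta = 0.1;
S = A + beta*eye(n);                       % G = A + beta*I
for k = 1:100
    Snew = 0.5*(S + inv(S));               % Newton step toward sign(G)
    if norm(Snew - S, 'fro') < 1e-12*norm(Snew, 'fro'), S = Snew; break, end
    S = Snew;
end
X = A*(eye(n) + S)/2;                      % P = (I+S)/2 is a projector
fprintf('||AXA - XAX|| = %.2e\n', norm(A*X*A - X*A*X, 'fro'));
```

Replacing the Newton step with (3.19) changes only the inner loop of this pipeline.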
4. Efficiency index
The concept of efficiency is a cornerstone of numerical methods. Enhancing efficiency, often synonymous with effectiveness in engineering contexts, means minimizing the resources an operation consumes. Since fast-converging methods may incur substantial per-iteration costs, establishing an efficiency index for numerical techniques is crucial: it provides a theoretical measure with direct practical implications for implementation time.
Note that, to obtain a fair comparison, if the cost of one matrix-by-matrix multiplication (m) is β, then the cost of one matrix inversion (c) is more than 2β.
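The precise definition of EI is given in the original paper; one common Ostrowski-type choice, consistent with the m + 2c cost totals reported in the tables and assumed in the sketch below, is

```latex
\mathrm{EI} = p^{\,1/\big(s_i\,(m + 2c)\big)},
\qquad\text{e.g., per iteration } (s_i = 1):\quad
\mathrm{EI}_{\mathrm{R8}} = 4^{1/4} \approx 1.414,\quad
\mathrm{EI}_{\mathrm{R9}} = \mathrm{EI}_{\mathrm{R10}} = 3^{1/4} \approx 1.316.
```

Here p, m and c for R8–R10 are taken from Table 3; only the exponent convention is our assumption.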
Now, the approximate EI values of the existing methods are given in Table 1, wherein si denotes the number of iterations required for the convergence of each method, measured in the same environment. For example, if si for method R1 is s, then si for method R4 is a corresponding fraction of s. Table 3 shows si, m, c and EI for the proposed methods. Figure 2 shows EI = 1 + ɛ of each method as a function of s.
The EI comparison supports the superiority of the new methods on the numerical examples solved in the next section.
5. Numerical examples
We compare the iterative methods on several examples. All programs are run in MATLAB, and the criterion for stopping is: Furthermore, we set .
We investigate square random matrices of various dimensions, and the outcomes are visually illustrated in Figure 9.
Figures 5–8 along with Tables 6–11 illustrate the iteration counts and computational cost of each method. Notably, the iterative methods (3.16), (3.11), and (3.12) for determining the polar decomposition of matrices, as well as the iterative methods (3.17), (3.18), and (3.19) for computing the sign function of matrices, outperform the others significantly. Tables 6 through 11 exhibit ‖AXA − XAX‖, thereby showcasing that the solution of YBLME is efficiently achieved at an optimal computational expense.
To conclude, we investigated several random real matrices and random complex matrices with varying dimensions. The obtained results are visually depicted in Figure 9.
6. Conclusions
The solution of the quadratic matrix equation $AXA = XAX$ has been instrumental in solving numerous problems in knot theory, differential equations, braid groups, and quantum groups. One approach to solving this equation involves the matrix sign function. In this study, we extensively explored new iterative methods for computing the matrix polar decomposition and sign function. The investigation demonstrated that the proposed methods exhibit global convergence of orders three and four. Through comprehensive testing with various examples, the proposed methods showed significant superiority over other well-established methods.
The authors wish to express their gratitude to the editor and anonymous reviewers for helpful remarks and suggestions.
Conflicts of interest: The authors have no conflict of interest to declare.
Data availability: The data that support the findings of this study are available from the corresponding author upon reasonable request.
Figure 1
Plots of the line y = x and the function g for different values of b
[Figure omitted. See PDF]
Figure 2
EI of mentioned methods for s
[Figure omitted. See PDF]
Figure 3
The residual of the results of the several iterative methods for matrix A with n = 20, 30
[Figure omitted. See PDF]
Figure 4
The residual of the results of the several iterative methods for matrix B with n = 20, 30
[Figure omitted. See PDF]
Figure 5
The residual of the results of the several iterative methods for matrix C with n = 20, 30
[Figure omitted. See PDF]
Figure 6
The residual of the results of the several iterative methods for matrix D with n = 20, 30
[Figure omitted. See PDF]
Figure 7
The residual of the results of the several iterative methods for matrix E with n = 20, 30
[Figure omitted. See PDF]
Figure 8
The residual of the results of the several iterative methods for matrix F with n = 20, 30
[Figure omitted. See PDF]
Figure 9
Numerical results for random real and random complex matrices with different dimensions
[Figure omitted. See PDF]
Table 1
Methods and their computational efficiency
| Method | Form | si | m | c | p | EI |
|---|---|---|---|---|---|---|
| R1 (Kovarik, 1970) | | s | 3 | 1 | 2 | |
| R2 (Higham, 1986) | | s | 0 | 1 | 2 | |
| R3 (Gander, 1990) | | s | 2 | 1 | 2 | |
| R4 (Gander, 1985) | | | 3 | 1 | 3 | |
| R5 (Haghani and Soleymani, 2015) | | | 5 | 1 | 4 | |
| R6 (Khaksar Haghani, 2014) | | | 4 | 1 | 3 | |
| R7 (Soleymani et al., 2016) | | | 6 | 1 | 6 | |
Source(s): Created by authors
Table 2
The methods to find the matrix sign function
| Method | Form |
|---|---|
| E1 | |
| E2 | |
| E3 | |
| E4 | |
| E5 |
Source(s): Created by authors
Table 3
Methods and their computational efficiency
| Method | Form | si | m | c | p | EI |
|---|---|---|---|---|---|---|
| R8 | | | 2 | 1 | 4 | |
| R9 | | | 2 | 1 | 3 | |
| R10 | | | 2 | 1 | 3 | |
Source(s): Created by authors
Table 4
Several matrices for Example 5.1
| Matrix | Entry |
|---|---|
| A (Pascal) | ai1 = a1j = 1, aij = ai−1,j + ai,j−1 |
| B (Lotkin) | a1j = 1, aij = 1/(i + j − 1) for i ≥ 2 |
| C (Hilbert,Hilbert) |
Source(s): Created by authors
Table 5
Results of iterations for matrix A
| n | Method | R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8 | R9 | R10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | m | 111 | 0 | – | 69 | 65 | 72 | 72 | 38 | 26 | 24 |
| c | 37 | 37 | – | 23 | 13 | 18 | 12 | 19 | 13 | 12 | |
| m + 2c | 185 | 74 | – | 115 | 91 | 108 | 96 | 76 | 52 | 48 | |
| CPU | 0.0200 | 0.0162 | – | 0.0127 | 0.0139 | 0.0117 | 0.0111 | 0.0091 | 0.0083 | 0.0055 | |
| 20 | m | 225 | 0 | – | 147 | 130 | 148 | 144 | 76 | 48 | 44 |
| c | 75 | 123 | – | 49 | 26 | 37 | 24 | 38 | 24 | 22 | |
| m + 2c | 375 | 246 | – | 245 | 182 | 222 | 192 | 152 | 96 | 88 | |
| CPU | 0.0210 | 0.0202 | – | 0.0142 | 0.0143 | 0.0149 | 0.0135 | 0.0131 | 0.0105 | 0.0084 | |
| 30 | m | 285 | – | – | 171 | 170 | 176 | 180 | 98 | 60 | 54 |
| c | 95 | – | – | 57 | 34 | 44 | 30 | 49 | 30 | 27 | |
| m + 2c | 475 | – | – | 285 | 238 | 264 | 240 | 196 | 120 | 108 | |
| CPU | 0.0538 | – | – | 0.0363 | 0.0246 | 0.0265 | 0.0258 | 0.0236 | 0.0185 | 0.0175 |
Source(s): Created by authors
Table 6
Results of iterations for matrix B
| n | Method | R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8 | R9 | R10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | m | 147 | 0 | – | 93 | 90 | 96 | 96 | 52 | 34 | 30 |
| c | 49 | 49 | – | 31 | 18 | 24 | 16 | 26 | 17 | 15 | |
| m + 2c | 245 | 98 | – | 155 | 126 | 144 | 128 | 104 | 68 | 60 | |
| CPU | 0.0200 | 0.0176 | – | 0.0132 | 0.0115 | 0.0120 | 0.0125 | 0.0099 | 0.0077 | 0.0052 | |
| 20 | m | 216 | 126 | – | 135 | 115 | 128 | 132 | 70 | 42 | 40 |
| c | 72 | 72 | – | 45 | 23 | 32 | 22 | 35 | 21 | 20 | |
| m + 2c | 360 | 270 | – | 225 | 161 | 192 | 176 | 140 | 84 | 80 | |
| CPU | 0.0228 | 0.0112 | – | 0.0170 | 0.0189 | 0.0149 | 0.0171 | 0.0134 | 0.0089 | 0.0080 | |
| 30 | m | 198 | – | – | 126 | 130 | 132 | 126 | 70 | 44 | 40 |
| c | 66 | – | – | 42 | 26 | 33 | 21 | 35 | 22 | 20 | |
| m + 2c | 330 | – | – | 210 | 182 | 198 | 168 | 140 | 88 | 80 | |
| CPU | 0.0324 | – | – | 0.0204 | 0.0169 | 0.0171 | 0.0177 | 0.0158 | 0.0108 | 0.0093 |
Source(s): Created by authors
Table 7
Results of iterations for matrix C
| n | Method | R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8 | R9 | R10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | m | 147 | 0 | – | 93 | 90 | 96 | 96 | 52 | 32 | 30 |
| c | 49 | 49 | – | 31 | 18 | 24 | 16 | 26 | 16 | 15 | |
| m + 2c | 245 | 98 | – | 155 | 126 | 144 | 128 | 104 | 64 | 60 | |
| CPU | 0.0196 | 0.0159 | – | 0.0159 | 0.0129 | 0.0120 | 0.0123 | 0.0114 | 0.0087 | 0.0069 | |
| 20 | m | 195 | 214 | – | 129 | 115 | 124 | 126 | 74 | 42 | 40 |
| c | 65 | 107 | – | 43 | 23 | 31 | 21 | 37 | 21 | 20 | |
| m + 2c | 325 | 328 | – | 215 | 161 | 186 | 168 | 148 | 84 | 80 | |
| CPU | 0.0241 | 0.0254 | – | 0.0160 | 0.0151 | 0.0181 | 0.0172 | 0.0144 | 0.0109 | 0.0098 | |
| 30 | m | 210 | – | – | 123 | 120 | 128 | 126 | 70 | 46 | 40 |
| c | 67 | – | – | 41 | 24 | 32 | 21 | 35 | 23 | 20 | |
| m + 2c | 335 | – | – | 205 | 168 | 192 | 168 | 140 | 92 | 80 | |
| CPU | 0.0582 | – | – | 0.0335 | 0.0267 | 0.0288 | 0.0265 | 0.0253 | 0.0187 | 0.0156 |
Source(s): Created by authors
Table 8
Several matrices for Example 5.2
| Matrix | Entry |
|---|---|
| D (Hilbert) | aij = 1/(i + j − 1) |
| E (Cauchy) | |
| F (Lehmer, Lotkin) |
Source(s): Created by authors
Table 9
Results of iterations for matrix D
| n | Method | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 |
|---|---|---|---|---|---|---|---|---|---|
| 10 | m | 0 | 162 | 93 | 135 | 144 | 52 | 32 | 30 |
| c | 49 | 0 | 31 | 27 | 24 | 26 | 16 | 15 | |
| m + 2c | 98 | 162 | 155 | 189 | 192 | 104 | 64 | 60 | |
| CPU | 0.0075 | 0.0123 | 0.0097 | 0.0104 | 0.0115 | 0.0104 | 0.0067 | 0.0052 | |
| ‖AXA − XAX‖ | 1.02E-15 | 1.25E-15 | 2.68E-14 | 8.54E-15 | 3.60E-15 | 1.69E-15 | 8.54E-15 | 1.05E-15 | |
| 20 | m | 0 | 210 | 126 | 170 | 180 | 70 | 44 | 40 |
| c | 69 | 0 | 42 | 34 | 30 | 35 | 22 | 20 | |
| m + 2c | 138 | 210 | 210 | 238 | 240 | 140 | 88 | 80 | |
| CPU | 0.0148 | 0.0183 | 0.0122 | 0.0134 | 0.0152 | 0.0130 | 0.0104 | 0.0069 | |
| ‖AXA − XAX‖ | 2.6105 | 2.30E-13 | 1.09E-14 | 5.73E-14 | 1.83E-12 | 1.81E-14 | 1.30E-14 | 6.61E-14 | |
| 30 | m | 0 | – | 126 | 180 | 186 | 72 | 44 | 44 |
| c | 66 | – | 42 | 36 | 31 | 36 | 22 | 22 | |
| m + 2c | 132 | – | 210 | 252 | 248 | 144 | 88 | 88 | |
| CPU | 0.0162 | – | 0.0137 | 0.0180 | 0.0191 | 0.0162 | 0.0099 | 0.0086 | |
| ‖AXA − XAX‖ | 3.2969 | – | 1.88E-14 | 8.22E-14 | 1.54E-11 | 3.26E-14 | 2.49E-14 | 4.45E-14 |
Source(s): Created by authors
Table 10
Results of iterations for matrix E
| n | Method | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 |
|---|---|---|---|---|---|---|---|---|---|
| 10 | m | 0 | 0 | 96 | 140 | 150 | 54 | 34 | 32 |
| c | 51 | 84 | 32 | 28 | 25 | 27 | 17 | 16 | |
| m + 2c | 102 | 168 | 160 | 196 | 200 | 108 | 68 | 64 | |
| CPU | 0.0054 | 0.0053 | 0.0034 | 0.0043 | 0.0049 | 0.0031 | 0.0021 | 0.0019 | |
| ‖AXA − XAX‖ | 6.98E-15 | 1.25E-15 | 3.05E-16 | 5.64E-14 | 8.23E-13 | 1.45E-15 | 8.69E-15 | 3.27E-16 | |
| 20 | m | 0 | – | 132 | 170 | 180 | 72 | 42 | 38 |
| c | 64 | – | 44 | 34 | 30 | 36 | 21 | 19 | |
| m + 2c | 128 | – | 220 | 238 | 240 | 144 | 84 | 76 | |
| CPU | 0.0105 | – | 0.0080 | 0.0093 | 0.0105 | 0.0095 | 0.0072 | 0.0048 | |
| ‖AXA − XAX‖ | 0.5635 | – | 4.51E-14 | 5.26E-14 | 2.77E-13 | 4.70E-15 | 1.33E-14 | 1.26E-14 | |
| 30 | m | 0 | – | 123 | 175 | 186 | 68 | 44 | 40 |
| c | 69 | – | 41 | 35 | 31 | 34 | 22 | 20 | |
| m + 2c | 138 | – | 205 | 245 | 248 | 136 | 88 | 80 | |
| CPU | 0.0162 | – | 0.0093 | 0.0088 | 0.0099 | 0.0061 | 0.0041 | 0.0037 | |
| ‖AXA − XAX‖ | 8.54E-15 | – | 6.58E-15 | 5.32E-15 | 8.54E-15 | 8.02E-15 | 4.56E-15 | 9.34E-15 |
Source(s): Created by authors
Table 11
Results of iterations for matrix F
| n | Method | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 |
|---|---|---|---|---|---|---|---|---|---|
| 10 | m | 0 | 34 | 21 | 30 | 30 | 12 | 10 | 10 |
| c | 11 | 0 | 7 | 6 | 5 | 6 | 5 | 5 | |
| m + 2c | 22 | 34 | 35 | 42 | 40 | 24 | 20 | 20 | |
| CPU | 0.0053 | 0.0055 | 0.0060 | 0.0082 | 0.0097 | 0.0062 | 0.0055 | 0.0038 | |
| ‖AXA − XAX‖ | 8.09E-14 | 9.49E-14 | 8.04E-14 | 6.25E-14 | 6.44E-14 | 8.69E-14 | 3.54E-14 | 4.52E-14 | |
| 20 | m | 0 | 40 | 27 | 35 | 54 | 14 | 12 | 12 |
| c | 13 | 0 | 9 | 7 | 9 | 7 | 6 | 6 | |
| m + 2c | 26 | 40 | 45 | 49 | 72 | 28 | 24 | 24 | |
| CPU | 0.0093 | 0.0104 | 0.0106 | 0.0108 | 0.0129 | 0.0109 | 0.0098 | 0.0060 | |
| ‖AXA − XAX‖ | 8.54E-15 | 5.98E-13 | 9.58E-13 | 9.54E-13 | 4.56E-13 | 1.42E-13 | 8.04E-13 | 3.50E-13 | |
| 30 | m | 0 | 44 | 27 | 40 | 48 | 16 | 12 | 12 |
| c | 14 | 0 | 9 | 8 | 8 | 8 | 6 | 6 | |
| m + 2c | 28 | 44 | 45 | 56 | 64 | 32 | 24 | 24 | |
| CPU | 0.0096 | 0.0094 | 0.0096 | 0.0121 | 0.0129 | 0.0106 | 0.0094 | 0.0071 | |
| ‖AXA − XAX‖ | 8.54E-15 | 8.59E-13 | 6.04E-13 | 8.24E-13 | 1.24E-12 | 5.98E-12 | 1.54E-13 | 9.66E-13 |
Source(s): Created by authors
© Emerald Publishing Limited.
