Nonlinear matrix equations
Introduction
Jin and Zhai [1] discussed fixed-point iterative algorithms for computing the unique solution of the nonlinear matrix equation (1), where p and m are positive integers, the coefficient matrices are nonsingular, and A and B are positive definite matrices. In fact, the nonlinear matrix equation (1) is a generalization of the nonlinear matrix equation (2) studied in [2], of the nonlinear matrix equation (3) studied in [3–6], and of further related matrix equations treated in [7–10]. In a special case, the matrix equation (2) reduces to a discrete algebraic Riccati equation, which often arises in control theory, ladder networks, dynamic programming, and stochastic filtering; see [11–14] and the references therein. It has been proved that a unique positive definite solution exists if A and B are positive definite matrices. In another special case, the matrix equation (3) reduces to the well-known Stein equation, which is of great importance in signal processing, system and control theory, and many other areas [15]; it has a unique positive definite solution under a spectral radius condition. The existence and uniqueness of the Hermitian positive definite solution of the matrix equation (2) has been studied by several scholars. Lee and Lim [16] showed that, for positive semidefinite matrices A and B, the associated matrix map is nonexpansive with respect to the Riemannian metric distance. Using this nonexpansiveness, Jung, Kim, and Lim [17] showed that, when A and B are positive semidefinite matrices, the matrix equation (2) has a unique positive definite solution under a suitable restriction on the parameters.
In this paper, a fixed-point accelerated iterative algorithm for solving the nonlinear matrix equation (1) is proposed, and its convergence is proved based on the basic properties of the Thompson metric space. An error estimate for the iterative solution and some numerical examples illustrating the effectiveness of the algorithm are given.
Preliminaries
Throughout, we work with the sets of Hermitian matrices, Hermitian positive semidefinite matrices, and Hermitian positive definite matrices. ‖A‖_F represents the Frobenius norm of a matrix A. For Hermitian matrices B and C, B ≥ C (B > C) means that B − C is positive semidefinite (positive definite). λ_max(A) and λ_min(A) indicate the maximum and minimum eigenvalues of a Hermitian matrix A, respectively, and I indicates the identity matrix.
Definition 2.1
[18] Let A and B be Hermitian positive definite matrices. The Thompson distance between A and B is defined as d(A, B) = max{log M(A/B), log M(B/A)}, where M(A/B) = inf{λ > 0 : A ≤ λB} = λ_max(B^{−1/2} A B^{−1/2}).
A space with a defined Thompson distance is called a Thompson metric space. For a Thompson metric space, the following Lemma 2.1, Lemma 2.2, and Lemma 2.3 hold.
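As a small illustration of this definition, the Thompson distance can be evaluated numerically as the largest absolute logarithm of the generalized eigenvalues of the pair (A, B). The following MATLAB sketch is ours (the helper name thompson_dist is hypothetical, not notation from the paper):

```matlab
function d = thompson_dist(A, B)
% Thompson distance between Hermitian positive definite matrices A and B:
% d(A,B) = max_i |log(lambda_i)|, where the lambda_i are the generalized
% eigenvalues of the pencil (A, B), i.e. the eigenvalues of B^{-1}A.
lam = eig(A, B);       % generalized eigenvalues of A*v = lambda*B*v
lam = real(lam);       % discard spurious imaginary round-off
d   = max(abs(log(lam)));
end
```

For instance, thompson_dist(2*eye(3), eye(3)) returns log 2, in agreement with the definition.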
Lemma 2.1
[19, 20] For any nonsingular matrix W, the following equalities and inequalities hold:
Lemma 2.2
[19, 20] For any, the following equalities hold:
Lemma 2.3
[19, 20] For any , , the following inequality holds, where , .
In addition, the following Lemma 2.4 can be obtained by direct calculation according to the Thompson metric space definition.
Lemma 2.4
For the constant, , we have.
Lemma 2.5
[21] If , then for any , . Similarly, if , then for any , .
Lemma 2.6
Assume that X is a positive definite solution of the nonlinear matrix equation (1). Then the following inequality holds:
Proof
Since X is a positive definite solution of the nonlinear matrix equation (1), it satisfies (1), and hence . Furthermore, we have by Lemma 2.5 that . Hence, the result holds. □
Lemma 2.7
Assume that X is a positive definite solution of the nonlinear matrix equation (1). Then the inequality holds for any nonsingular matrix A and any positive definite matrix B.
Proof
By Lemma 2.1, Lemma 2.3, Lemma 2.5, and Lemma 2.6, the stated inequality follows. Hence, the result holds. □
Let Ω be the set given by (4). Then we have the following Lemma 2.8.
Lemma 2.8
The matrix function ϕ satisfies the Lipschitz condition on Ω; that is, for any X, Y ∈ Ω, we have d(ϕ(X), ϕ(Y)) ≤ θ d(X, Y), where the Lipschitz (compression) coefficient θ is given by (5).
Proof
By Lemma 2.1, Lemma 2.2, Lemma 2.3, and Lemma 2.7, the stated Lipschitz estimate follows. Hence, the result holds. □
Lemma 2.9
[22] If ϕ is a compression mapping on the distance space Ω with compression coefficient θ < 1, then the mapping ϕ has a unique fixed point on Ω.
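For orientation, the contraction principle also yields the familiar a priori and a posteriori error bounds below, stated for a generic contraction ϕ with coefficient θ < 1 on a complete metric space, with iterates X_{k+1} = ϕ(X_k) converging to the fixed point X_* (a standard fact recorded here for reference; the bound (6) of Theorem 3.1 is of the same flavour but additionally accounts for the backtracking step of Algorithm 1):
\[
d(X_k, X_*) \le \frac{\theta^{k}}{1-\theta}\, d(X_1, X_0),
\qquad
d(X_k, X_*) \le \frac{\theta}{1-\theta}\, d(X_k, X_{k-1}),
\qquad k \ge 1 .
\]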
According to Lemma 2.8 and Lemma 2.9, the following lemma can be immediately obtained.
Lemma 2.10
The nonlinear matrix equation (1) has a unique positive definite solution; furthermore, this unique positive definite solution lies in the set Ω given as (4).
Iterative algorithm and its convergence
In this section, we first propose a fixed-point accelerated iterative algorithm for solving the nonlinear matrix equation (1) and then, based on the basic properties of the Thompson distance, prove the convergence of the proposed algorithm and derive an error estimate for the iterative solution. The proposed fixed-point accelerated iterative algorithm for solving the nonlinear matrix equation (1) is stated as Algorithm 1.
Algorithm 1
Fixed-point accelerated algorithm to solve (1) [see PDF for image]
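As a rough illustration of the outer structure shared by Algorithm 1 and the plain fixed-point iteration, the following MATLAB skeleton runs a generic fixed-point loop with a residual-based stopping rule; phi, res, tol, and maxit are placeholder arguments, and the acceleration/backtracking step specific to Algorithm 1 is not reproduced here.

```matlab
function [X, iter] = fp_iteration(phi, res, X0, tol, maxit)
% Generic fixed-point iteration X_{k+1} = phi(X_k) with a residual-based
% stopping rule; phi and res are function handles supplied by the caller.
X = X0;
for iter = 1:maxit
    X = phi(X);                 % one fixed-point step
    if res(X) < tol             % stop once the residual is small enough
        return
    end
end
warning('fp_iteration:maxit', 'maximum number of iterations reached');
end
```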
Theorem 3.1
The nonlinear matrix equation (1) has a unique positive definite solution in Ω given as (4), and for any initial matrix in Ω, the matrix sequence generated by Algorithm 1 converges to this unique positive definite solution. Furthermore, the error between the iterates and the true solution of the nonlinear matrix equation (1) satisfies the estimate (6), where ⌊·⌋ denotes the largest integer not exceeding its argument and θ is the compression coefficient of the matrix function ϕ given in Lemma 2.8.
Proof
The result that the nonlinear matrix equation (1) has a unique positive definite solution in Ω given as (4) follows directly from Lemma 2.10, and hence we do not prove it here. We prove the conclusion (6) by induction. For the base case, we have by Lemma 2.4 and Lemma 2.8 that (6) holds. Assume that the inequality holds up to a given index. Then, again by Lemma 2.4 and Lemma 2.8, it also holds at the next index in each of the two cases that arise. Hence, by the principle of induction, the inequality holds for all k.
Note that, since the compression coefficient satisfies θ < 1, the right-hand side of (6) tends to zero as k → ∞. This means that the matrix sequence generated by Algorithm 1 converges to the unique positive definite solution of the nonlinear matrix equation (1), and the error estimation formula can be expressed as (6). □
Numerical experiments
In this section, some numerical experiments are reported to illustrate the feasibility and superiority of Algorithm 1. The experiments are divided into two parts. In the first part, the relationship between the backtracking steps and the convergence of Algorithm 1 is analyzed, and the selection range of the best backtracking steps is given. In the second part, the convergence of Algorithm 1 is compared with that of the fixed-point iterative algorithm. In all numerical experiments, the coefficient matrices and the matrices A and B are randomly generated by MATLAB; furthermore, to make sure the matrix equation (1) has a solution, the matrices A and B are chosen to be Hermitian positive definite. In all tests, a fixed initial iteration matrix is used, and the iteration terminates once the residual falls below a prescribed tolerance. All results are obtained under Windows 10 (64-bit) with MATLAB R2013a.
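As an illustrative assumption only (not necessarily the authors' exact construction), one common way to generate Hermitian positive definite test matrices and an initial iterate of this kind in MATLAB is:

```matlab
n  = 100;                 % matrix dimension
A0 = rand(n);             % random matrix generated by MATLAB
A  = A0*A0' + n*eye(n);   % symmetric positive definite by construction
B0 = rand(n);
B  = B0*B0' + n*eye(n);   % symmetric positive definite by construction
X0 = eye(n);              % one common choice of initial iterate (assumption)
```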
Selection of the backtracking step
The backtracking steps are closely related to the convergence speed of Algorithm 1. Consider the nonlinear matrix equation (7).
It can be seen from Fig. 1 that the convergence speed of Algorithm 1 is directly affected by the backtracking step. In general, when the backtracking step is chosen between 2 and 10, Algorithm 1 is more efficient than in the other cases. However, how to select the best backtracking step is an important problem that should be studied in the future.
Figure 1
Backtracking steps and calculation time curves for different matrix sizes [see PDF for image]
Comparison of convergence with existing algorithms
The fixed-point iterative algorithm [1] used to solve the nonlinear matrix equation (1) has the following iterative formula:
Note that Algorithm 1 is most efficient when the backtracking step is chosen between 2 and 10. Hence, for all of the following tests, the backtracking step in Algorithm 1 is fixed at 3. In addition, for convenience, we use the notations FIXA and FIX to denote Algorithm 1 and the fixed-point iterative algorithm proposed in [1], respectively. In all tests, the maximum numbers of inner iterations (if necessary) and outer iterations are restricted to 20 000, and a fixed error tolerance parameter ε is used.
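A comparison harness in the spirit of Table 1 can be organized as follows; the names make_test_matrices, fixa_solver, and fix_solver are hypothetical placeholders for the experimental setup, Algorithm 1 (FIXA), and the fixed-point method of [1] (FIX), each assumed to return the final iterate, the iteration count, and the residual:

```matlab
% Hypothetical timing/iteration comparison over a range of matrix sizes.
sizes = [5 10 15 20 25 30 50 100 200 400 800 1200 1600 2000];
for n = sizes
    [A, B] = make_test_matrices(n);            % experimental setup (placeholder)
    tic;  [~, it1, r1] = fixa_solver(A, B);  t1 = toc;   % Algorithm 1 (FIXA)
    tic;  [~, it2, r2] = fix_solver(A, B);   t2 = toc;   % fixed-point method of [1] (FIX)
    fprintf('n=%4d  FIXA: %2d it, %8.4f s, res %.2e | FIX: %2d it, %8.4f s, res %.2e\n', ...
            n, it1, t1, r1, it2, t2, r2);
end
```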
Table 1 and Fig. 2 report a comparison of FIXA and FIX for solving the nonlinear matrix equation (7) in terms of the iteration number (Iter), computing time (Time), and residual (Res) for different matrix sizes.
Figure 2
Computing time comparison of FIXA and FIX for different matrix sizes [see PDF for image]
Table 1. Numerical comparison of FIXA and FIX with different matrix sizes
| n | Algor. | Iter | Time | Res | n | Algor. | Iter | Time | Res |
|---|---|---|---|---|---|---|---|---|---|
| 5 | FIXA | 5 | 0.0046 | 7.0154 × 10−10 | 100 | FIXA | 6 | 0.0622 | 7.8909 × 10−10 |
| | FIX | 7 | 0.0007 | 7.9109 × 10−11 | | FIX | 7 | 0.0950 | 9.3716 × 10−10 |
| 10 | FIXA | 7 | 0.0025 | 6.2044 × 10−10 | 200 | FIXA | 6 | 0.3148 | 8.3435 × 10−10 |
| | FIX | 9 | 0.0015 | 9.2425 × 10−11 | | FIX | 7 | 0.4937 | 9.8527 × 10−10 |
| 15 | FIXA | 7 | 0.0027 | 9.4173 × 10−10 | 400 | FIXA | 6 | 1.3501 | 6.8784 × 10−11 |
| | FIX | 8 | 0.0023 | 8.7336 × 10−10 | | FIX | 6 | 1.8175 | 9.9478 × 10−10 |
| 20 | FIXA | 7 | 0.0121 | 7.0154 × 10−11 | 800 | FIXA | 6 | 7.4306 | 7.8909 × 10−11 |
| | FIX | 8 | 0.0035 | 7.9109 × 10−10 | | FIX | 6 | 9.7435 | 9.3716 × 10−10 |
| 25 | FIXA | 7 | 0.0075 | 6.2044 × 10−11 | 1200 | FIXA | 6 | 24.2881 | 8.3435 × 10−11 |
| | FIX | 8 | 0.0054 | 9.2425 × 10−10 | | FIX | 6 | 34.7566 | 1.1527 × 10−11 |
| 30 | FIXA | 6 | 0.0115 | 9.4173 × 10−11 | 1600 | FIXA | 6 | 58.9271 | 6.8784 × 10−11 |
| | FIX | 7 | 0.0100 | 8.7336 × 10−10 | | FIX | 6 | 84.3537 | 9.9478 × 10−11 |
| 50 | FIXA | 7 | 0.0252 | 6.2044 × 10−11 | 2000 | FIXA | 5 | 97.3229 | 8.3435 × 10−10 |
| | FIX | 8 | 0.0279 | 9.2425 × 10−11 | | FIX | 6 | 172.6251 | 1.1527 × 10−11 |
Based on the tests reported in Table 1 and Fig. 2, and on many other unreported tests that show similar patterns, we have the following results:
Remark 4.1
The fixed-point iteration method is a very stable algorithm, and it is very effective for relatively small matrix sizes. As can be seen from Table 1 and Fig. 2, when the matrix dimension is less than 50, the fixed-point iteration method is more effective than Algorithm 1. However, for large matrix sizes, and especially for the nonlinear matrix equation (1) with positive integer m greater than 3, its computing time for solving the nonlinear matrix equation (1) becomes quite large.
Remark 4.2
As shown in Table 1 and Fig. 2, Algorithm 1 is more effective than the fixed-point iteration method and the Newton iterative method. In particular, for the nonlinear matrix equation (1) with large matrix dimension n and large positive integer m, Algorithm 1 remains very effective.
Conclusions
In this paper, a fixed-point accelerated iteration algorithm (Algorithm 1) for solving the nonlinear matrix equation (1) is proposed. The convergence of the proposed algorithm and an error estimate are established (Theorem 3.1). Some numerical comparison experiments with existing algorithms are given (Table 1, Fig. 2). For Algorithm 1, the selection of the backtracking steps is discussed. Problems that should be studied in the near future are listed.
Acknowledgements
The authors would like to thank the anonymous referees and editors for their valuable comments and constructive suggestions.
Author contributions
All authors contributed extensively to this manuscript. Jingjing Peng and Siting Yu contributed by writing the main manuscript text. Zhenyun Peng contributed by preparing the figures and tables. All authors read and approved the final manuscript.
Funding information
Research supported by the National Natural Science Foundation of China (11961012) and the Special Research Project for Guangxi Young Innovative Talents (AD20297063).
Data availability
Not applicable.
Declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Jin, Z.X., Zhai, C.B.: On the nonlinear matrix equation. Linear Multilinear Algebra (2021). https://doi.org/10.1080/03081087.2021.1882371
2. Meng, J., Kim, H.M.: The positive definite solution of the nonlinear matrix equation. J. Comput. Appl. Math. 322, 139–147 (2017)
3. Jia, Z., Zhao, M., Wang, M., Ling, S.: Solvability theory and iteration method for one self-adjoint polynomial matrix equation. J. Appl. Math. 2014, Article ID 681605 (2014)
4. Meng, J., Kim, H.M.: The positive definite solution to a nonlinear matrix equation. Linear Multilinear Algebra (2015). https://doi.org/10.1080/03081087.2015.1074650
5. Ran, A.C.M., Reurings, M.C.B.: On the nonlinear matrix equation: solution and perturbation theory. Linear Algebra Appl. 346, 15–26 (2002)
6. Ran, A.C.M., Reurings, M.C.B., Rodman, L.: A perturbation analysis for nonlinear selfadjoint operator equations. SIAM J. Matrix Anal. Appl. 28, 89–104 (2006)
7. Bushell, P.J.: On solutions of the matrix equation. Linear Algebra Appl. 8, 465–469 (1974)
8. Bushell, P.J.: Hilbert's metric and positive contraction mappings in a Banach space. Arch. Ration. Mech. Anal. 52, 330–338 (1973)
9. Lim, Y.: Bushell's equations and polar decompositions. Math. Nachr. 282, 1567–1574 (2009)
10. Liu, X., Gao, H.: On the positive definite solutions of the matrix equations. Linear Algebra Appl. 368, 83–97 (2003)
11. Anderson, W.N., Kleindorfer, G.D., Kleindorfer, P.R., Woodroofe, M.B.: Consistent estimates of the parameters of a linear system. Ann. Math. Stat. 40, 2064–2075 (1969)
12. Bougerol, P.: Kalman filtering with random coefficients and contractions. SIAM J. Control Optim. 31, 942–959 (1993)
13. Guo, C.H.: Convergence rate of an iterative method for nonlinear matrix equations. SIAM J. Control Optim. 31, 942–959 (1993)
14. Guo, C.H., Lancaster, P.: Iterative solution of two matrix equations. Math. Comput. 68, 1589–1603 (1999)
15. Abou-Kandil, H., Freiling, G., Ionescu, V., Jank, G.: Matrix Riccati Equations in Control and Systems Theory. Birkhäuser, Basel (2003)
16. Lee, H., Lim, Y.: Invariant metrics, contractions and nonlinear matrix equations. Nonlinearity 21, 857–878 (2008)
17. Jung, C., Kim, H.M., Lim, Y.: On the solution of the nonlinear matrix equation. Linear Algebra Appl. 430, 2042–2052 (2009)
18. Lim, Y.: Solving the nonlinear matrix equation via a contraction principle. Linear Algebra Appl. 430 (2009)
19. Duan, X., Chen, H., Duan, F.: Positive definite solutions of the matrix equation. Numer. Math. J. Chin. Univ. 34 (2012)
20. Lee, H., Lim, Y.: Invariant metrics, contractions and nonlinear matrix equations. Nonlinearity 21 (2008)
21. Duan, X., Liao, A.: On the existence of Hermitian positive definite solutions of the matrix equation. Linear Algebra Appl. 429, 673–687 (2008)
22. Sakhnovich, L.A.: Interpolation Theory and Its Applications. Springer, Berlin (2012)
© The Author(s) 2025. This work is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/).