F. Khaksar Haghani (1) and F. Soleymani (2)
Academic Editors: N. Herisanu and N. I. Mahmudov
(1) Department of Mathematics, Shahrekord Branch, Islamic Azad University, Shahrekord, Iran
(2) Department of Mathematics, Zahedan Branch, Islamic Azad University, Zahedan, Iran
Received 22 August 2013; Accepted 23 October 2013; Published 6 February 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction and Preliminary Notes
It is well known that the inverse of a square matrix $A$, which is also known as a reciprocal matrix, is a matrix $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$, where $I$ is the identity matrix. A nonsingular matrix $A$ can be inverted using methods such as Gauss-Jordan elimination or Gaussian elimination. Such schemes fall into the category of direct methods for this purpose.
Direct methods cannot properly handle the sparse matrices with sparse inverses that arise in the numerical solution of integral equations [1]. On the other hand, methods such as the conjugate gradient method for symmetric positive definite matrices and GMRES are effective for large sparse linear systems. However, a problem arises when the coefficient matrix of the linear system is ill-conditioned. To remedy this, one may apply a preconditioner to the system, whose construction is itself not an easy task [2].
An iterative approach to preconditioning is the SPAI (sparse approximate inverse preconditioner) algorithm [3]. Given a sparse matrix $A$, the SPAI algorithm computes a sparse approximate inverse $M$ by minimizing $\|AM - I\|$ in the Frobenius norm. The approximate inverse is computed explicitly and can then be applied as a preconditioner within an iterative method.
There are other schemes which can be considered iteration methods although they have different structures; see, for example, [4, 5]. In such iterative methods, an approximate inverse of a matrix (or, for a rectangular matrix, its Moore-Penrose inverse) is easily obtained at each iteration. Consequently, users can solve linear systems (with multiple right-hand-side vectors) iteratively, or employ the approximate inverses in sensitivity analysis and in preconditioning a linear system. Methods of this type are the focus of this paper.
Several methods have been proposed for approximating the matrix inverse, such as those based on the so-called minimum residual iterations and the Hotelling-Bodewig algorithm [6]. The Hotelling-Bodewig algorithm is defined by
$$V_{n+1} = V_n(2I - AV_n), \quad n = 0, 1, 2, \ldots, \quad (1)$$
where $I$ is the identity matrix. Schulz [7] found that the eigenvalues of $I - AV_0$ must have magnitudes less than 1 to ensure convergence, which is a key element in designing more efficient Schulz-type iterative methods.
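To fix ideas, the following is a minimal Mathematica sketch of iteration (1); the function name, the residual-based stopping rule, and the starting scale $V_0 = A^*/\sigma_1^2$ are illustrative choices, not part of the original presentation.

(* A minimal sketch of the Hotelling-Bodewig (Schulz) iteration (1). For a
   nonsingular A, the starting value V0 = A*/sigma1^2 ensures that the
   eigenvalues of I - A.V0 lie in [0, 1). *)
schulzInverse[A_?MatrixQ, tol_: 10.^-10, maxIter_: 100] :=
  Module[{Id = IdentityMatrix[Length[A]], V, k = 0},
    V = ConjugateTranspose[A]/SingularValueList[A, 1][[1]]^2;
    While[Norm[Id - A.V, "Frobenius"] > tol && k < maxIter,
      V = V.(2 Id - A.V); k++];  (* one second-order step of (1) *)
    V];
(* Usage: schulzInverse[{{4., 1.}, {1., 3.}}] *)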
In 2011, Li et al. [8] theoretically investigated the third-order Chebyshev method
$$V_{n+1} = V_n(3I - AV_n(3I - AV_n)) \quad (2)$$
and also proposed another third-order iterative method for finding $A^{-1}$ as follows:
$$V_{n+1} = \Big[I + \tfrac{1}{4}(I - V_nA)(3I - V_nA)^2\Big]V_n. \quad (3)$$
The iterative method (2) can also be found in Chapter 5 of the textbook [9]. As another example from this primary source, the authors provided the following twelfth-order (hyperpower) method:
$$V_{n+1} = V_n\big(I + \psi_n + \psi_n^2 + \cdots + \psi_n^{11}\big), \quad (4)$$
in which $\psi_n = I - AV_n$. For further reading, one may refer to [10-12].
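For concreteness, one step of a hyperpower scheme of the form (4) can be evaluated in Horner fashion; the sketch below is ours (the helper name is hypothetical) and performs twelve matrix-matrix products per step.

(* One step of the twelfth-order hyperpower scheme (4), evaluated in Horner
   form: V.(I + psi.(I + psi.(...))) accumulates I + psi + ... + psi^11. *)
hyperPowerStep12[A_?MatrixQ, V_?MatrixQ] :=
  Module[{Id = IdentityMatrix[Length[A]], psi, S},
    psi = Id - A.V;          (* residual psi_n = I - A V_n *)
    S = Id + psi;
    Do[S = Id + psi.S, {10}];
    V.S];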
In this paper, we propose an efficient iterative method for finding $A^{-1}$ numerically. The theoretical convergence of the method is studied. We also discuss the application of the new scheme to finding the Moore-Penrose inverse (also known as the pseudoinverse) of rectangular or singular matrices. It is also proven analytically that the new method is asymptotically stable in general. Some large-scale sparse matrices are considered as examples to demonstrate a clear reduction of the execution time when the new algorithm is applied.
The rest of the paper is organized as follows. The main contribution is given in Sections 2 and 3. Section 2 is devoted to the analysis of convergence and shows that the method applies to the pseudoinverse as well. Section 3 studies, thoroughly and for the first time, the stability of this high-order Schulz-type iterative method for finding generalized inverses. Section 4 covers the choice of initial value needed to preserve the convergence order. Subsequently, the method is examined numerically in Section 5. Finally, concluding remarks are presented in Section 6.
2. A New Method and Its Convergence Study
By applying the following nonlinear equation solver (for recent developments on root-finding methods, see [13])
[equation (5) omitted in source]
to the nonlinear matrix equation $f(V) = V^{-1} - A = 0$, we obtain a fixed-point iteration for matrix inversion as follows:
[equation (6) omitted in source]
Simplifying (6) by proper factorization yields the proposed scheme
[equation (7) omitted in source]
wherein the sequence of iterates $\{V_n\}_{n \ge 0}$ converges to $A^{-1}$ for a suitable initial value. Such a guess, $V_0$, will be discussed in Section 4.
Theorem 1.
Let $A$ be a nonsingular real or complex matrix. If the initial approximation $V_0$ satisfies
$$\|I - AV_0\| < 1, \quad (8)$$
then the iterative method (7) converges to $A^{-1}$ with at least twelfth order.
Proof.
Let $E_0 = I - AV_0$ and, for the sake of simplicity, let $E_n = I - AV_n$ stand for the residual matrix at the $n$th iterate. It is straightforward to obtain
[equation (9) omitted in source]
Hence, we attain
[equation (10) omitted in source]
In addition, since $\|E_0\| < 1$, relation (10) and mathematical induction yield $\|E_n\| \le \|E_0\|^{12^n}$. Considering this bound, we therefore have
[equation (11) omitted in source]
Furthermore, we get
[equation (12) omitted in source]
That is, $\|E_n\| \to 0$ when $n \to \infty$, and thus $AV_n \to I$ as $n \to \infty$.
Now we must show the twelfth order of convergence using the sequence $\{\varepsilon_n\}$, where $\varepsilon_n = A^{-1} - V_n$ denotes the error matrix of the iterative procedure (7). We have
[equation (13) omitted in source]
Hence, we get
[equation (14) omitted in source]
Multiplying by $A^{-1}$ from the left, we have
[equation (15) omitted in source]
which, by taking an arbitrary matrix norm, results in
[inequality (16) omitted in source]
and thus
[inequality (17) omitted in source]
That is, the iteration (7) converges to $A^{-1}$ with at least twelfth order. This concludes the proof.
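For the reader's convenience, the omitted estimates can be summarized by the following standard Schulz-type chain; this is our reconstruction, under the assumption (consistent with the twelfth-order claim) that the residual of scheme (7) satisfies $E_{n+1} = E_n^{12}$:
$$\|E_n\| \le \|E_0\|^{12^n} \longrightarrow 0, \qquad \varepsilon_{n+1} = A^{-1}E_{n+1} = A^{-1}\big(A\varepsilon_n\big)^{12}, \qquad \|\varepsilon_{n+1}\| \le \|A\|^{11}\,\|\varepsilon_n\|^{12}.$$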
At this point, we discuss an application of (7) to finding generalized inverses. The Moore-Penrose inverse of a complex matrix $A \in \mathbb{C}^{m \times n}$ (also called the pseudoinverse), denoted by $A^\dagger$, is the unique matrix $X \in \mathbb{C}^{n \times m}$ satisfying the following four Penrose equations:
$$AXA = A, \quad XAX = X, \quad (AX)^* = AX, \quad (XA)^* = XA, \quad (18)$$
wherein $A^*$ is the conjugate transpose of $A$. Ben-Israel and his colleagues [14, 15] used the method (1) with the starting value
$$V_0 = \alpha A^*, \quad (19)$$
where $0 < \alpha < 2/\rho(A^*A)$ and $\rho(\cdot)$ denotes the spectral radius. The authors in [15] further showed that the sequence generated by (1) with this starting value converges to the pseudoinverse.
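As a quick illustration, the sketch below runs the second-order iteration (1) from the starting value (19) with $\alpha = 1/\sigma_1^2$ and then checks the four Penrose equations (18) numerically; the example matrix, the iteration count, and the function name are ours.

(* Pseudoinverse via iteration (1) with V0 = alpha A*, alpha = 1/sigma1^2,
   followed by a numerical check of the four Penrose equations (18). *)
pinvSchulz[A_?MatrixQ, iters_: 60] :=
  Module[{V = ConjugateTranspose[A]/SingularValueList[A, 1][[1]]^2,
          Id = IdentityMatrix[Length[A]]},
    Do[V = V.(2 Id - A.V), {iters}]; V];
A0 = {{1., 2.}, {2., 4.}, {0., 1.}};  (* a 3 x 2 full-rank test matrix *)
X = pinvSchulz[A0];
Norm /@ {A0.X.A0 - A0, X.A0.X - X,
  ConjugateTranspose[A0.X] - A0.X, ConjugateTranspose[X.A0] - X.A0}
(* all four norms should be close to machine zero *)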
In the following theorem, we show analytically that, in the case of singular or rectangular matrices, scheme (7) with the initial approximation (19) converges to the Moore-Penrose generalized inverse.
Theorem 2.
For the sequence $\{V_n\}_{n \ge 0}$ generated by the iterative Schulz-type method (7) with the initial approximation (19), and any $n \ge 0$, it holds that
$$(AV_n)^* = AV_n, \quad (V_nA)^* = V_nA, \quad V_nAA^\dagger = V_n, \quad A^\dagger AV_n = V_n. \quad (21)$$
Proof.
We prove the conclusion by induction on $n$. For $n = 0$, using (19), the first two equations of (21) are easily verified. Thus, we give a verification only of the last two equations:
[equations (22) omitted in source]
Assume now that the conclusion holds for some $n > 0$. We show that it continues to hold for $n + 1$. Using the iterative method (7), one has
[equations (23) omitted in source]
where the fact
[equation (24) omitted in source]
has been used. Thus, the first equality in (21) holds for $n + 1$, and the second equality can be proved in a similar way. For the third equality in (21), using the induction hypothesis $V_nAA^\dagger = V_n$ and the iterative method (7), we can write
[equations omitted in source]
Hence, the third equality in (21) holds for $n + 1$. The fourth equality can similarly be proved, and the desired result follows.
The iterative method (7) is rich in matrix-matrix multiplications. Hence, in order to reduce the computational load when dealing with large sparse matrices, it is enough to store the iterates with SparseArray[mat], so that the multiplications act only on the nonzero elements.
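In practice, Schulz-type iterates tend to fill in, so a sparsity-preserving step might be organized as follows; the second-order update is used here as a stand-in for the factored step of (7), and the drop tolerance is an illustrative device rather than part of the authors' algorithm.

(* A Schulz-type update on SparseArray data, followed by re-sparsification:
   entries below dropTol are chopped so that the iterate stays sparse. *)
sparseStep[A_SparseArray, V_SparseArray, dropTol_: 10.^-10] :=
  Module[{n = Length[A], Id, W},
    Id = SparseArray[{{i_, i_} -> 1.}, {n, n}];
    W = V.(2 Id - A.V);
    SparseArray[Chop[W, dropTol]]];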
Remark 3.
The (inverse-finder) informational efficiency index is defined as $IEI = \rho/\theta$, where $\rho$ and $\theta$ stand for the local convergence order and the number of matrix-matrix products per cycle, respectively. The proposed method of this paper requires 8 matrix-matrix multiplications to achieve convergence order 12. This implies $IEI = 12/8 = 1.5$ as its informational index, which is much better than the corresponding indices $2/2 = 1$, $3/3 = 1$, $3/4 = 0.75$, and $12/12 = 1$ of schemes (1), (2), (3), and (4), respectively.
Before stating the main theorem for finding the Moore-Penrose inverse, we recall that for $A \in \mathbb{C}^{m \times n}$ with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$ and the initial approximation (19) with $0 < \alpha < 2/\sigma_1^2$, it holds that
$$\|AA^\dagger - AV_0\|_2 = \max_{1 \le i \le r} |1 - \alpha\sigma_i^2| < 1. \quad (25)$$
We use this fact in the following theorem to establish the theoretical order of the proposed method (7) for finding the Moore-Penrose inverse (see [16] for more details).
Theorem 4.
For $A \in \mathbb{C}^{m \times n}$ with the singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$, the sequence $\{V_n\}_{n \ge 0}$ generated by (7) with the initial approximation (19) converges to the Moore-Penrose inverse $A^\dagger$ with twelfth order provided that $0 < \alpha < 2/\sigma_1^2$.
Proof.
Set $F_n = AA^\dagger - AV_n$; we then have
[equations (26) omitted in source]
On the other hand, from the definition of the Moore-Penrose inverse $A^\dagger$, we have
[equations (27) omitted in source]
The use of these relations implies that
[equations (28) omitted in source]
So, for any matrix norm $\|\cdot\|$, we obtain
[inequality (29) omitted in source]
Applying (25), which implies $\|F_0\| < 1$, and reasoning similar to (10)-(12), one obtains
[inequality (30) omitted in source]
Finally, using the properties of the Moore-Penrose inverse and Theorem 2, it is now easy to find the error inequality of the new scheme (7):
[inequality (31) omitted in source]
Thus, $\|V_n - A^\dagger\| \to 0$; that is, the sequence generated by (7) converges to the Moore-Penrose inverse with twelfth order as $n \to \infty$. This ends the proof.
3. Stability
We investigate the stability of (7) for finding $A^\dagger$ (or, in the simplified case, $A^{-1}$) in a neighborhood of the solution. Note that if the iteration is not self-correcting, that is, if errors made at one stage are not subsequently damped, then the inevitable rounding errors introduced into the iteration may accumulate until they overwhelm the answer. Thus, we must either show that the proposed method is self-correcting or furnish an analysis showing that rounding errors remain under control. This is done in what follows. In fact, we analyze how a small perturbation at the $n$th iterate is amplified or damped along the iterates. Note that this procedure has recently been applied to a general family of methods for matrix inversion in [17].
Theorem 5.
The sequence $\{V_n\}_{n \ge 0}$ generated by (7) with the initial approximation (19) is asymptotically stable for finding the Moore-Penrose generalized inverse.
Proof.
Let $\Delta_n$ be the numerical perturbation introduced at the $n$th iterate of (7), so that the computed iterate is $\tilde{V}_n = V_n + \Delta_n$. Here, we perform a first-order error analysis; that is, we formally neglect quadratic or higher-order terms such as $(\Delta_n)^2$. This formal manipulation is meaningful if $\Delta_n$ is sufficiently small and yields
[equations (33) omitted in source]
We have
[equations (34) omitted in source]
where the approximations [omitted in source] have been used, since the corresponding residuals are very close to the zero matrix. After some algebraic manipulation, and using the fact that for sufficiently large $n$ one has $V_n \approx A^\dagger$, we attain
[bound (35) omitted in source]
From (35), we conclude that the perturbation at the $(n+1)$th iterate is bounded. Therefore, the sequence $\{V_n\}$ generated by (7) is asymptotically stable. This ends the proof.
Corollary 6.
By using the matrix identity in (27), it is possible to further simplify the bound (35) as follows:
[bound (36) omitted in source]
Using (36) recursively, one may attain the following very simple bound:
[bound (37) omitted in source]
Remark 7.
In the case of finding the regular inverse of a nonsingular matrix, that is, when $A^\dagger = A^{-1}$, according to (37) we have [bound omitted in source], and so the matrix method is strongly numerically stable. Consequently, in the case of finding $A^\dagger$, the matrix method (7) is asymptotically stable. Of course, since the iteration is not self-correcting in the general case, proceeding beyond convergence may cause a serious increase in error.
4. Initial Value
The iterative methods discussed so far are sensitive to the choice of the initial value used to start the process. As a matter of fact, the high accuracy and efficiency of such iterative algorithms are guaranteed only if the initial value satisfies the condition given in Theorem 1. Thus, in order to preserve the convergence order, we present some ways from the literature to remedy this flaw, although an efficient choice for square or rectangular matrices is already given by (19).
For a symmetric positive definite (SPD) matrix $A$, one can easily use the Householder-John theorem to attain the initial value [equation (38) omitted in source], wherein the matrix $B$ is any of the matrices such that [condition omitted in source] is SPD [8].
If the square matrix $A$ is diagonally dominant, one may apply the approach given in [18] and use
$$V_0 = \operatorname{diag}\!\left(\frac{1}{a_{11}}, \frac{1}{a_{22}}, \ldots, \frac{1}{a_{nn}}\right), \quad (39)$$
wherein $a_{ii}$ is the $i$th diagonal entry of $A$. Note that this choice is quite fruitful when solving PDEs via discretizations. Some further generalizations of such an initial matrix are given in [19].
Although the two aforementioned choices are efficient, they cannot be applied to find an initial approximation of the inverse for a general input matrix. For instance, for large-scale matrices that do not have the above structures, they may fail to provide convergence. Hence, we take into account the suboptimal choice of $V_0$ given by Pan and Schreiber in [20]:
$$V_0 = \frac{A^*}{\|A\|_1\, \|A\|_\infty}. \quad (40)$$
We should note that choosing the initial value as above satisfies the condition needed to arrive at the convergence phase. Some ways of updating the initial matrix for sparse matrices are brought forward in [21].
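The starting values discussed in this section can be set up in a few lines of Mathematica; the function names below are ours, and the diagonal variant assumes nonzero diagonal entries.

(* Illustrative starting values for Schulz-type iterations. *)
v0Diagonal[A_?MatrixQ] := DiagonalMatrix[1./Diagonal[A]];          (* (39) *)
v0PanSchreiber[A_?MatrixQ] :=
  ConjugateTranspose[A]/(Norm[A, 1] Norm[A, Infinity]);            (* (40) *)
v0BenIsrael[A_?MatrixQ] :=
  ConjugateTranspose[A]/SingularValueList[A, 1][[1]]^2;            (* (19) *)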
In what follows, we provide an algorithm that rapidly improves an initial matrix for square matrices. In fact, an LU factorization can be computed rapidly for almost any square nonsingular matrix in linear algebra programming packages. In Mathematica, the one-argument command LinearSolve[A] returns a LinearSolveFunction object containing such a factorization of $A$ very quickly. Then, by applying this factorization to columns of the identity matrix, one can replace columns of an initial matrix produced by other strategies, such as (40), with columns of the exact inverse. The procedure is outlined in Algorithm 1 and implemented in Algorithm 2.
Algorithm 1: An algorithm for constructing a rapid and robust initial approximation of $A^{-1}$.
(1) Given a nonsingular matrix $A$;
(2) construct an initial matrix $V_0$, for example, by (40);
(3) obtain the LU decomposition of $A$, for example, LU = LinearSolve[A];
(4) for $i = 1, \ldots, num$, where $e_i$ is the $i$th column of the identity matrix,
(5) update the $i$th column of $V_0$ with the corresponding column of the exact inverse: $V_0[[\mathrm{All}, i]] = \mathrm{LU}[e_i]$;
(6) end for.
Algorithm 2: Two-argument functions written in the Mathematica environment.

initial1[A_, num_] :=
 Quiet@Module[{n = Dimensions[A][[1]], i = 1, LU = LinearSolve[A], Id, mat, ith},
  Id = SparseArray[{{k_, k_} -> 1.}, {n, n}];
  mat = (1/Norm[A, "Frobenius"])*ConjugateTranspose[A];
  While[i <= num,
   ith = LU[Id[[All, n + 1 - i]]];   (* exact column of the inverse *)
   mat[[All, n + 1 - i]] = ith; i++];
  mat];

initial2[A_, num_] :=
 Quiet@Module[{n = Dimensions[A][[1]], i = 1, LU = LinearSolve[A], Id, mat, ith},
  Id = SparseArray[{{k_, k_} -> 1.}, {n, n}];
  mat = Quiet[initial1[A, num]];
  While[i <= num,
   ith = LU[Id[[All, i]]];
   mat[[All, i]] = ith; i++];
  mat];
The function initial1[A_, num_] takes the nonsingular matrix $A$ and the number of columns, counted from the right, that the user wishes to replace in the approximate inverse with columns of the exact inverse, while the function initial2[A_, num_] works doubly. That is, if, for example, num = 10, it updates the first and the last 10 columns of the approximate inverse as rapidly as possible.
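A hypothetical usage on a randomly generated, diagonally shifted (hence nonsingular) test matrix:

(* Replace the first and last 5 columns of the Frobenius-scaled guess
   built inside initial1 with exact columns of the inverse. *)
SeedRandom[1];
B = RandomReal[{-1, 1}, {100, 100}] + 10. IdentityMatrix[100];
V0 = initial2[B, 5];
Norm[IdentityMatrix[100] - B.V0, "Frobenius"]
(* the residual is smaller than for the unrefined initial guess *)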
Next, we conduct some numerical tests to support the theoretical results given in Section 2 using the initial value discussed herein.
5. Computational Aspects
In this section, some experiments are presented to demonstrate the capability of the proposed method. The programming package Mathematica 8 [22] has been used in the computations, working in machine precision.
It is clear that large sparse matrices cannot be handled easily; they need to be stored in sparse form to be accessible and economical in real applications. Methods like (7) are powerful in finding an approximate inverse, or a robust approximate inverse preconditioner, in a low number of steps and low computational time, in which the output form of the approximate inverse is also sparse.
The elapsed CPU time (in seconds) has been measured using the command AbsoluteTiming[]. The computer specifications are: Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU at 3.20 GHz, with 4 GB of RAM.
Experiment 1.
This test is devoted to the application of Schulz-type iterative methods to finding the pseudoinverse of 30 large random complex matrices of size $1500 \times 1800$, generated as shown in Algorithm 3.
Algorithm 3
m = 1500; k = 1800; number = 30; SeedRandom[];
Table[A[l] = SparseArray[{Band[{400, 10}, {m, k}] -> Random[] - I,
    Band[{10, 200}, {m, k}] -> {1.1, -Random[]},
    Band[{-60, 100}] -> -3.02, Band[{-100, 500}] -> 3.1 I},
   {m, k}, 0.];, {l, number}];
Id = SparseArray[{{i_, i_} -> 1.}, {m, m}, 0.];
The results of comparisons for these random matrices of size $1500 \times 1800$ are reported in Figures 1, 2, and 3 in terms of the number of iterations and the computational time. The compared methods are (1), denoted by "Schulz"; (2), denoted by "Chebyshev"; (4), denoted by "KMS"; and the new iterative scheme (7), denoted by "PM". A saving in the elapsed time, under the two stopping criteria of the form $\|V_{n+1} - V_n\| \le \epsilon$ used in Figures 2 and 3, can be observed for the studied method (7). In this test, the initial matrix has been computed for each random matrix by V0[j] = ConjugateTranspose[A[j]]*(1./((SingularValueList[A[j], 1][[1]])^2)), while the maximum number of iterations is set to 100.
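Since the factored form of (7) appears only in the source PDF, a generic driver (our sketch) can express the comparison; the update step is passed in as a function, and the tolerance is illustrative.

(* Run a Schulz-type method (step[A, V] gives one update) on a test matrix,
   returning the iteration count, elapsed time, and final iterate. *)
runTest[step_, A0_, V0_, tol_: 10.^-6, maxIter_: 100] :=
  Module[{V = V0, Vnew, k = 0, t},
    t = First@AbsoluteTiming[
        While[k < maxIter,
          Vnew = step[A0, V]; k++;
          If[Norm[Vnew - V, "Frobenius"] <= tol, V = Vnew; Break[]];
          V = Vnew]];
    {k, t, V}];
(* e.g., with Id from Algorithm 3:
   schulzStep[A_, V_] := V.(2 Id - A.V);
   {its, time, V} = runTest[schulzStep, A[1], V0[1]]; *)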
Figure 1: The results of comparisons for Experiment 1 in terms of the number of iterations.
Figure 2: The results of comparisons for Experiment 1 in terms of the elapsed time, using the first stopping criterion.
Figure 3: The results of comparisons for Experiment 1 in terms of the elapsed time, using the second stopping criterion.
(Figures omitted in source.)
6. Concluding Remarks
It is well known that the matrix inverse and generalized inverses are important in applied fields of natural science, for instance, in the solution of various systems of linear and nonlinear equations, eigenvalue problems, and linear least squares problems. Iterative methods are often effective, especially for large-scale systems with sparsity, and Schulz-type methods are great tools for preconditioning such systems or for finding pseudoinverses.
The Hotelling-Bodewig algorithm is simple to describe and analyze, and it is numerically stable. This was the motivation for developing an iterative method of this type in this paper.
We have shown that the suggested method (7) reaches twelfth order of convergence. The stability of the new method was studied in detail, and it was established that the new scheme is asymptotically stable. The efficiency of the new scheme was illustrated numerically in Section 5. Based on the numerical results obtained, one can conclude that the presented method is useful. Extensions of the new scheme to other generalized inverses (such as those in [23, 24]) are left for future work.
Acknowledgment
The research of the first author (F. Khaksar Haghani) is financially supported by Shahrekord Branch, Islamic Azad University, Shahrekord, Iran.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] I. V. Oseledets, E. E. Tyrtyshnikov, "Approximate inversion of matrices in the process of solving a hypersingular integral equation," Computational Mathematics and Mathematical Physics , vol. 45, no. 2, pp. 315-326, 2005.
[2] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2nd edition, 2003.
[3] SPAI (sparse approximate inverse preconditioner) documentation, http://www.computational.unibas.ch/software/spai/spaidoc.html
[4] S. K. Sen, S. S. Prabhu, "Optimal iterative schemes for computing the Moore-Penrose matrix inverse," International Journal of Systems Science , vol. 7, no. 8, pp. 847-852, 1976.
[5] F. Soleymani, P. S. Stanimirovic, "A higher order iterative method for computing the Drazin inverse," The Scientific World Journal , vol. 2013, 2013.
[6] H. Hotelling, "Analysis of a complex of statistical variables into principal components," Journal of Educational Psychology , vol. 24, no. 7, pp. 498-520, 1933.
[7] G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für Angewandte Mathematik und Mechanik, vol. 13, no. 1, pp. 57-59, 1933.
[8] H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, T.-X. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation , vol. 218, no. 2, pp. 260-270, 2011.
[9] E. V. Krishnamurthy, S. K. Sen, Numerical Algorithms: Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 1986.
[10] X. Liu, S. Huang, "Proper splitting for the generalized inverse $A_{T,S}^{(2)}$ and its application on Banach spaces," Abstract and Applied Analysis, vol. 2012, 2012.
[11] I. Pavaloiu, E. Catinas, "Remarks on some Newton and Chebyshev-type methods for approximating eigenvalues and eigenvectors of matrices," Computer Science Journal of Moldova, vol. 7, pp. 3-17, 1999.
[12] A. R. Soheili, F. Soleymani, M. D. Petkovic, "On the computation of weighted Moore-Penrose inverse using a high-order matrix method," Computers and Mathematics with Applications , vol. 66, no. 11, pp. 2344-2351, 2013.
[13] F. Soleymani, D. K. R. Babajee, "Computing multiple roots using a class of quartically convergent methods," Alexandria Engineering Journal , vol. 52, no. 3, pp. 531-541, 2013.
[14] A. Ben-Israel, D. Cohen, "On iterative computation of generalized inverses and associated projections," SIAM Journal on Numerical Analysis , vol. 3, pp. 410-419, 1966.
[15] A. Ben-Israel, T. N. E. Greville, Generalized Inverses: Theory and Applications, Springer, 2nd edition, 2003.
[16] F. Toutounian, F. Soleymani, "An iterative method for computing the approximate inverse of a square matrix and the Moore-Penrose inverse of a non-square matrix," Applied Mathematics and Computation , vol. 224, pp. 671-680, 2013.
[17] F. Soleymani, P. S. Stanimirovic, "A note on the stability of a pth order iteration for finding generalized inverses," Applied Mathematics Letters , vol. 28, pp. 77-81, 2014.
[18] L. Grosz, "Preconditioning by incomplete block elimination," Numerical Linear Algebra with Applications , vol. 7, no. 7-8, pp. 527-541, 2000.
[19] L. Gonzalez, A. Suarez, "Improving approximate inverses based on Frobenius norm minimization," Applied Mathematics and Computation , vol. 219, no. 17, pp. 9363-9371, 2013.
[20] V. Y. Pan, R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix with applications," SIAM Journal on Scientific and Statistical Computing , vol. 12, no. 5, pp. 1109-1131, 1991.
[21] X. Cui, K. Hayami, "Generalized approximate inverse preconditioners for least squares problems," Japan Journal of Industrial and Applied Mathematics , vol. 26, no. 1, pp. 1-14, 2009.
[22] S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.
[23] F. Soleymani, P. S. Stanimirovic, M. Z. Ullah, "On an accelerated iterative method for weighted Moore-Penrose inverse," Applied Mathematics and Computation , vol. 222, pp. 365-371, 2013.
[24] M. Z. Ullah, F. Soleymani, A. S. Al-Fhaid, "An efficient matrix iteration for computing weighted Moore-Penrose inverse," Applied Mathematics and Computation , vol. 226, pp. 441-454, 2014.
Copyright © 2014 F. Khaksar Haghani and F. Soleymani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
A stable numerical method is proposed for matrix inversion. The new method is accompanied by theoretical proof to illustrate twelfth-order convergence. A discussion of how to achieve the convergence using an appropriate initial value is presented. The application of the new scheme for finding Moore-Penrose inverse will also be pointed out analytically. The efficiency of the contributed iterative method is clarified on solving some numerical examples.