Li-Tao Zhang,1,2 Jian-Lei Li,3 Tong-Xiang Gu,2 and Xing-Ping Liu2
Academic Editor: Shuqian Shen
1, Department of Mathematics and Physics, Zhengzhou Institute of Aeronautical Industry Management, Zhengzhou, Henan 450015, China
2, Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088, China
3, College of Mathematics and Information Science, North China University of Water Resources and Electric Power, Zhengzhou, Henan 450011, China
Received 9 May 2014; Accepted 30 June 2014; Published 17 July 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
For solving the large sparse linear system Ax = b, where A is an n×n real nonsingular matrix and x, b ∈ Rn, an iterative method is usually considered. The concept of the matrix multisplitting solution of linear systems was introduced by O'Leary and White [1] and further studied by many other authors. Frommer and Mayer [2] studied extrapolated relaxed matrix multisplitting methods and the matrix multisplitting SOR method. Mas et al. [3] analyzed nonstationary extrapolated relaxed and asynchronous relaxed methods. Gu et al. [4, 5] further studied relaxed nonstationary two-stage matrix multisplitting methods and the corresponding asynchronous schemes.
As is well known, matrix multisplitting iterative methods for linear systems take two basic forms. A method is synchronous when all of the processors wait until they are updated with the results of the current iteration before beginning the next one; it is asynchronous when the processors act more or less independently of each other and may use possibly delayed iterative values produced by the other processors. In view of the potential time saving inherent in them, asynchronous iterative methods, often called chaotic methods, have attracted much attention since the early paper of Chazan and Miranker [6] introduced them in the context of point iterative schemes. Naturally, a number of convergence results [7-21] have been obtained. Recently, the convergence of three relaxed matrix multisplitting chaotic AOR methods was investigated in [11, 13, 14].
A collection of triples (Ml, Nl, El), l = 1, 2, ..., s, is called a multisplitting of A if A = Ml − Nl is a splitting of A for l = 1, 2, ..., s, and the El, called weighting matrices, are nonnegative diagonal matrices such that ∑l=1s El = I.
The unsymmetric accelerated overrelaxation (USAOR) method was proposed in [22] for the case in which the diagonal elements of the matrix A are nonzero. Without loss of generality, let the matrix A be split as A = I − L − U, where I is the identity matrix and L and U are strictly lower triangular and strictly upper triangular matrices, respectively. The iterative scheme of the USAOR method consists of a forward sweep followed by a backward sweep, x(k+1/2) = Lα1,β1 x(k) + β1 (I − α1 L)−1 b, x(k+1) = Uα2,β2 x(k+1/2) + β2 (I − α2 U)−1 b, where Lα1,β1 = (I − α1 L)−1 [(1 − β1)I + (β1 − α1)L + β1 U], Uα2,β2 = (I − α2 U)−1 [(1 − β2)I + (β2 − α2)U + β2 L], and αi, βi (i = 1, 2) are real parameters.
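For concreteness, one USAOR double sweep can be sketched in a few lines of Python. This is only an illustrative sketch under the normalization A = I − L − U described above; the function and variable names are ours, not from [22]:

```python
import numpy as np

def usaor_step(x, L, U, b, a1, b1, a2, b2):
    """One USAOR iteration for A = I - L - U: a forward AOR sweep with
    parameters (a1, b1) followed by a backward sweep with (a2, b2)."""
    I = np.eye(len(x))
    # Forward half-step: (I - a1 L) x_half = [(1-b1)I + (b1-a1)L + b1 U] x + b1 b
    x_half = np.linalg.solve(
        I - a1 * L, ((1 - b1) * I + (b1 - a1) * L + b1 * U) @ x + b1 * b)
    # Backward half-step: (I - a2 U) x_new = [(1-b2)I + (b2-a2)U + b2 L] x_half + b2 b
    return np.linalg.solve(
        I - a2 * U, ((1 - b2) * I + (b2 - a2) * U + b2 * L) @ x_half + b2 * b)

# Small test problem with A = I - L - U strictly diagonally dominant.
L = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.1, 0.0]])
U = np.array([[0.0, 0.2, 0.1], [0.0, 0.0, 0.2], [0.0, 0.0, 0.0]])
A = np.eye(3) - L - U
x_true = np.array([1.0, -2.0, 3.0])
b = A @ x_true
x = np.zeros(3)
for _ in range(60):
    x = usaor_step(x, L, U, b, 1.0, 1.0, 1.0, 1.0)
```

The choice α1 = β1 = α2 = β2 = 1 used here reduces the double sweep to a symmetric Gauss-Seidel-type iteration.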
In this paper, we extend the point USAOR iterative method to a matrix multisplitting chaotic generalized USAOR (GUSAOR) method and analyze its convergence for H-matrices and irreducible diagonally dominant matrices.
Based on the matrix multisplitting chaotic generalized AOR method [23], the matrix multisplitting chaotic generalized USAOR method is given by [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is an iteration matrix and φ = diag(β1, β2, ..., βn), ψ = diag(γ1, γ2, ..., γn) with βi ≥ 0, γi ≥ 0 (i = 1, 2, ..., n), and α, β real parameters.
Remark 1.
For β = γi = 0, i = 1, 2, ..., n, the GUSAOR method reduces to the GAOR method [11]; for φ = ω1 I, α = ω2/ω1, ψ = ω3 I, β = ω4/ω3, the GUSAOR method reduces to the USAOR method [22].
The remainder of this paper is organized as follows. In Section 2, we introduce some notation and preliminaries. In Section 3, we present relaxed matrix parallel multisplitting chaotic GUSAOR-style methods for solving large nonsingular systems, where the coefficient matrix A is an H-matrix or an irreducible diagonally dominant matrix, and analyze the convergence of our methods. In Section 4, we further study some applied convergence results that make numerical experiments convenient to carry out. In Section 5, we give some numerical examples, which show that our convergence results are easy to apply. Finally, we draw some conclusions.
2. Notation and Preliminaries
We will use the following notation. Let C = (cij) ∈ Rn×n be an n×n matrix. By diag(C) we denote the diagonal matrix coinciding in its diagonal with C. For A = (aij), B = (bij) ∈ Rn×n, we write A ≥ B if aij ≥ bij holds for all i, j = 1, 2, ..., n. We call A nonnegative if A ≥ 0, and we say that B ≤ C if and only if −B ≥ −C. These definitions carry over immediately to vectors by identifying them with n×1 matrices. By |A| = (|aij|) we denote the absolute value of A ∈ Rn×n. We denote by ⟨A⟩ = (⟨aij⟩) the comparison matrix of A ∈ Rn×n, where ⟨aij⟩ = |aij| for i = j and ⟨aij⟩ = −|aij| for i ≠ j, i, j = 1, 2, ..., n. The spectral radius of a matrix A is denoted by ρ(A). It is well known that if A ≥ 0 and there exists a vector x > 0 such that Ax < αx, then ρ(A) < α.
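These notational conventions translate directly into code. The following sketch (the function and variable names are ours) builds the comparison matrix and illustrates the spectral radius bound stated at the end of the paragraph:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off it."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

# The bound: if A >= 0 and Ax < alpha*x for some x > 0, then rho(A) < alpha.
A = np.array([[0.0, 0.5], [0.25, 0.0]])
x = np.ones(2)
alpha = 0.6
assert (A >= 0).all() and (A @ x < alpha * x).all()
rho = max(abs(np.linalg.eigvals(A)))   # here rho(A) is about 0.354 < alpha
```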
Definition 2 (see [24]).
Let A ∈ Rn×n. It is called an
(1) L-matrix if aii > 0 for i = 1, 2, ..., n, and aij ≤ 0 for i ≠ j, i, j = 1, 2, ..., n;
(2) M-matrix if it is a nonsingular L-matrix satisfying A−1 ≥ 0;
(3) H-matrix if ⟨A⟩ is an M-matrix.
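Definition 2 suggests a direct numerical check: A is an H-matrix exactly when its comparison matrix ⟨A⟩ is an M-matrix, which can be tested via the sign pattern and the nonnegativity of the inverse. A small sketch follows (the function names are ours; the inverse-based test is practical only for small dense matrices):

```python
import numpy as np

def is_M_matrix(A, tol=1e-12):
    """Nonsingular L-matrix with A^{-1} >= 0 (Definition 2, items (1)-(2))."""
    n = len(A)
    off_diag_ok = all(A[i, j] <= tol for i in range(n) for j in range(n) if i != j)
    diag_ok = bool((np.diag(A) > 0).all())
    try:
        inv_nonneg = bool((np.linalg.inv(A) >= -tol).all())
    except np.linalg.LinAlgError:
        return False
    return off_diag_ok and diag_ok and inv_nonneg

def is_H_matrix(A):
    """A is an H-matrix iff its comparison matrix <A> is an M-matrix."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return is_M_matrix(C)

# A strictly diagonally dominant matrix is always an H-matrix.
A = np.array([[ 4.0, -1.0,  1.0],
              [ 2.0,  5.0, -1.0],
              [-1.0,  1.0,  3.0]])
```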
Lemma 3 (see [13]).
If A is an H-matrix, then
(1) |A−1| ≤ ⟨A⟩−1;
(2) there exists a diagonal matrix P whose diagonal entries are positive such that ⟨A⟩Pe > 0 with e = (1, 1, ..., 1)T.
Lemma 4 (see [13]).
Let A be an M-matrix and let the splitting A = M − N be an M-splitting. If P is the diagonal matrix defined in Lemma 3, then ‖P−1 M−1 N P‖∞ < 1.
Finally, a sequence of sets Pi with Pi ⊆ {1, ..., s} is admissible if every integer 1, 2, ..., s appears infinitely often in the Pi; such an admissible sequence is regulated if there exists a positive integer T such that each of the integers appears at least once in any T consecutive sets of the sequence.
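The regulated property can be checked mechanically for a concrete sequence of index sets. A small sketch (the names are ours):

```python
def is_regulated(sets, s, T):
    """Check that every l in {1, ..., s} appears in any T consecutive sets."""
    return all(set(range(1, s + 1)) <= set().union(*sets[i:i + T])
               for i in range(len(sets) - T + 1))

# Round-robin selection P_i = {1 + (i mod s)} is regulated with T = s.
s = 3
seq = [{1 + (i % s)} for i in range(12)]
```

For this round-robin sequence, any three consecutive sets cover {1, 2, 3}, but any two do not, so the smallest valid T is s = 3.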
3. Relaxed Matrix Multisplitting Parallel Chaotic Methods
Using the models given in [7, 11, 13] and (4), we may describe the corresponding three algorithms of relaxed matrix multisplitting chaotic GUSAOR-style methods as follows.
Algorithm 5 (given the initial vector).
For k = 0, 1, ..., until convergence, compute in parallel the iterative scheme x(k+1) = ∑l=1s El (HGUSAOR)lμl,k (x(k)) with α ≥ 0, β ≥ 0, βi > 0, γi > 0, and μl,k ≥ 1, where (HGUSAOR)lμl,k is the μl,kth composition of the affine mapping satisfying [figure omitted; refer to PDF]
Remark 6.
In Algorithm 5, by introducing a suitable positive relaxation parameter ω, we obtain the following relaxed Algorithm 7.
Algorithm 7 (given the initial vector).
For k = 0, 1, ..., until convergence, compute in parallel the iterative scheme x(k+1) = ω ∑l=1s El (HGUSAOR)lμl,k (x(k)) + (1 − ω) x(k) with ω > 0, α ≥ 0, β ≥ 0, βi > 0, γi > 0, and μl,k ≥ 1, where (HGUSAOR)lμl,k (x(k)) is defined as in Algorithm 5.
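The relaxed synchronous step of Algorithm 7 combines the local results through the weighting matrices El and then relaxes with ω. A schematic sketch follows (the local GUSAOR mappings are abstracted away as plain Python callables; all names are ours):

```python
import numpy as np

def relaxed_multisplitting_step(x, local_maps, weights, omega):
    """One relaxed synchronous step:
    x_new = omega * sum_l E_l H_l(x) + (1 - omega) * x,
    where each entry of local_maps plays the role of (H_GUSAOR)_l^{mu_{l,k}}."""
    combined = sum(E @ H(x) for E, H in zip(weights, local_maps))
    return omega * combined + (1 - omega) * x

# Toy illustration: two identical local solvers for x = Bx + b (i.e., Ax = b
# with A = I - B), weighted so that E_1 + E_2 = I.
B = np.array([[0.0, 0.3], [0.2, 0.0]])
b = np.array([1.0, 1.0])
H = lambda v: B @ v + b
weights = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
x = np.zeros(2)
for _ in range(200):
    x = relaxed_multisplitting_step(x, [H, H], weights, omega=1.1)
x_true = np.linalg.solve(np.eye(2) - B, b)
```

With ω = 1.1 the relaxed iteration matrix here is ωB + (1 − ω)I, whose spectral radius is below 1, so the iterates converge to the solution of (I − B)x = b.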
Remark 8.
In Algorithm 7, if we assume that the index sequence {Pi} is admissible and regulated, then we obtain the following Algorithm 9, the relaxed chaotic GUSAOR method.
Algorithm 9 (given the initial vector).
For k = 0, 1, ..., until convergence, compute in parallel the iterative scheme [figure omitted; refer to PDF] with ω > 0, α ≥ 0, β ≥ 0, βi > 0, γi > 0, μl,k ≥ 1, and Pi ⊆ {1, ..., s}, where the (HGUSAOR)l (x(k)) of Algorithm 5 are replaced by (HGUSAOR)l (x(k−rk+1)) and x(k−rk+1) = (x1(k−r(1,k)), x2(k−r(2,k)), ..., xn(k−r(n,k)))T.
Remark 10.
Relaxed matrix multisplitting chaotic GUSAOR-style methods introduce more relaxation factors, so our methods generalize those of [5, 12, 13]. The parameters can be adjusted suitably so that the convergence properties of the methods can be substantially improved.
Using Lemmas 3 and 4, we can obtain the following convergence result for Algorithm 5.
Theorem 11.
Let A ∈ Rn×n be an H-matrix and (I − Ll, Ul, El), l = 1, 2, ..., s, a multisplitting of A. Assume that for l = 1, 2, ..., s, we have the following.
(1) The Ll are strictly lower triangular matrices and the Ul are matrices such that the equalities A = I − Ll − Ul hold.
(2) ⟨A⟩ = |I| − |Ll| − |Ul| = |I| − |B|, where |B| = |Ll| + |Ul|.
Then the sequence {x(k)} generated by Algorithm 5 converges for any initial x(0) if and only if (α, β, βi, γi) ∈ W1, i = 1, 2, ..., n, with ρ = ρ(|B|), where [figure omitted; refer to PDF]
Proof.
Define the iteration matrix in Algorithm 5 [figure omitted; refer to PDF] where [figure omitted; refer to PDF] Obviously, we have to find a constant σ with 0 ≤ σ < 1 and some norm, both independent of k, such that for k ≥ 1, ‖HGUSAOR(α, β, φ, ψ)k‖ ≤ σ.
We first note that the matrices I − αφLl and I − βψUl are H-matrices for l = 1, 2, ..., s. Thus by Lemma 3 we have the following inequalities: [figure omitted; refer to PDF] From this relation, it follows that [figure omitted; refer to PDF]
Case 1. Let 0 < βi ≤ 1, 0 < γi ≤ 1, i = 1, 2, ..., n. Then [figure omitted; refer to PDF] Define [figure omitted; refer to PDF] So, for l = 1, 2, ..., s, we have the following relations: [figure omitted; refer to PDF]
Since, for l = 1, 2, ..., s, Ml1(β, ψ) and Ml2(α, φ) are both M-matrices and Nl1(β, ψ) ≥ 0, Nl2(α, φ) ≥ 0, the splittings Ml1(β, ψ) − Nl1(β, ψ) and Ml2(α, φ) − Nl2(α, φ) are M-splittings of ψ(I − |B|) and φ(I − |B|), respectively, which are M-matrices. So, from Lemma 4 we may complete the proof as shown below.
Case 2. Let 1 < βi < 2/(1 + ρ), 1 < γi < 2/(1 + ρ), i = 1, 2, ..., n.
Assume that βmax = max1≤i≤n{βi} and γmax = max1≤i≤n{γi}. Then we have [figure omitted; refer to PDF] Similar to Case 1, we define [figure omitted; refer to PDF] Then [figure omitted; refer to PDF]
Clearly, (2 − β)I − β|B| and (2 − α)I − α|B| are both M-matrices. Since, for l = 1, 2, ..., s, Ml1(β, ψ) and Ml2(α, φ) are M-matrices, Nl1(β, ψ) ≥ 0 and Nl2(α, φ) ≥ 0, the splittings Ml1(β, ψ) − Nl1(β, ψ) and Ml2(α, φ) − Nl2(α, φ) are M-splittings of the matrices (2 − β)I − β|B| and (2 − α)I − α|B|, respectively.
Thus, for Cases 1 and 2, from Lemma 4, we have [figure omitted; refer to PDF] So [figure omitted; refer to PDF] which implies [figure omitted; refer to PDF] If we define [figure omitted; refer to PDF] then we have [figure omitted; refer to PDF]
According to Algorithm 7, we can also obtain the following result.
Theorem 12.
Let A ∈ Rn×n be an H-matrix and (I − Ll, Ul, El), l = 1, 2, ..., s, a multisplitting of A. Assume that for l = 1, 2, ..., s, we have the following.
(1) The Ll are strictly lower triangular matrices and the Ul are matrices such that the equalities A = I − Ll − Ul hold.
(2) ⟨A⟩ = |I| − |Ll| − |Ul| = |I| − |B|, where |B| = |Ll| + |Ul|.
(3) P is the diagonal matrix defined in Lemma 3, and Ml1(β, ψ), Nl1(β, ψ), Ml2(α, φ), and Nl2(α, φ) are as in Theorem 11.
Then the sequence {x(k)} generated by Algorithm 7 converges for any initial x(0) if and only if (α, β, βi, γi, ω) ∈ W2, i = 1, 2, ..., n, with ρ = ρ(|B|) and θ = max1≤l≤s{‖P1−1 (Ml1(β, ψ))−1 Nl1(β, ψ) P1‖∞ ‖P2−1 (Ml2(α, φ))−1 Nl2(α, φ) P2‖∞}, where [figure omitted; refer to PDF]
Proof.
Define the iteration matrix in Algorithm 7 [figure omitted; refer to PDF] where [figure omitted; refer to PDF] Obviously, similar to Theorem 11, we only need to find a constant σ with 0 ≤ σ < 1, independent of k, such that K = ‖P1−1 HGUSAOR(α, β, φ, ψ, ω)k P1‖∞ ≤ σ. From the proof of Theorem 11, we can get [figure omitted; refer to PDF] Now let σ denote the last item in the above inequalities. Clearly, if 0 < βi < 2/(1 + ρ), 0 < γi < 2/(1 + ρ), and 0 < ω < 2/(1 + θ), then [figure omitted; refer to PDF]
Using the proofs of Theorems 11 and 12 and [13, Theorem 2.8], we have the following convergence result for Algorithm 9.
Theorem 13.
Let A ∈ Rn×n be an H-matrix and (I − Ll, Ul, El), l = 1, 2, ..., s, a multisplitting of A. Assume that for l = 1, 2, ..., s, we have the following.
(1) The Ll are strictly lower triangular matrices and the Ul are matrices such that the equalities A = I − Ll − Ul hold.
(2) ⟨A⟩ = |I| − |Ll| − |Ul| = |I| − |B|, where |B| = |Ll| + |Ul|.
(3) P is the diagonal matrix defined in Lemma 3, and Ml1(β, ψ), Nl1(β, ψ), Ml2(α, φ), and Nl2(α, φ) are as in Theorem 11.
(4) The index sequence {Pi} is admissible and regulated.
Then the sequence {x(k)} generated by Algorithm 9 converges for any initial x(0) if and only if (α, β, βi, γi, ω) ∈ W3, i = 1, 2, ..., n, with ρ = ρ(|B|) and θ = max1≤l≤s{‖P1−1 (Ml1(β, ψ))−1 Nl1(β, ψ) P1‖∞ ‖P2−1 (Ml2(α, φ))−1 Nl2(α, φ) P2‖∞}, where [figure omitted; refer to PDF]
Remark 14.
It is known that an M-matrix or a symmetric positive definite L-matrix is also an H-matrix. Therefore, the convergence results in Theorems 11, 12, and 13 remain valid for such matrices.
Remark 15.
Since a strictly or an irreducible diagonally dominant matrix also satisfies the conditions of Theorems 11, 12, and 13, our methods are valid for these kinds of matrices as well. Furthermore, for strictly or irreducible diagonally dominant matrices, ‖B‖∞ can take the place of ρ in Theorems 11, 12, and 13.
4. Applied Convergence of Relaxed Chaotic Methods
In Theorems 12 and 13 of this paper, the quantity θ is difficult to compute when carrying out numerical experiments. So, for irreducible diagonally dominant matrices, we also give the following applied convergence results for Algorithms 7 and 9, respectively.
Theorem 16.
Let A ∈ Rn×n be an irreducible diagonally dominant matrix and (I − Ll, Ul, El), l = 1, 2, ..., s, a multisplitting of A. Assume that for l = 1, 2, ..., s, we have the following.
(1) The Ll are strictly lower triangular matrices and the Ul are matrices such that the equalities A = I − Ll − Ul hold.
(2) ⟨A⟩ = |I| − |Ll| − |Ul| = |I| − |B|, where |B| = |Ll| + |Ul|.
(3) P is the diagonal matrix defined in Lemma 3, and Ml1(β, ψ), Nl1(β, ψ), Ml2(α, φ), and Nl2(α, φ) are as in Theorem 11.
Then the sequence {x(k)} generated by Algorithm 7 converges for any initial x(0) if and only if (α, β, βi, γi, ω) ∈ W4, i = 1, 2, ..., n, with θ′ = max1≤i≤n{|1 − γi| + γi‖B‖∞} max1≤i≤n{|1 − βi| + βi‖B‖∞}, where [figure omitted; refer to PDF]
Proof.
We only prove |(HGUSAOR)k| x < θ″ x (k = 1, 2, ..., with 0 ≤ θ′ ≤ 1). Let us define the iteration matrix in Algorithm 7 [figure omitted; refer to PDF] From the proof of Theorem 11, we may define [figure omitted; refer to PDF] First, we need to prove ρ(T1) < 1; ρ(T2) < 1 may be proved similarly. Assume that [figure omitted; refer to PDF] With (36), we know that T1 is nonnegative, so according to [24, Theorem 2.7] there exists an eigenvector x ≥ 0, x ≠ 0, such that T1 x = ρ(T1) x holds; that is, [figure omitted; refer to PDF] Multiplying by ψ−1, it holds that [figure omitted; refer to PDF] As [1 − β + βρ(T1)]|Ul| + |Ll| ≥ 0, it follows by [5, Theorem 11] that [figure omitted; refer to PDF] From the proof of [3, Theorem 3.1], we have ρ(T1) < 1 and 0 ≤ 1 − β + βρ(T1) < 1. Since the matrices A, B, and [1 − β + βρ(T1)]|Ul| + |Ll| are irreducible, by [24, Theorem 2.1] it follows that [figure omitted; refer to PDF] With (40) and (41) and M being irreducible diagonally dominant by rows, we have [figure omitted; refer to PDF] Notice that if γi ≤ 1, we may obtain [figure omitted; refer to PDF] while if 1 < γi < 2/(1 + ‖B‖∞), we also have [figure omitted; refer to PDF] From (42), (43), and (44), we can get [figure omitted; refer to PDF] From Theorem 11, we have [figure omitted; refer to PDF] From (35) and the above proof, we have [figure omitted; refer to PDF] where θ′ = max1≤i≤n{|1 − γi| + γi‖B‖∞} max1≤i≤n{|1 − βi| + βi‖B‖∞}, 0 < ω < 2/(1 + θ′), and θ″ = ωθ′ + |1 − ω| < 1.
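Unlike θ in Theorems 12 and 13, the bound θ′ of Theorem 16 involves only ‖B‖∞ and the relaxation parameters, so it is directly computable. A sketch (the function and variable names are ours):

```python
import numpy as np

def theta_prime(B, betas, gammas):
    """theta' = max_i(|1 - gamma_i| + gamma_i ||B||_inf)
              * max_i(|1 - beta_i| + beta_i ||B||_inf), as in Theorem 16."""
    norm_B = np.abs(B).sum(axis=1).max()          # ||B||_inf
    t_gamma = max(abs(1 - g) + g * norm_B for g in gammas)
    t_beta = max(abs(1 - b) + b * norm_B for b in betas)
    return t_gamma * t_beta

B = np.array([[0.0, 0.5], [0.25, 0.0]])           # ||B||_inf = 0.5
tp = theta_prime(B, betas=[0.6, 0.6], gammas=[0.5, 0.5])
omega_max = 2.0 / (1.0 + tp)   # convergence window: 0 < omega < omega_max
```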
Remark 17.
Obviously, the convergence conditions of Theorem 16 are convenient for carrying out numerical experiments. Using the proofs of Theorems 11 and 16 and [13, Theorem 2.8], we can obtain the following result for Algorithm 9.
Theorem 18.
Let A ∈ Rn×n be an irreducible diagonally dominant matrix and (I − Ll, Ul, El), l = 1, 2, ..., s, a multisplitting of A. Assume that for l = 1, 2, ..., s, we have the following.
(1) The Ll are strictly lower triangular matrices and the Ul are matrices such that the equalities A = I − Ll − Ul hold.
(2) ⟨A⟩ = |I| − |Ll| − |Ul| = |I| − |B|, where |B| = |Ll| + |Ul|.
(3) P is the diagonal matrix defined in Lemma 3, and Ml1(β, ψ), Nl1(β, ψ), Ml2(α, φ), and Nl2(α, φ) are as in Theorem 11.
(4) The index sequence {Pi} is admissible and regulated.
Then the sequence {x(k)} generated by Algorithm 9 converges for any initial x(0) if and only if (α, β, βi, γi, ω) ∈ W5, i = 1, 2, ..., n, with θ′ = max1≤i≤n{|1 − γi| + γi‖B‖∞} max1≤i≤n{|1 − βi| + βi‖B‖∞}, where [figure omitted; refer to PDF]
Remark 19.
As a special case, for the relaxed matrix multisplitting chaotic GSSOR-style methods, we have the corresponding convergence results with α = 1, β = 1, φ = diag(β1, β2, ..., βn), ψ = diag(γ1, γ2, ..., γn), βi ≥ 0, γi ≥ 0 (i = 1, 2, ..., n), and ω a real parameter.
5. Numerical Examples
Example 1.
By difference discretization of a partial differential equation, we can obtain the corresponding coefficient matrix of the linear system (n = 6), which is as follows: [figure omitted; refer to PDF]
Now, we apply the results of Theorem 16 to Algorithm 7. From Algorithm 7, we get the iteration matrix [figure omitted; refer to PDF] Here, we assume that s = 3 and μl,k ≡ 1. By direct calculation with Matlab 7.1, we have [figure omitted; refer to PDF] Table 1 shows that the convergence results of Section 4 are convenient for carrying out numerical experiments, where ρ(H) denotes the spectral radius of the iteration matrix H: [figure omitted; refer to PDF]
Table 1: Convergence results of Theorems 16 and 18.
(α, β, βi, γi, ω) | 2/(1 + ‖B‖∞) | 2/(1 + θ′) | ρ(H)
(0.2, 0.8, 0.6, 0.5, 1.1) | 0.7273 | 1.3793 | 0.5711
(0.3, 0.8, 0.7, 0.5, 1.1) | 0.7273 | 1.3115 | 0.6133
(0.4, 0.7, 0.6, 0.5, 0.8) | 0.7273 | 1.3793 | 0.8365
(0.6, 0.5, 0.4, 0.6, 0.9) | 0.7273 | 1.3793 | 0.69393
(0.8, 0.2, 0.3, 0.7, 1.1) | 0.7273 | 1.3115 | 0.6053
(0.9, 0.1, 0.25, 0.65, 1.1) | 0.7273 | 1.3445 | 0.7154
Remark 20.
Obviously, the bound θ′ of Theorems 16 and 18 is practical and easily calculated when carrying out numerical experiments.
Example 2.
Consider a matrix A ∈ Rn×n of the form [figure omitted; refer to PDF] where ρ(|D|−1|B|) = ν and ν is a real number satisfying |ν| < 1.
In Theorem 11, let 0 ≤ α, β ≤ 1 and 0 < βi, γi < 2/(1 + ν), i = 1, 2, ..., n; then Algorithm 5 converges for any initial vector x(0). In Theorems 12 and 13, let 0 ≤ α, β ≤ 1, 0 < βi, γi < 2/(1 + ν), and 0 < ω < 2/(1 + θ), i = 1, 2, ..., n; then Algorithms 7 and 9 converge for any initial vector x(0). In Theorems 16 and 18, let 0 ≤ α, β ≤ 1, 0 < βi, γi < 2/(1 + ν), and 0 < ω < 2/(1 + θ′), i = 1, 2, ..., n; then Algorithms 7 and 9 converge for any initial vector x(0), where θ′ = max1≤i≤n{|1 − γi| + νγi} max1≤i≤n{|1 − βi| + νβi}.
If we choose ν = 1/5, then in Theorem 11, let 0 ≤ α, β ≤ 1 and 0 < βi, γi < 5/3, i = 1, 2, ..., n; then Algorithm 5 converges for any initial vector x(0). In Theorems 12 and 13, let 0 ≤ α, β ≤ 1, 0 < βi, γi < 5/3, and 0 < ω < 2/(1 + θ), i = 1, 2, ..., n; then Algorithms 7 and 9 converge for any initial vector x(0). In Theorems 16 and 18, let 0 ≤ α, β ≤ 1, 0 < βi, γi < 5/3, and 0 < ω < 2/(1 + θ′), i = 1, 2, ..., n; then Algorithms 7 and 9 converge for any initial vector x(0), where θ′ = max1≤i≤n{|1 − γi| + (1/5)γi} max1≤i≤n{|1 − βi| + (1/5)βi}.
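The parameter windows above can be evaluated mechanically once ν (and, if available, θ′) is known. A sketch (the function and variable names are ours):

```python
def parameter_bounds(nu, theta_prime=None):
    """Upper bounds from Example 2: beta_i, gamma_i < 2/(1 + nu), and
    omega < 2/(1 + theta') when theta' is available."""
    bounds = {"beta_gamma_max": 2.0 / (1.0 + nu)}
    if theta_prime is not None:
        bounds["omega_max"] = 2.0 / (1.0 + theta_prime)
    return bounds

# nu = 1/5 reproduces the bound beta_i, gamma_i < 5/3 stated in the text.
bounds = parameter_bounds(0.2)
```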
6. Conclusions
In this paper, we have considered relaxed matrix parallel multisplitting chaotic GUSAOR-style methods for solving linear systems of algebraic equations Ax = b, in which the coefficient matrix A is an H-matrix or an irreducible diagonally dominant matrix, and have analyzed the convergence of our methods, which use more relaxation factors and generalize the methods of [11, 13, 14]. The parameters can be adjusted suitably so that the convergence properties of the methods can be substantially improved. Furthermore, we have studied some applied convergence results that make numerical experiments convenient to carry out. Finally, we have given some examples, which show that our convergence results are practical and easily calculated.
In particular, one may discuss how to choose the set of relaxation parameters in order to really accelerate the convergence of the considered methods. Furthermore, the optimal choice of this set of relaxation parameters is worth further study.
Acknowledgments
This research is supported by NSFC Tianyuan Mathematics Youth Fund (11226337), NSFC (61203179, 61202098, 61170309, 91130024, 61033009, 61272544, and 11171039), Aeronautical Science Foundation of China (2013ZD55006), Project of Youth Backbone Teachers of Colleges and Universities in Henan Province (2013GGJS-142), ZZIA Innovation Team Fund (2014TD02), Major Project of Development Foundation of Science and Technology of CAEP (2012A0202008), Basic and Advanced Technological Research Project of Henan Province (132300410373, 142300410333), and Natural Science Foundation of Henan Province (14B110023).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] D. P. O'Leary, R. E. White, "Multisplittings of matrices and parallel solution of linear systems," SIAM Journal on Algebraic and Discrete Methods , vol. 6, no. 4, pp. 630-640, 1985.
[2] A. Frommer, G. Mayer, "Convergence of relaxed parallel multisplitting methods," Linear Algebra and Its Applications , vol. 119, pp. 141-152, 1989.
[3] J. Mas, V. Migallón, J. Penadés, D. B. Szyld, "Nonstationary parallel relaxed multisplitting methods," Linear Algebra and Its Applications , vol. 241-243, pp. 733-747, 1996.
[4] T. X. Gu, X. P. Liu, L. J. Shen, "Relaxed parallel two-stage multisplitting methods," International Journal of Computer Mathematics , vol. 75, no. 3, pp. 351-367, 2000.
[5] T.-X. Gu, X.-P. Liu, "Parallel two-stage multisplitting iterative methods," International Journal of Computer Mathematics , vol. 20, no. 2, pp. 153-166, 1998.
[6] D. Chazan, W. Miranker, "Chaotic relaxation," Linear Algebra and its Applications , vol. 2, pp. 199-222, 1969.
[7] R. Bru, L. Elsner, M. Neumann, "Models of parallel chaotic iteration methods," Linear Algebra and its Applications , vol. 103, pp. 175-192, 1988.
[8] L. Elsner, M. Neumann, B. Vemmer, "The effect of the number of processors on the convergence of the parallel block Jacobi method," Linear Algebra and Its Applications , vol. 154-156, pp. 311-330, 1991.
[9] D. Jiang, Z. Xu, Z. Chen, Y. Han, H. Xu, "Joint time-frequency sparse estimation of large-scale network traffic," Computer Networks , vol. 55, no. 15, pp. 3533-3547, 2011.
[10] D.-D. Jiang, Z.-Z. Xu, H.-W. Xu, Y. Han, Z.-H. Chen, Z. Yuan, "An approximation method of origin-destination flow traffic from link load counts," Computers & Electrical Engineering , vol. 37, no. 6, pp. 1106-1121, 2011.
[11] P. E. Kloeden, D. J. Yuan, "Convergence of relaxed chaotic parallel iterative methods," Bulletin of the Australian Mathematical Society , vol. 50, no. 1, pp. 167-176, 1994.
[12] S.-Q. Shen, T.-Z. Huang, "New comparison results for parallel multisplitting iterative methods," Applied Mathematics and Computation , vol. 206, no. 2, pp. 738-747, 2008.
[13] Y. Song, D. Yuan, "On the convergence of relaxed parallel chaotic iterations for H -matrix," International Journal of Computer Mathematics , vol. 52, no. 3-4, pp. 195-209, 1994.
[14] D. Yuan, "On the convergence of parallel multisplitting asynchronous GAOR method for H -matrix," Applied Mathematics and Computation , vol. 160, no. 2, pp. 477-485, 2005.
[15] L. T. Zhang, T. Z. Huang, T. X. Gu, X. L. Guo, "Convergence of relaxed multisplitting USAOR methods for H -matrices linear systems," Applied Mathematics and Computation , vol. 202, no. 1, pp. 121-132, 2008.
[16] L. Zhang, T. Huang, T. Gu, X. Guo, J. Yue, "Convergent improvement of SSOR multisplitting method for an H -matrix," Journal of Computational and Applied Mathematics , vol. 225, no. 2, pp. 393-397, 2009.
[17] L.-T. Zhang, T.-Z. Huang, S.-H. Cheng, T.-X. Gu, Y.-P. Wang, "A note on parallel multisplitting TOR method for H -matrices," International Journal of Computer Mathematics , vol. 88, no. 3, pp. 501-507, 2011.
[18] L. T. Zhang, T. Z. Huang, S. H. Cheng, T. X. Gu, "The weaker convergence of non-stationary matrix multisplitting methods for almost linear systems," Taiwanese Journal of Mathematics , vol. 15, no. 4, pp. 1423-1436, 2011.
[19] L.-T. Zhang, J.-L. Li, "The weaker convergence of modulus-based synchronous multisplitting multi-parameters methods for linear complementarity problems," Computers & Mathematics with Applications , vol. 67, no. 10, pp. 1954-1959, 2014.
[20] L.-T. Zhang, X.-Y. Zuo, "Improved convergence theorems of multisplitting methods for the linear complementarity problem," Applied Mathematics and Computation , in press.
[21] L.-T. Zhang, "A new preconditioner for generalized saddle matrices with highly singular (1,1) blocks," International Journal of Computer Mathematics , 2013.
[22] Y. Zhang, "The USAOR iterative method for linear systems," Numerical Mathematics , vol. 9, no. 4, pp. 354-365, 1987.
[23] Y. Song, "On the convergence of the generalized AOR method," Linear Algebra and Its Applications , vol. 256, pp. 199-218, 1997.
[24] R. S. Varga, Matrix Iterative Analysis , Prentice-Hall, Englewood Cliffs, NJ, USA, 1962.
Copyright © 2014 Li-Tao Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Based on the methods presented by Song and Yuan (1994), we construct relaxed matrix parallel multisplitting chaotic generalized USAOR-style methods by introducing more relaxation parameters and analyze the convergence of our methods when the coefficient matrices are H-matrices or irreducible diagonally dominant matrices. The parameters can be adjusted suitably so that the convergence properties of the methods can be substantially improved. Furthermore, we study some applied convergence results that make numerical experiments convenient to carry out. Finally, we give some numerical examples, which show that our convergence results are practical and easily applied.