Recommended by Ravshan Ashurov
School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Pulau Pinang, Malaysia
Received 17 May 2012; Accepted 12 July 2012
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Consider the finite difference discretisation schemes for solving the following boundary value problem, the two-dimensional Poisson equation with Dirichlet boundary conditions:

u_xx + u_yy = f(x, y), (x, y) ∈ Ω, with u = g(x, y) on ∂Ω. (1.1)

Here, Ω is the continuous unit square solution domain with boundary ∂Ω. This equation plays a very important role in the modelling of fluid flow phenomena and heat conduction problems. Let Ω be discretised uniformly in both the x and y directions with mesh size h = 1/n, where n is an integer. The simplest finite difference approximation of the Laplacian is the standard five-point formula

u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} − 4u_{ij} = h²f_{ij}.

Here, u_{ij} = u(x_i, y_j). Another approximation to (1.1) can be derived from the rotated five-point finite-difference approximation, giving [ 1]

u_{i-1,j-1} + u_{i+1,j+1} + u_{i+1,j-1} + u_{i-1,j+1} − 4u_{ij} = 2h²f_{ij}.

Based on the latter approximation, improved point and group iterative schemes have been developed over the last few years for solving several types of partial differential equations [ 1- 5]. In particular, the Modified Explicit Decoupled Group (MEDG) method [ 6, 7] was formulated as the latest addition to this family of four-point explicit group methods for solving the Poisson equation. This method has been shown to be the fastest among the existing explicit group methods due to its lower computational complexity.
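As a quick numerical illustration (not part of the original text), the two stencils above can be checked on a function whose Laplacian is known exactly. On the quadratic u(x, y) = x² + y², whose Laplacian is 4 everywhere, both the standard and the rotated five-point formulas are exact, so the residuals printed below are at machine-precision level:

```python
import numpy as np

# Check the standard and rotated (skewed) five-point approximations of
# the Laplacian on u(x, y) = x^2 + y^2, for which u_xx + u_yy = 4.
n = 8
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
u = x[:, None] ** 2 + x[None, :] ** 2            # u_ij = x_i^2 + y_j^2

# standard: (u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4 u_ij) / h^2
std = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
       - 4.0 * u[1:-1, 1:-1]) / h**2
# rotated: diagonal neighbours at spacing sqrt(2) h, hence the 2 h^2
rot = (u[:-2, :-2] + u[2:, 2:] + u[:-2, 2:] + u[2:, :-2]
       - 4.0 * u[1:-1, 1:-1]) / (2.0 * h**2)

print(np.max(np.abs(std - 4.0)), np.max(np.abs(rot - 4.0)))
```

For general smooth u both stencils are second-order accurate; the rotated stencil couples only "diagonal" grid points, which is what allows the EDG family to decouple the mesh into independent groups.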
Since it is well known that preconditioners play a vital role in accelerating the convergence rates of iterative methods, several preconditioning strategies have been used to improve the convergence rate of the explicit group methods derived from the standard and skewed (rotated) finite difference operators [ 8- 11]. In particular, Saeed and Ali [ 12- 14] presented an (I + S̄)-type preconditioning matrix applied to the systems obtained from the four-point Explicit Decoupled Group (EDG) and the Modified Explicit Decoupled Group (MEDG) methods for solving the elliptic partial differential equation, where S̄ is obtained by taking the first upper diagonal groups of the iteration matrix of the original system. The numerical experiments performed on these methods yielded very encouraging results. However, no detailed study of the spectral radii of all these preconditioned systems has been done to confirm the superiority of this preconditioner.
The focus of this study is to establish the convergence properties of the preconditioned systems based on the splitting-type preconditioner (I + S̄) for improving the performance and reliability of this family of explicit group methods derived from the rotated finite-difference formula. We will prove that this type of preconditioner applied to the MEDG SOR minimizes the spectral radius of the preconditioned matrix the most, provided the relaxation parameter lies in a certain optimum range. This paper is organised as follows: in Section 2, we present the preconditioner applied to the system resulting from the EDG SOR method. A brief description of the application of the preconditioner in block formulation to the MEDG SOR is given in Section 3. The theoretical convergence analysis of these methods is discussed in Section 4. In Section 5, we give a numerical example to confirm the results obtained in Section 4. Finally, we give a brief conclusion in Section 6.
2. Preconditioned Explicit Decoupled Group SOR (EDG SOR)
For convenience, we will now briefly explain some of the definitions used in this paper.
Definition 2.1 (see [ 15]).
A matrix A of order n has Property A if there exist two disjoint subsets S and T of W = {1, 2, ..., n} with S ∪ T = W such that if i ≠ j and either a_ij ≠ 0 or a_ji ≠ 0, then either i ∈ S and j ∈ T, or else i ∈ T and j ∈ S.
Definition 2.2 (see [ 3]).
An ordered grouping π of W = {1, 2, ..., n} is a subdivision of W into disjoint subsets R_1, R_2, ..., R_q such that R_1 ∪ R_2 ∪ ... ∪ R_q = W. Given a matrix A and an ordered grouping π, we define the submatrices A_{m,n} for m, n = 1, 2, ..., q as follows: A_{m,n} is formed from A by deleting all rows except those corresponding to R_m and all columns except those corresponding to R_n.
Definition 2.3 (see [ 3]).
Let π be an ordered grouping with q groups. A matrix A has Property A(π) if the q × q matrix Z = (z_{r,s}) defined by z_{r,s} = 0 if A_{r,s} = 0 and z_{r,s} = 1 if A_{r,s} ≠ 0 has Property A.
Definition 2.4 (see [ 15]).
A matrix A of order n is consistently ordered if for some t there exist disjoint subsets S_1, S_2, ..., S_t of W = {1, 2, ..., n} with S_1 ∪ S_2 ∪ ... ∪ S_t = W such that if i and j are associated (i.e., i ≠ j and either a_ij ≠ 0 or a_ji ≠ 0), then j ∈ S_{k+1} if j > i and j ∈ S_{k−1} if j < i, where S_k is the subset containing i.
Note that a matrix A is a π-consistently ordered matrix if the matrix Z in Definition 2.3 is consistently ordered.
From the discretisation of the EDG finite-difference formula in solving the Poisson equation, the linear system [figure omitted; refer to PDF] is obtained with [ 1] [figure omitted; refer to PDF] Let A ∈ C_{π,p}^{n_i,n_i} (p = N − 1) be written as A = D − E − F, where D = diag(A_11, A_22, ..., A_pp), and E and F are the strict block lower and strict block upper triangular parts of A. Here, the diagonal blocks A_ii are nonsingular, and C_{π,p}^{n_i,n_i} denotes the set of all matrices of the form (2.1) relative to the given block partitioning π. The block Jacobi iteration matrix is B_J = D^{-1}(E + F) = L + U, where L = D^{-1}E and U = D^{-1}F; the block Gauss-Seidel iteration matrix is B_GS = (I − L)^{-1}U; and the Block Successive Over-Relaxation (BSOR) iteration matrix is (2.3)

B_w = (I − wL)^{-1}[(1 − w)I + wU].

Since the matrix A of (2.1) is π-consistently ordered and possesses Property A(π), the theory of block SOR is valid for this iterative method [ 1].
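The block splitting and the resulting iteration matrices can be sketched numerically. The sketch below uses a small block tridiagonal 1D Laplacian with 2 × 2 diagonal blocks as a stand-in system (the EDG coefficient matrix in the text has the same block-tridiagonal pattern with larger blocks); for such a π-consistently ordered matrix, ρ(B_GS) = ρ(B_J)² holds exactly:

```python
import numpy as np

# Block splitting A = D - E - F and the iteration matrices
# B_J = L + U, B_GS = (I - L)^{-1} U, and the BSOR matrix of (2.3),
# B_w = (I - wL)^{-1} [(1 - w)I + wU], on a small stand-in system.
n, p = 6, 2                                       # size, block size
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

blk = np.repeat(np.arange(n // p), p)             # block index of each row
D = np.where(blk[:, None] == blk[None, :], A, 0.0)
E = -np.where(blk[:, None] > blk[None, :], A, 0.0)    # strict block lower
F = -np.where(blk[:, None] < blk[None, :], A, 0.0)    # strict block upper

Dinv = np.linalg.inv(D)
L, U = Dinv @ E, Dinv @ F
I = np.eye(n)
rho = lambda M: max(abs(np.linalg.eigvals(M)))

w = 1.2
B_J = L + U
B_GS = np.linalg.inv(I - L) @ U
B_w = np.linalg.inv(I - w * L) @ ((1.0 - w) * I + w * U)
# block tridiagonal => pi-consistently ordered, so rho(B_GS) = rho(B_J)^2
print(rho(B_J), rho(B_GS), rho(B_w))              # all < 1
```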
The theoretical optimum relaxation factor ω_p for implementing the group SOR iterative scheme can be computed from the formula (2.4)

ω_p = 2 / (1 + √(1 − ρ(J)²)),

where ρ(J) is the spectral radius of the group Jacobi iteration matrix. Yousif and Evans [ 16] gave a good estimate of this spectral radius for the EDG method: [figure omitted; refer to PDF] In an effort to further accelerate the convergence rate of this method, Saeed and Ali [ 12] applied a preconditioner P to the linear system (2.1) and transformed it into the equivalent system [figure omitted; refer to PDF] with P = (I + S̄), where I is the identity matrix of the same dimension as A, while S̄ is obtained by taking the first upper diagonal groups of R_0 in the original system above, as follows: [figure omitted; refer to PDF] Here, 0̃ is a (2 × 2) null matrix.
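Formula (2.4) is easy to evaluate directly; a small hedged helper (the sample ρ(J) values below are illustrative, not taken from the paper):

```python
import math

# Theoretical optimum SOR relaxation factor of (2.4) for a
# consistently ordered system: w_opt = 2 / (1 + sqrt(1 - rho_J^2)),
# where rho_J is the spectral radius of the block Jacobi matrix.
def omega_opt(rho_J: float) -> float:
    return 2.0 / (1.0 + math.sqrt(1.0 - rho_J ** 2))

print(omega_opt(0.0))          # 1.0: SOR degenerates to Gauss-Seidel
print(omega_opt(0.9), omega_opt(0.99))   # grows towards 2 as rho_J -> 1
```

Note that ω_p increases monotonically from 1 towards 2 as ρ(J) approaches 1, which is why the experimental optimum values in Section 5 get closer to 2 as the mesh is refined.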
The system (2.1) becomes [figure omitted; refer to PDF] Hence, we have the linear system of equations [figure omitted; refer to PDF] with [figure omitted; refer to PDF] The SOR iteration matrix of this scheme, called the Modified Block Successive Over-Relaxation (MBSOR) iteration matrix, is given by (2.11)

B̃_w = {I − w(L + S̄L)}^{-1}[(1 − w)I + w(U − S̄ + S̄U)].

The matrix Ā of (2.9) is π-consistently ordered and possesses Property A(π) [ 13].
3. Preconditioned-Modified Explicit Decoupled Group SOR (MEDG SOR)
Using the MEDG approximation formula in discretising the Poisson equation, the following system is obtained [ 6]: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] It is observed that the partitioning of A_m is in the following block form: [figure omitted; refer to PDF] with p = N − 2, where A_m,ii ∈ C_{π,p}^{n_i,n_i}, i = 1, 2, ..., p, and ∑_{i=1}^{p} n_i = n. Let A_m = D_m − E_m − F_m, where D_m = diag(A_m,11, A_m,22, ..., A_m,pp) and [figure omitted; refer to PDF] are the block diagonal, strict block lower triangular, and strict block upper triangular parts of A_m. Here, the diagonal blocks A_m,ii are nonsingular. The block Jacobi iteration matrix is B_J(A_m) = D_m^{-1}(E_m + F_m) = L_m + U_m, where L_m = D_m^{-1}E_m and U_m = D_m^{-1}F_m, while the block Gauss-Seidel iteration matrix is B_GS(A_m) = (I_m − L_m)^{-1}U_m. The Block Successive Over-Relaxation (BSOR) iteration matrix is, therefore, (3.5)

T_w = (I_m − wL_m)^{-1}[(1 − w)I_m + wU_m].

Since the matrix A_m of (3.3) is π-consistently ordered and possesses Property A(π), the theory of block SOR is also valid for this iterative method, which is therefore convergent [ 6].
Similarly, the theoretical optimum relaxation factor ω_p for implementing this group SOR iterative scheme can be obtained from (2.4). In view of the fact that the grid spacing satisfies h_MEDG = 2h_EDG, an estimate of the spectral radius of the group Jacobi iteration matrix of the MEDG method may be obtained from (2.5) as [figure omitted; refer to PDF] Good agreement between the theoretical estimates and the experimental values of the optimum relaxation parameters was observed in our numerical experiments. Upon applying the left-sided preconditioner P = (I + S̃) to system (3.1), the following system is obtained [ 13]: [figure omitted; refer to PDF] with [figure omitted; refer to PDF] where 0̃ is a (2 × 2) null matrix.
The preconditioner I + S̃ is of the following form: [figure omitted; refer to PDF] Here, I_0 is a (2 × 2) identity matrix, and the system (3.7) becomes [figure omitted; refer to PDF] Hence, [figure omitted; refer to PDF] where [figure omitted; refer to PDF] The resulting SOR iteration matrix, called the Improved Modified Block Successive Over-Relaxation (IMBSOR) iteration matrix, is given by [figure omitted; refer to PDF]
4. Convergence Properties of the Preconditioned Group Methods
In this section, we derive several properties related to the convergence of the preconditioned methods discussed in Sections 2 and 3. We begin with several preliminary theorems and lemmas needed for the proofs of the convergence properties. The spectral radius of a matrix is denoted by ρ(·) and is defined as the largest of the moduli of the eigenvalues of the matrix.
Theorem 4.1 (see [ 15]).
If A = M − N is a regular splitting of the matrix A and A^{-1} ≥ 0, then

ρ(M^{-1}N) = ρ(A^{-1}N) / (1 + ρ(A^{-1}N)) < 1.

Thus, an iterative method with iteration matrix M^{-1}N is convergent for any initial vector x^{(0)}.
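A quick numerical illustration of Theorem 4.1 (the example matrix is assumed for illustration): with M = diag(A), the splitting A = M − N is regular (M^{-1} ≥ 0, N ≥ 0), A^{-1} ≥ 0, and both the bound ρ(M^{-1}N) < 1 and the stated identity hold:

```python
import numpy as np

# Regular splitting check on a small M-matrix: A = M - N with
# M = diag(A), N = M - A >= 0; A^{-1} >= 0, so point Jacobi converges.
A = np.array([[ 4., -1., -1.],
              [-1.,  4., -1.],
              [-1., -1.,  4.]])
M = np.diag(np.diag(A))
N = M - A
assert (np.linalg.inv(A) >= 0).all() and (N >= 0).all()

rho = lambda B: max(abs(np.linalg.eigvals(B)))
r = rho(np.linalg.inv(M) @ N)          # spectral radius of M^{-1} N
tau = rho(np.linalg.inv(A) @ N)        # spectral radius of A^{-1} N
print(r, tau / (1.0 + tau))            # equal, and both < 1
```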
An accurate analysis of convergence properties of the SOR method is possible if the matrix A is consistently ordered in the following sense (see [ 17]).
Definition 4.2.
A matrix A is a generalized (q, r)-consistently ordered matrix (a GCO(q, r)-matrix) if Δ = det(α^q E + α^{-r} F − kD) is independent of α for all α ≠ 0 and for all k. Here, D = diag A, and E and F are strictly lower and strictly upper triangular matrices, respectively, such that A = D − E − F.
Definition 4.3 (see [ 17]).
A matrix A of the form (3.3) is said to be generally consistently ordered (π, q, r), or simply GCO(π, q, r), where q and r are positive integers, if for the partitioning π of A the diagonal submatrices A_ii, i = 1, 2, ..., p (p ≥ 2), are nonsingular, and the eigenvalues of [figure omitted; refer to PDF] are independent of α for all α ≠ 0, where L and U are the strict block lower and upper triangular parts of A, respectively.
For any matrix C = (c_ij) in C_{π,p}^{n_i,n_i}, let |C| denote the block matrix in C_{π,p}^{n_i,n_i} with entries |c_ij|. Given the matrix [figure omitted; refer to PDF] μ̄ denotes the spectral radius of the matrix [figure omitted; refer to PDF] so that [figure omitted; refer to PDF]
Lemma 4.4 (see [ 17]).
Let |B_J| of (4.4) be a GCO(q, r)-matrix and p = q + r. Then, for any real nonnegative constants α, β, and γ with γ ≠ 0 satisfying α^r β^q μ̄^p < γ^p, the matrix A'' = γI − α|L| − β|U| is such that (A'')^{-1} ≥ 0.
Lemma 4.5 (see [ 14]).
Suppose A = I − L − U is a GCO(π, q, r) matrix, where −L and −U are strictly lower and upper triangular matrices, respectively. Let B_w be the block iteration matrix of the SOR method given by (2.3). If 0 < w < 2, then the block SOR method converges, that is, ρ(B_w) < 1.
Theorem 4.6 (see [ 14]).
Suppose A = I − L − U is a GCO(π, q, r) matrix, where −L and −U are strictly lower and upper triangular matrices, respectively. Let B_w and B̃_w be the iteration matrices of the SOR method given by (2.3) and (2.11), respectively. If 0 < w < 2, then
(i) ρ(B̃_w) < ρ(B_w) if ρ(B_w) < 1,
(ii) ρ(B̃_w) = ρ(B_w) if ρ(B_w) = 1,
(iii) ρ(B̃_w) > ρ(B_w) if ρ(B_w) > 1.
Using the results and definitions stated above, we can prove the following lemma and theorems in relation to the spectral radius of the iteration matrices of the preconditioned group methods and their unpreconditioned counterparts.
Lemma 4.7.
Suppose A_m = I_m − L_m − U_m is a GCO(π, q, r) matrix, where −L_m and −U_m are strictly lower and upper triangular matrices, respectively. Let T_w be the block iteration matrix of the SOR method given by (3.5). If 1 ≤ w < 2, then the block SOR method converges, that is, ρ(T_w) < 1.
Proof.
Let the matrix A_m with partitioning π be given as in (3.3), and let the block SOR iteration matrix T_w be given as in (3.5).
Set [figure omitted; refer to PDF] Clearly, |T_w| ≤ B'_w, and hence we can conclude that ρ(T_w) ≤ ρ(B'_w).
Consider the matrix A' ∈ C_{π,p}^{n_i,n_i} defined by [figure omitted; refer to PDF] where M̄_m = I_m − |w||L_m| and N̄_m = |1 − w|I_m + |w||U_m|. It is easily seen that M̄_m is nonsingular and B'_w = M̄_m^{-1} N̄_m. Moreover, since M̄_m^{-1} ≥ 0 and N̄_m ≥ 0, M̄_m − N̄_m is a regular splitting of A' (cf. [ 11]). For w satisfying 1 ≤ w < 2, Lemma 4.4 implies that (A')^{-1} ≥ 0. Therefore, recalling Theorem 4.1 above, we have ρ(B'_w) < 1. Hence, ρ(T_w) < 1, which completes the proof.
The result of Lemma 4.7 enables us to prove the following theorem.
Theorem 4.8.
Suppose A_m = I_m − L_m − U_m is a GCO(π, q, r) matrix, where −L_m and −U_m are strictly lower and upper triangular matrices, respectively. Let T_w and T̃_w be the iteration matrices of the SOR method given by (3.5) and (3.13), respectively. If 1 ≤ w < 2, then
(i) ρ(T̃_w) < ρ(T_w) if ρ(T_w) < 1,
(ii) ρ(T̃_w) = ρ(T_w) if ρ(T_w) = 1,
(iii) ρ(T̃_w) > ρ(T_w) if ρ(T_w) > 1.
Proof.
From Lemma 4.7, and since the matrix A_m of (3.3) is a GCO(π, q, r) matrix and T_w = (I_m − wL_m)^{-1}{(1 − w)I_m + wU_m}, there exists a positive vector y such that [figure omitted; refer to PDF] where λ = ρ(T_w), or equivalently [figure omitted; refer to PDF] Also, since [figure omitted; refer to PDF] we can write [figure omitted; refer to PDF] Rearranging (4.11), we get [figure omitted; refer to PDF] But from (4.9), we have [figure omitted; refer to PDF] Therefore, (4.12) can be written as [figure omitted; refer to PDF] Hence, for 1 ≤ w < 2 and from [ 10], we get the following cases:
(i) if λ < 1, then T̃_w y − λy < 0, and from Theorem 4.6 we have ρ(T̃_w) < ρ(T_w),
(ii) if λ = 1, then T̃_w y = λy, and from Theorem 4.6 we have ρ(T̃_w) = ρ(T_w) = 1,
(iii) if λ > 1, then T̃_w y − λy > 0, and from Theorem 4.6 we have ρ(T̃_w) > ρ(T_w).
Thus, the proof is complete.
Theorem 4.9.
Suppose A = I − L − U and A_m = I_m − L_m − U_m are GCO(π, q, r) matrices, where −L, −L_m, −U, and −U_m are the strictly lower and upper triangular matrices of A and A_m, respectively. Let B_w, B̃_w, T_w, and T̃_w be the iteration matrices of the SOR method given by (2.3), (2.11), (3.5), and (3.13), respectively. If 1 ≤ w < 2, then
(i) ρ(T̃_w) < ρ(T_w) < ρ(B̃_w) < ρ(B_w) if ρ(B_w) < 1,
(ii) ρ(T̃_w) = ρ(T_w) = ρ(B̃_w) = ρ(B_w) if ρ(B_w) = 1,
(iii) ρ(T̃_w) > ρ(T_w) > ρ(B̃_w) > ρ(B_w) if ρ(B_w) > 1.
Proof.
In the same manner as the proof of Theorem 4.8, since the matrix Ā of (2.9) is a GCO(π, q, r) matrix (see [ 13]) and B̃_w = {I − w(L + S̄L)}^{-1}[(1 − w)I + w(U − S̄ + S̄U)], there exists a positive vector v such that [figure omitted; refer to PDF] where [figure omitted; refer to PDF] Equation (4.15) can be written as [figure omitted; refer to PDF] Also, since T_w = (I_m − wL_m)^{-1}{(1 − w)I_m + wU_m}, we can write [figure omitted; refer to PDF] But from (4.17) we have [figure omitted; refer to PDF] Thus, from (4.19) and since A_m of (3.3) is a GCO(π, q, r) matrix, we get [figure omitted; refer to PDF] Equation (4.18) can then be written as [figure omitted; refer to PDF] Hence, we can conclude that, for 1 ≤ w < 2, if
(a) λ̄ < 1, then T_w v − λ̄v < 0, and from Lemma 4.7 we have ρ(T_w) < ρ(B̃_w),
(b) λ̄ = 1, then T_w v = λ̄v, and from Lemma 4.7 we have ρ(T_w) = ρ(B̃_w) = 1,
(c) λ̄ > 1, then T_w v − λ̄v > 0, and from Lemma 4.7 we have ρ(T_w) > ρ(B̃_w).
In consequence of the above, for 1 ≤ w < 2 and from Theorems 4.6 and 4.8, we have
(i) ρ(T̃_w) < ρ(T_w) < ρ(B̃_w) < ρ(B_w) if ρ(B_w) < 1,
(ii) ρ(T̃_w) = ρ(T_w) = ρ(B̃_w) = ρ(B_w) if ρ(B_w) = 1,
(iii) ρ(T̃_w) > ρ(T_w) > ρ(B̃_w) > ρ(B_w) if ρ(B_w) > 1,
and the theorem is proved.
In view of Theorem 4.9, the superiority of the preconditioned MEDG SOR over the unpreconditioned MEDG SOR and EDG SOR methods, as well as over the preconditioned EDG SOR, is confirmed for relaxation parameters lying in the optimum range.
5. Numerical Experiments and Discussion of Results
To further confirm the results obtained in Theorems 4.8 and 4.9, several experiments were carried out on the following model problem with Dirichlet boundary conditions: [figure omitted; refer to PDF] This problem has the exact solution u(x, y) = e^{xy} on the unit square solution domain. The values of u were calculated using different mesh sizes N = 34, 86, 118, 186, and 222. The tolerance was set to ε = 5 × 10^{-6}. The experimental optimum relaxation parameter w was obtained by running the programs repeatedly and choosing the value which gave the fastest rate of convergence. The processor was an Intel(R) Core(TM) 2 Duo with 3 GB of memory, and the software used to implement and generate the results was Dev-C++ version 4.9.9.2. Tables 1 and 2 display the corresponding number of iterations (k), optimum execution times (t), and maximum errors (e) for the unpreconditioned and preconditioned EDG SOR and MEDG SOR methods, respectively.
Table 1: Comparison of performances for the original (unpreconditioned) EDG SOR and MEDG SOR.

|     | EDG SOR |     |          |          | MEDG SOR |    |          |          |
| N   | w       | k   | t (secs) | e        | w        | k  | t (secs) | e        |
| 34  | 1.753   | 49  | 0        | 4.11E-06 | 1.586    | 21 | 0        | 5.00E-06 |
| 86  | 1.894   | 122 | 0.047    | 4.88E-06 | 1.812    | 42 | 0.023    | 4.57E-06 |
| 118 | 1.921   | 166 | 0.078    | 4.89E-06 | 1.862    | 50 | 0.035    | 2.08E-06 |
| 186 | 1.943   | 233 | 0.187    | 4.79E-06 | 1.914    | 81 | 0.064    | 4.36E-06 |
| 222 | 1.948   | 256 | 0.327    | 4.78E-06 | 1.931    | 96 | 0.145    | 1.84E-06 |
Table 2: Comparison of performances for the preconditioned EDG SOR and MEDG SOR.

|     | Preconditioned EDG SOR |     |          |          | Preconditioned MEDG SOR |    |          |          |
| N   | w       | k   | t (secs) | e        | w        | k  | t (secs) | e        |
| 34  | 1.679   | 41  | 0        | 3.85E-06 | 1.442    | 16 | 0        | 3.47E-06 |
| 86  | 1.757   | 88  | 0.034    | 3.87E-06 | 1.5882   | 30 | 0.008    | 3.09E-06 |
| 118 | 1.774   | 106 | 0.051    | 4.47E-06 | 1.642    | 43 | 0.016    | 3.55E-06 |
| 186 | 1.782   | 151 | 0.133    | 4.68E-06 | 1.684    | 59 | 0.025    | 4.24E-06 |
| 222 | 1.795   | 168 | 0.198    | 4.15E-06 | 1.671    | 72 | 0.078    | 3.99E-06 |
From the results in Table 1, it is clear that the original MEDG SOR method is superior to the EDG SOR method in terms of the number of iterations and computing times. The superiority of the preconditioned MEDG SOR over the preconditioned EDG SOR is likewise evident in Table 2. The preconditioned EDG SOR was also outperformed by the unpreconditioned MEDG SOR, as shown in Figure 1, since the spectral radius of the latter is smaller than that of the former, as proven in Theorem 4.9. From the numerical results, it is also apparent that the preconditioned MEDG SOR scheme requires the least computing effort among the four methods in terms of iteration counts and execution times, owing to its having the smallest spectral radius of the four schemes.
Figure 1: Number of iterations ( k ) for the four methods for different mesh sizes N .
[figure omitted; refer to PDF]
Figure 1 shows the number of iterations needed for convergence of the unpreconditioned and preconditioned methods, in agreement with the theoretical results obtained in Theorem 4.9.
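The general experimental setup above can be reproduced in miniature. The sketch below is a plain point-SOR stand-in (the EDG/MEDG group sweeps themselves are more involved and are not reproduced here): it solves the model problem with exact solution u = e^{xy}, hence f = (x² + y²)e^{xy}, Dirichlet data from the exact solution, and the same 5 × 10^{-6} tolerance; the grid size and ω formula used are assumptions for illustration:

```python
import numpy as np

# Point-SOR solver for u_xx + u_yy = f on the unit square, counting
# iterations to a 5e-6 max-update tolerance, as in the experiments.
def sor(n, w, tol=5e-6, maxit=100_000):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    exact = np.exp(np.outer(x, x))                    # u = e^{xy}
    f = (x[:, None] ** 2 + x[None, :] ** 2) * exact   # its Laplacian
    u = exact.copy()
    u[1:-1, 1:-1] = 0.0                               # boundary = exact data
    for k in range(1, maxit + 1):
        diff = 0.0
        for i in range(1, n):
            for j in range(1, n):
                new = (1.0 - w) * u[i, j] + w * 0.25 * (
                    u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                    - h * h * f[i, j])
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:
            return k, float(np.max(np.abs(u - exact)))
    return maxit, float(np.max(np.abs(u - exact)))

# point-SOR optimum for this stencil: w = 2 / (1 + sin(pi h))
k, err = sor(16, w=2.0 / (1.0 + np.sin(np.pi / 16)))
print(k, err)    # converges quickly; err is at the O(h^2) truncation level
```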
6. Conclusion
In this paper, we presented a theoretical convergence analysis of a specific splitting-type preconditioner in block formulation applied to the linear systems resulting from a class of group iterative schemes, specifically the EDG SOR and MEDG SOR schemes. We have shown that the spectral radius of the iteration matrix of the preconditioned MEDG SOR method is the smallest compared with those of the unpreconditioned MEDG SOR, EDG SOR, and preconditioned EDG SOR methods, provided that the relaxation parameter satisfies ω ∈ [1, 2). This work confirms, theoretically and experimentally, the superiority of the preconditioned MEDG SOR method in terms of convergence rate within this class of group iterative methods.
Acknowledgment
The authors acknowledge the Fundamental Research Grant Scheme (203/PMATHS/6711188) for supporting the completion of this article.
[1] A. R. Abdullah, "The four point explicit decoupled group (EDG) method: a fast Poisson solver," International Journal of Computer Mathematics, vol. 38, pp. 61-70, 1991.
[2] D. J. Evans, W. S. Yousif, "The implementation of the explicit block iterative methods on the balance 8000 parallel computer," Parallel Computing , vol. 16, no. 1, pp. 81-97, 1990.
[3] M. M. Martins, W. S. Yousif, D. J. Evans, "Explicit group AOR method for solving elliptic partial differential equations," Neural, Parallel & Scientific Computations , vol. 10, no. 4, pp. 411-422, 2002.
[4] M. Othman, A. R. Abdullah, "Efficient four points modified explicit group Poisson solver," International Journal of Computer Mathematics , vol. 76, no. 2, pp. 203-217, 2000.
[5] W. S. Yousif, D. J. Evans, "Explicit group over-relaxation methods for solving elliptic partial differential equations," Mathematics and Computers in Simulation , vol. 28, no. 6, pp. 453-466, 1986.
[6] N. H. M. Ali, K. F. Ng, "Modified explicit decoupled group method in the solution of 2-D elliptic PDEs," in Proceedings of the 12th WSEAS International Conference on Applied Mathematics, pp. 162-167, Cairo, Egypt, December 2007.
[7] N. H. M. Ali, K. F. Ng, "A new iterative elliptic PDE solver on a distributed PC cluster," in Proceedings of the 9th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'08), pp. 47-53, Dunedin, New Zealand, December 2008.
[8] A. D. Gunawardena, S. K. Jain, L. Snyder, "Modified iterative methods for consistent linear systems," Linear Algebra and Its Applications , vol. 154-156, pp. 123-143, 1991.
[9] S. C. Lee, "Point and group iterative method accelerated techniques for solving the Poisson problem," M.S. thesis, USM, Penang, Malaysia, 2006.
[10] M. M. Martins, D. J. Evans, W. Yousif, "Further results on the preconditioned SOR method," International Journal of Computer Mathematics , vol. 77, no. 4, pp. 603-610, 2001.
[11] M. Usui, T. Kohno, H. Niki, "On the preconditioned SOR method," International Journal of Computer Mathematics , vol. 59, no. 1, pp. 123-130, 1995.
[12] A. M. Saeed, N. H. M. Ali, "Preconditioned ( I + S ¯ ) group iterative methods on rotated grids," European Journal of Scientific Research , vol. 37, no. 2, pp. 278-287, 2009.
[13] A. M. Saeed, N. H. M. Ali, "Preconditioned modified explicit decoupled group method in the solution of elliptic PDEs," Applied Mathematical Sciences , vol. 4, no. 21-24, pp. 1165-1181, 2010.
[14] A. M. Saeed, N. H. M. Ali, "On the convergence of the preconditioned group rotated iterative methods in the solution of elliptic PDEs," Applied Mathematics & Information Sciences , vol. 5, no. 1, pp. 65-73, 2011.
[15] R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, USA, 1962.
[16] W. S. Yousif, D. J. Evans, "Explicit de-coupled group iterative methods and their parallel implementation," Parallel Algorithms and Applications , vol. 7, no. 1-2, pp. 53-71, 1995.
[17] Y. G. Saridakis, "Generalized consistent orderings and the accelerated overrelaxation method," BIT Numerical Mathematics , vol. 26, no. 3, pp. 369-376, 1986.
Copyright © 2012 Norhashidah Hj. Mohd Ali and Abdulkafi Mohammed Saeed.
Abstract
The construction of a specific splitting-type preconditioner in block formulation applied to a class of group relaxation iterative methods derived from the centred and rotated (skewed) finite difference approximations has been shown to improve the convergence rates of these methods. In this paper, we present some theoretical convergence analysis of this preconditioner applied specifically to the linear systems resulting from these group iterative schemes in solving an elliptic boundary value problem. We theoretically establish the relationship between the spectral radii of the iteration matrices of the preconditioned methods, which governs the rate of convergence of these methods. We also show that the spectral radius of the preconditioned matrices is smaller than that of their unpreconditioned counterparts if the relaxation parameter is in a certain optimum range. Numerical experiments are also presented to confirm the agreement between the theoretical and the experimental results.