T. Lotfi,1 F. Soleymani,2 Z. Noori,1 A. Kiliçman,3 and F. Khaksar Haghani2
Academic Editor: Krzysztof Cieplinski
1 Department of Mathematics, Islamic Azad University, Hamedan Branch, Hamedan, Iran
2 Department of Mathematics, Islamic Azad University, Shahrekord Branch, Shahrekord, Iran
3 Department of Mathematics and Institute for Mathematical Research, Universiti Putra Malaysia, 43400 Serdang, Malaysia
Received 2 June 2014; Accepted 20 July 2014; Published 3 September 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Preliminaries
The main goal in constructing iterative methods for solving nonlinear equations is to obtain the highest possible order of convergence at minimal computational cost (see, e.g., [1-3]). Hence, many researchers (see, e.g., [4, 5]) have paid much attention to constructing optimal multipoint methods without memory based on the unproved conjecture of Kung and Traub [6], which states that any multipoint iterative method without memory using $n+1$ functional evaluations can reach at most the optimal order $2^n$.
Let $\alpha$ be a simple real zero of a real function $f: D \subseteq \mathbb{R} \to \mathbb{R}$ and let $x_0$ be an initial approximation to $\alpha$. In many practical situations it is preferable to avoid computing derivatives of $f$, which makes the construction of derivative-free iterative methods important [7].
This paper pursues two main goals: first, to develop optimal derivative-free two- and three-step families of methods without memory; second, to extend the proposed families to methods with memory that reach R-orders of convergence 6 and 12 without any new functional evaluations.
The main idea behind methods with memory is the use of suitable two-valued functions and the variation of a free parameter in each iterative step. This parameter is calculated using information from the current and previous iterations, so the developed schemes may be regarded as methods with memory following Traub's classification [8]. An additional motivation for studying methods with memory arises from the surprising fact that such classes of iterative methods appear very rarely in the literature despite their high efficiency indices.
The paper is organized as follows. First, two families of optimal methods with orders four and eight, consuming three and four function evaluations per iteration, respectively, are proposed in Section 2. Then, in Section 3, we state methods with memory of very high computational efficiency. The increase of convergence speed is achieved without additional function evaluations, which is the main advantage of methods with memory. Section 4 is devoted to numerical results on the order of the methods with and without memory. Concluding remarks are given in Section 5.
2. Construction of New Families
This section deals with constructing new multipoint methods for solving nonlinear equations. The discussion is divided into two subsections. Let the scalar function $f: D \subset \mathbb{R} \to \mathbb{R}$ satisfy $f(\alpha) = 0 \neq f'(\alpha) = c_1$; that is, $\alpha$ is a simple zero of $f(x) = 0$.
2.1. New Two-Step Methods without Memory
In this subsection, we start with the two-step scheme
\[
y = x - \frac{f(x)}{f'(x)}, \qquad \hat{x} = y - \frac{f(y)}{f'(y)}, \tag{1}
\]
where $\hat{x}$ denotes the new approximation.
Note that we omit the iteration index $k$ for the sake of simplicity only. The order of convergence of scheme (1) is four, but it is classical and contains no new contribution. We substitute the derivatives in the first and second steps by suitable approximations that use only available data; thus, we introduce an approximation as follows.
Using the points $x$ and $w = x + \gamma f(x)$ ($\gamma$ a nonzero real constant), we can apply Lagrange's interpolating polynomial for approximating $f'(x)$:
\[
P_1(t) = \frac{t - w}{x - w} f(x) + \frac{t - x}{w - x} f(w). \tag{2}
\]
Setting $f'(x) \approx P_1'(x)$ and $t = x$, we have
\[
f'(x) \approx P_1'(x) = \frac{f(x)}{x - w} + \frac{f(w)}{w - x} = \frac{f(x) - f(w)}{x - w} = f[x, w]. \tag{3}
\]
Also, Lagrange's interpolating polynomial at the points $x$, $w$, and $y$ for approximating $f'(y)$ can be given as follows:
\[
P_2(t) = \frac{(t - w)(t - y)}{(x - w)(x - y)} f(x) + \frac{(t - x)(t - y)}{(w - x)(w - y)} f(w) + \frac{(t - x)(t - w)}{(y - x)(y - w)} f(y). \tag{4}
\]
Differentiating and setting $t = y$, we obtain
\[
f'(y) \approx P_2'(y) = \frac{y - w}{(x - w)(x - y)} f(x) + \frac{y - x}{(w - x)(w - y)} f(w) + \frac{2y - x - w}{(y - x)(y - w)} f(y). \tag{5}
\]
Finally, setting the above approximations in the denominators of (1), our derivative-free two-step iterative method is derived as follows:
\[
w = x + \gamma f(x), \qquad y = x - \frac{f(x)}{f[x, w]}, \qquad \hat{x} = y - \frac{f(y)}{P_2'(y)}. \tag{6}
\]
Now, to check the convergence order of (6), we avoid retyping the approach widely practiced in the literature and instead put forward the following self-explanatory Mathematica code:
f[e_] = f1a*(e + c2*e^2 + c3*e^3 + c4*e^4);
ew = e + γ*f[e]; (* ew = w - α *)
f[x_, y_] := (f[x] - f[y])/(x - y); (* first-order divided difference *)
p1[t_] := ((t - ew)/(e - ew))*f[e] + ((t - e)/(ew - e))*f[ew];
dp1[t_] := f[e]/(e - ew) + f[ew]/(ew - e); (* P1' is constant *)
ey = e - Series[f[e]/dp1[e], {e, 0, 3}] // FullSimplify (* ey = y - α *)
Out[a]: (1 + f1a γ) c2 e^2 + O[e]^3.
p2[t_] := ((t - ew)*(t - ey))/((e - ew)*(e - ey))*f[e] + ((t - e)*(t - ey))/((ew - e)*(ew - ey))*f[ew] + ((t - ew)*(t - e))/((ey - ew)*(ey - e))*f[ey];
dp2[t_] := ((2 t - ew - ey)/((e - ew)*(e - ey)))*f[e] + ((2 t - e - ey)/((ew - e)*(ew - ey)))*f[ew] + ((2 t - ew - e)/((ey - ew)*(ey - e)))*f[ey];
ez = ey - f[ey]/dp2[ey] // FullSimplify
Out[b]: (1 + f1a γ)^2 c2 (c2^2 - c3) e^4 + O[e]^5.
Considering Out[b] of the above Mathematica program, we observe that the convergence order of the family (6) is four, so we can state the following convergence theorem.
Theorem 1.
If an initial approximation $x_0$ is sufficiently close to a simple zero $\alpha$ of $f$, then the convergence order of the two-step scheme (6) is equal to four, and its error equation is given by
\[
e_{k+1} = (1 + \gamma f'(\alpha))^2 c_2 (c_2^2 - c_3) e_k^4 + O(e_k^5). \tag{7}
\]
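To see the scheme in action, the following is a minimal Mathematica sketch of one run of (6); the test function g, the starting value, and the choice of γ are our own illustrative assumptions, not taken from the paper.

g[t_] := Exp[t] - 2 t - 1; (* sample equation with a simple zero near 1.2564 *)
γ0 = -1/100; (* any nonzero real constant *)
x = N[3/2, 300]; (* high-precision initial approximation *)
Do[
 w = x + γ0*g[x];
 y = x - g[x]/((g[w] - g[x])/(w - x)); (* f[x,w] replaces f'(x), cf. (3) *)
 dp2 = ((y - w)/((x - w)*(x - y)))*g[x] + ((y - x)/((w - x)*(w - y)))*g[w] + ((2 y - x - w)/((y - x)*(y - w)))*g[y]; (* P2'(y), cf. (5) *)
 x = y - g[y]/dp2, {4}]
g[x] (* residual after four iterations; shrinks with fourth-order speed *)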
2.2. New Three-Step Family
Now, we construct a three-step uniparametric family of methods based on the two-step method (6). We start from a three-step scheme whose first two steps are given by (6) and whose third step is Newton's method; that is,
\[
w = x + \gamma f(x), \quad y = x - \frac{f(x)}{f[x, w]}, \quad z = y - \frac{f(y)}{P_2'(y)}, \quad \hat{x} = z - \frac{f(z)}{f'(z)}. \tag{8}
\]
The derivative $f'(z)$ in the third step of (8) should be replaced by a suitable approximation in order to reach the highest possible convergence order and to make the scheme optimal. To provide this approximation, we apply Lagrange's interpolating polynomial at the points $w = x + \gamma f(x)$, $x$, $y$, and $z$; that is,
\[
P_3(t) = \frac{(t - x)(t - y)(t - z)}{(w - x)(w - y)(w - z)} f(w) + \frac{(t - w)(t - y)(t - z)}{(x - w)(x - y)(x - z)} f(x) + \frac{(t - w)(t - x)(t - z)}{(y - w)(y - x)(y - z)} f(y) + \frac{(t - w)(t - x)(t - y)}{(z - w)(z - x)(z - y)} f(z). \tag{9}
\]
It is obvious that $P_3(z) = f(z)$. By differentiating (9) and setting $t = z$, we obtain
\[
P_3'(z) = \frac{(z - x)(z - y)}{(w - x)(w - y)(w - z)} f(w) + \frac{(z - w)(z - y)}{(x - w)(x - y)(x - z)} f(x) + \frac{(z - w)(z - x)}{(y - w)(y - x)(y - z)} f(y) + \frac{(z - w)(z - x) + (z - w)(z - y) + (z - x)(z - y)}{(z - w)(z - x)(z - y)} f(z). \tag{10}
\]
By substituting $f'(z) \approx P_3'(z)$ in (8), we have
\[
w = x + \gamma f(x), \quad y = x - \frac{f(x)}{f[x, w]}, \quad z = y - \frac{f(y)}{P_2'(y)}, \quad \hat{x} = z - \frac{f(z)}{P_3'(z)}, \tag{11}
\]
where $P_2'(y)$ is defined by (5) and $P_3'(z)$ is given by (10).
In the following theorem, we state suitable conditions for deriving an optimal three-step scheme without memory.
Theorem 2.
Let $f: D \subset \mathbb{R} \to \mathbb{R}$ be a scalar function having a simple zero $\alpha$ in the open interval $D$, and let the initial approximation $x_0$ be sufficiently close to $\alpha$. Then the three-step iterative method (11) has eighth order and satisfies the error equation
\[
e_{k+1} = (1 + \gamma f'(\alpha))^4 c_2^2 (c_2^2 - c_3)(c_2^3 - c_2 c_3 + c_4) e_k^8 + O(e_k^9). \tag{12}
\]
Proof.
We are going to employ symbolic computation in the computational software package Mathematica. We write
\[
f(x) = f'(\alpha)\left(e + c_2 e^2 + c_3 e^3 + c_4 e^4\right) + O(e^5). \tag{13}
\]
Now, introducing the abbreviations $e = x - \alpha$, $e_w = w - \alpha$, $e_y = y - \alpha$, $e_z = z - \alpha$, and $c_k = f^{(k)}(\alpha)/(k! f'(\alpha))$, we provide the following Mathematica program in order to obtain the convergence order of (11).
Program Written in Mathematica. Consider the following.
f[e_] = f1a*(e + c2*e^2 + c3*e^3 + c4*e^4);
ew = e + γ*f[e];
f[x_, y_] := (f[x] - f[y])/(x - y);
p1[t_] := ((t - ew)/(e - ew))*f[e] + ((t - e)/(ew - e))*f[ew];
dp1[t_] := f[e]/(e - ew) + f[ew]/(ew - e);
ey = e - Series[f[e]/dp1[e], {e, 0, 8}] // FullSimplify
Out[a]: (1 + γ f1a) c2 e^2 + O[e]^3.
p2[t_] := ((t - ew)*(t - ey))/((e - ew)*(e - ey))*f[e] + ((t - e)*(t - ey))/((ew - e)*(ew - ey))*f[ew] + ((t - ew)*(t - e))/((ey - ew)*(ey - e))*f[ey];
dp2[t_] := ((2 t - ew - ey)/((e - ew)*(e - ey)))*f[e] + ((2 t - e - ey)/((ew - e)*(ew - ey)))*f[ew] + ((2 t - ew - e)/((ey - ew)*(ey - e)))*f[ey];
ez = ey - f[ey]/dp2[ey] // FullSimplify
Out[b]: (1 + γ f1a)^2 c2 (c2^2 - c3) e^4 + O[e]^5.
dp3[t_] := ((ew*ey + ew*ez + ey*ez - 2*(ew + ey + ez)*t + 3*t^2)/((e - ew)*(e - ey)*(e - ez)))*f[e] + ((e*ey + e*ez + ey*ez - 2*(e + ey + ez)*t + 3*t^2)/((ew - e)*(ew - ey)*(ew - ez)))*f[ew] + ((e*ew + e*ez + ew*ez - 2*(e + ew + ez)*t + 3*t^2)/((ey - ew)*(ey - e)*(ey - ez)))*f[ey] + ((e*ew + e*ey + ew*ey - 2*(e + ew + ey)*t + 3*t^2)/((ez - ew)*(ez - e)*(ez - ey)))*f[ez];
ex = ez - f[ez]/dp3[ez] // FullSimplify
Out[c]: (1 + γ f1a)^4 c2^2 (c2^2 - c3) (c2^3 - c2 c3 + c4) e^8 + O[e]^9.
According to Out[c], the method (11) possesses eighth order, and the proof of the theorem is finished.
Error equations (7) and (12) indicate that the orders of methods (6) and (11) are four and eight, respectively.
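A corresponding numerical sketch of the eighth-order scheme (11) only adds the step based on (10) to the earlier sketch of (6); again, the test function g, the starting point, and the value of γ are illustrative assumptions of ours.

g[t_] := Exp[t] - 2 t - 1;
γ0 = -1/100; x = N[3/2, 600];
Do[
 w = x + γ0*g[x];
 y = x - g[x]/((g[w] - g[x])/(w - x));
 dp2 = ((y - w)/((x - w)*(x - y)))*g[x] + ((y - x)/((w - x)*(w - y)))*g[w] + ((2 y - x - w)/((y - x)*(y - w)))*g[y];
 z = y - g[y]/dp2;
 dp3 = ((z - x)*(z - y))/((w - x)*(w - y)*(w - z))*g[w] + ((z - w)*(z - y))/((x - w)*(x - y)*(x - z))*g[x] + ((z - w)*(z - x))/((y - w)*(y - x)*(y - z))*g[y] + ((z - w)*(z - x) + (z - w)*(z - y) + (z - x)*(z - y))/((z - w)*(z - x)*(z - y))*g[z]; (* P3'(z), cf. (10) *)
 x = z - g[z]/dp3, {3}]
g[x] (* residual after three iterations of (11) *)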
In the next section, we will modify the proposed methods and introduce new methods with memory. With the use of accelerator parameters, the order of convergence will significantly increase.
3. Extension to Methods with Memory
This section is concerned with extracting efficient methods with memory from (6) and (11) through a careful inspection of their error equations, which contain the parameter $\gamma$; this parameter can be approximated at each step so as to increase the local order of convergence.
Toward this goal, we set $\gamma = \gamma_k$ as the iteration proceeds, using the formula $\gamma_k = -1/\overline{f'(\alpha)}$ for $k = 1, 2, \ldots$, where $\overline{f'(\alpha)}$ is an approximation of $f'(\alpha)$. We therefore use the approximation
\[
\gamma_k = -\frac{1}{N_3'(x_k)} \tag{14}
\]
for (6) and the following one
\[
\gamma_k = -\frac{1}{N_4'(x_k)} \tag{15}
\]
for (11). Here, we consider Newton's interpolating polynomial of third degree for approximating $f'(\alpha)$ in the two-step method (6) and Newton's interpolating polynomial of fourth degree in the three-step method (11), where $N_3(t)$ is set through the four available approximations $x_k, x_{k-1}, y_{k-1}, w_{k-1}$ and $N_4(t)$ is set through the five available approximations $x_k, z_{k-1}, y_{k-1}, w_{k-1}, x_{k-1}$. Consider
\[
N_3(t) = f(x_k) + f[x_k, w_{k-1}](t - x_k) + f[x_k, w_{k-1}, y_{k-1}](t - x_k)(t - w_{k-1}) + f[x_k, w_{k-1}, y_{k-1}, x_{k-1}](t - x_k)(t - w_{k-1})(t - y_{k-1}), \tag{16}
\]
\[
N_4(t) = f(x_k) + f[x_k, z_{k-1}](t - x_k) + f[x_k, z_{k-1}, y_{k-1}](t - x_k)(t - z_{k-1}) + f[x_k, z_{k-1}, y_{k-1}, w_{k-1}](t - x_k)(t - z_{k-1})(t - y_{k-1}) + f[x_k, z_{k-1}, y_{k-1}, w_{k-1}, x_{k-1}](t - x_k)(t - z_{k-1})(t - y_{k-1})(t - w_{k-1}). \tag{17}
\]
Note that a divided difference of order $m$, defined recursively by
\[
f[t_0, t_1, \ldots, t_m] = \frac{f[t_1, \ldots, t_m] - f[t_0, \ldots, t_{m-1}]}{t_m - t_0}, \qquad f[t_0] = f(t_0), \tag{18}
\]
has been used throughout this paper. Hence, the with-memory developments of (6) and (11) can be presented as follows:
\[
w_k = x_k + \gamma_k f(x_k), \quad \gamma_k = -\frac{1}{N_3'(x_k)}, \quad y_k = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \quad x_{k+1} = y_k - \frac{f(y_k)}{P_2'(y_k)}, \tag{19}
\]
\[
w_k = x_k + \gamma_k f(x_k), \quad \gamma_k = -\frac{1}{N_4'(x_k)}, \quad y_k = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \quad z_k = y_k - \frac{f(y_k)}{P_2'(y_k)}, \quad x_{k+1} = z_k - \frac{f(z_k)}{P_3'(z_k)}, \tag{20}
\]
where $P_2'(y_k)$ and $P_3'(z_k)$ are given by (5) and (10) with $x = x_k$, $w = w_k$, $y = y_k$, and $z = z_k$.
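For completeness, the recursion (18) admits a two-line Mathematica transcription, offered as an illustrative helper (the name dd is ours):

dd[f_, {t0_}] := f[t0];
dd[f_, ts_List] := (dd[f, Rest[ts]] - dd[f, Most[ts]])/(Last[ts] - First[ts]);
(* e.g., dd[g, {x, w}] is the first-order divided difference g[x, w] used in (19) *)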
Remark 3.
Methods obtained by recursively calculating a free parameter may also be called self-accelerating methods. The initial value $\gamma_0$ should be chosen before starting the iterative process, for example, by one of the ways proposed in [8].
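In practice, the accelerator (14) is straightforward to compute with the built-in InterpolatingPolynomial; the following sketch uses our own illustrative names (updateGamma, xk, and so on):

updateGamma[f_, xk_, xkm1_, ykm1_, wkm1_] := Module[{t, n3},
 n3 = InterpolatingPolynomial[{{xk, f[xk]}, {wkm1, f[wkm1]}, {ykm1, f[ykm1]}, {xkm1, f[xkm1]}}, t];
 -1/(D[n3, t] /. t -> xk)] (* γk = -1/N3'(xk), cf. (14) *)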
Here, we prove that the methods with memory (19) and (20) have convergence orders six and twelve, respectively, provided that the accelerator $\gamma_k$ is computed as in (14) and (15). For ease of the subsequent analysis, we introduce the following convenient notation: if the sequence $\{x_k\}$ converges to the zero $\alpha$ of $f$ with order $p$, then we write $e_{k+1} \sim e_k^p$, where $e_k = x_k - \alpha$. The following lemma plays a crucial role in improving the convergence order of the methods with memory proposed in this paper.
Lemma 4.
If $\gamma_k = -1/N_3'(x_k)$, $e_k = x_k - \alpha$, $e_{k,y} = y_k - \alpha$, and $e_{k,w} = w_k - \alpha$, then the following relation holds:
\[
1 + \gamma_k f'(\alpha) \sim e_{k-1}\, e_{k-1,w}\, e_{k-1,y}. \tag{21}
\]
Proof.
Following the same terminology as in Theorem 2 and using the symbolic software Mathematica, relation (21) is easily obtained via the following code:
ClearAll["Global`*"]
A[t_] := InterpolatingPolynomial[{{e, fx}, {ew, fw}, {ey, fy}, {e1, fx1}}, t] // Simplify
(* e, ew, ey: errors of the previous step; e1: current error x_k - α *)
Approximation = -1/A'[e1] // Simplify;
fx = f1a*(e + c2*e^2 + c3*e^3 + c4*e^4);
fw = f1a*(ew + c2*ew^2 + c3*ew^3 + c4*ew^4);
fy = f1a*(ey + c2*ey^2 + c3*ey^3 + c4*ey^4);
fx1 = f1a*(e1 + c2*e1^2 + c3*e1^3 + c4*e1^4);
b = Series[Approximation, {e, 0, 2}, {ew, 0, 2}, {ey, 0, 2}, {e1, 0, 2}] // Simplify;
Collect[Series[1 + b*f1a, {e, 0, 1}, {ew, 0, 1}, {ey, 0, 1}, {e1, 0, 0}], {e, ew, ey, e1}, Simplify]
The proof is complete.
In order to obtain the R-order of convergence [9] of the method with memory (19), we establish the following theorem.
Theorem 5.
If an initial approximation x0 is sufficiently close to the zero α of f(x) and the parameter γk in the iterative scheme (19) is recursively calculated by the forms given in (14), then the R-order of convergence for (19) is at least six.
Proof.
Let $\{x_k\}$ be the sequence of approximations generated by the iterative method with memory (19). If this sequence converges to the zero $\alpha$ of $f$ with order $p$, then we write
\[
e_{k+1} \sim e_k^p, \qquad e_k = x_k - \alpha. \tag{22}
\]
Thus
\[
e_{k+1} \sim (e_{k-1}^p)^p = e_{k-1}^{p^2}. \tag{23}
\]
Moreover, assume that the iterative sequences $w_k$ and $y_k$ have orders $p_1$ and $p_2$, respectively. Then (22) gives
\[
e_{k,w} \sim e_k^{p_1} \sim e_{k-1}^{p p_1}, \tag{24}
\]
\[
e_{k,y} \sim e_k^{p_2} \sim e_{k-1}^{p p_2}. \tag{25}
\]
Since
\[
e_{k,w} \sim (1 + \gamma_k f'(\alpha))\, e_k, \tag{26}
\]
\[
e_{k,y} \sim (1 + \gamma_k f'(\alpha))\, c_2 e_k^2, \tag{27}
\]
\[
e_{k+1} \sim (1 + \gamma_k f'(\alpha))^2 c_2 (c_2^2 - c_3) e_k^4, \tag{28}
\]
using Lemma 4, which gives $1 + \gamma_k f'(\alpha) \sim e_{k-1} e_{k-1,w} e_{k-1,y} \sim e_{k-1}^{1 + p_1 + p_2}$, together with (26)-(28), we deduce
\[
e_{k,w} \sim e_{k-1}^{p + 1 + p_1 + p_2}, \tag{29}
\]
\[
e_{k,y} \sim e_{k-1}^{2p + 1 + p_1 + p_2}, \tag{30}
\]
\[
e_{k+1} \sim e_{k-1}^{4p + 2(1 + p_1 + p_2)}. \tag{31}
\]
Matching the powers of $e_{k-1}$ on the right-hand sides of (24)-(29), (25)-(30), and (23)-(31), one obtains
\[
p p_1 = p + 1 + p_1 + p_2, \qquad p p_2 = 2p + 1 + p_1 + p_2, \qquad p^2 = 4p + 2(1 + p_1 + p_2). \tag{32}
\]
The nontrivial solution of this system is $p_1 = 2$, $p_2 = 3$, and $p = 6$. This completes the proof.
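As a quick independent check of the last step, the exponent-matching system (32) can be handed to Mathematica directly; this snippet is our own verification, not part of the original derivation:

Solve[{p*p1 == p + 1 + p1 + p2, p*p2 == 2*p + 1 + p1 + p2, p^2 == 4*p + 2*(1 + p1 + p2), p > 0}, {p, p1, p2}]
(* {{p -> 6, p1 -> 2, p2 -> 3}} *)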
Using symbolic computations and Taylor expansions, it is easy to derive the following lemma.
Lemma 6.
Assuming (15) and (17), we have
\[
1 + \gamma_k f'(\alpha) \sim e_{k-1}\, e_{k-1,w}\, e_{k-1,y}\, e_{k-1,z}, \tag{33}
\]
where $\gamma_k = -1/N_4'(x_k)$, $e_k = x_k - \alpha$, $e_{k,z} = z_k - \alpha$, $e_{k,y} = y_k - \alpha$, and $e_{k,w} = w_k - \alpha$.
Proof.
The proof of this lemma is similar to that of Lemma 4 and is hence omitted.
Similarly for the three-step method with memory (20), we have the following theorem.
Theorem 7.
If an initial approximation x0 is sufficiently close to the zero α of f(x) and the parameter γk in the iterative scheme (20) is recursively calculated by the forms given in (15), then the order of convergence for (20) is at least twelve.
Proof.
Let $\{x_k\}$ be the sequence of approximations generated by the iterative method with memory (20). If this sequence converges to the zero $\alpha$ of $f$ with order $p$, then we write
\[
e_{k+1} \sim e_k^p, \qquad e_k = x_k - \alpha. \tag{34}
\]
So
\[
e_{k+1} \sim e_{k-1}^{p^2}. \tag{35}
\]
Moreover, assume that the iterative sequences $w_k$, $y_k$, and $z_k$ have orders $p_1$, $p_2$, and $p_3$, respectively. Then (34) gives
\[
e_{k,w} \sim e_{k-1}^{p p_1}, \tag{36}
\]
\[
e_{k,y} \sim e_{k-1}^{p p_2}, \tag{37}
\]
\[
e_{k,z} \sim e_{k-1}^{p p_3}. \tag{38}
\]
Since
\[
e_{k,w} \sim (1 + \gamma_k f'(\alpha))\, e_k, \tag{39}
\]
\[
e_{k,y} \sim (1 + \gamma_k f'(\alpha))\, c_2 e_k^2, \tag{40}
\]
\[
e_{k,z} \sim (1 + \gamma_k f'(\alpha))^2 c_2 (c_2^2 - c_3) e_k^4, \tag{41}
\]
\[
e_{k+1} \sim (1 + \gamma_k f'(\alpha))^4 c_2^2 (c_2^2 - c_3)(c_2^3 - c_2 c_3 + c_4) e_k^8, \tag{42}
\]
by Lemma 6, which gives $1 + \gamma_k f'(\alpha) \sim e_{k-1}^{1 + p_1 + p_2 + p_3}$, and (39)-(42), we obtain
\[
e_{k,w} \sim e_{k-1}^{p + 1 + p_1 + p_2 + p_3}, \tag{43}
\]
\[
e_{k,y} \sim e_{k-1}^{2p + 1 + p_1 + p_2 + p_3}, \tag{44}
\]
\[
e_{k,z} \sim e_{k-1}^{4p + 2(1 + p_1 + p_2 + p_3)}, \tag{45}
\]
\[
e_{k+1} \sim e_{k-1}^{8p + 4(1 + p_1 + p_2 + p_3)}. \tag{46}
\]
Matching the powers of $e_{k-1}$ on the right-hand sides of (36)-(43), (37)-(44), (38)-(45), and (35)-(46), one obtains
\[
p p_1 = p + 1 + p_1 + p_2 + p_3, \quad p p_2 = 2p + 1 + p_1 + p_2 + p_3, \quad p p_3 = 4p + 2(1 + p_1 + p_2 + p_3), \quad p^2 = 8p + 4(1 + p_1 + p_2 + p_3). \tag{47}
\]
This system has the nontrivial solution $p_1 = 2$, $p_2 = 3$, $p_3 = 6$, and $p = 12$. The proof is complete.
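The same check applies to the system (47); again, this verification snippet is ours:

Solve[{p*p1 == p + 1 + p1 + p2 + p3, p*p2 == 2*p + 1 + p1 + p2 + p3, p*p3 == 4*p + 2*(1 + p1 + p2 + p3), p^2 == 8*p + 4*(1 + p1 + p2 + p3), p > 0}, {p, p1, p2, p3}]
(* {{p -> 12, p1 -> 2, p2 -> 3, p3 -> 6}} *)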
Remark 8.
The advantage of the proposed methods lies in their higher computational efficiency indices. We emphasize that the increase of the R-order of convergence has been obtained without any additional function evaluations, which points to very high computational efficiency. Indeed, the efficiency index $12^{1/4} \approx 1.861$ of the proposed three-step twelfth-order method with memory (20) is higher than the efficiency index $6^{1/3} \approx 1.817$ of (19), $8^{1/4} \approx 1.682$ of the optimal three-point method (11), and $4^{1/3} \approx 1.587$ of (6).
Remark 9.
We observe that the methods with memory (19) and (20) are considerably accelerated (up to 50%) in contrast to the corresponding methods (6) and (11) without memory.
4. Numerical Experiments
In this section, we test our proposed methods and compare their results with some other methods of the same order of convergence. The results are reported using the programming package Mathematica 8 in a multiple-precision arithmetic environment. We have considered 1000-digit floating-point arithmetic so as to minimize round-off errors as much as possible. The values $|x_k - \alpha|$ are the errors of the approximations to the sought zeros, and $a(-b)$ stands for $a \times 10^{-b}$. Moreover, coc indicates the computational order of convergence and is computed by
\[
\mathrm{coc} \approx \frac{\ln\left(|x_{k+1} - \alpha| / |x_k - \alpha|\right)}{\ln\left(|x_k - \alpha| / |x_{k-1} - \alpha|\right)}. \tag{48}
\]
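For instance, applying (48) to the three tabulated errors of the new method (6) in Table 1 reproduces its coc entry up to the rounding of the printed errors; the helper name coc below is ours:

coc[e1_, e2_, e3_] := Log[e3/e2]/Log[e2/e1]; (* ei = |x_i - α| *)
coc[0.6742*10^-2, 0.1587*10^-10, 0.4307*10^-45] (* ≈ 4.0 *)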
It is assumed that the initial estimate $\gamma_0$ is chosen before starting the iterative process and that $x_0$ is given suitably.
Several iterative methods of optimal orders four and eight have been chosen for comparison with our proposed methods, as follows.
Kung and Traub's derivative-free two-step method (KT4) [6] is as follows: [figure omitted; refer to PDF]
Two-step method by Zheng et al. (ZLH4) [10] is as follows: [figure omitted; refer to PDF]
Two-step method by Lotfi and Tavakoli (LT4) [11] is as follows: [figure omitted; refer to PDF] where $H(t_k, u_k) = 1 + t_k$.
Kung and Traub's derivative-free three-step method (KT8) [6] is as follows: [figure omitted; refer to PDF]
Three-step method developed by Zheng et al. (ZLH8) [10] is as follows: [figure omitted; refer to PDF]
Three-step method by Lotfi and Tavakoli (LT8) [11] is as follows: [figure omitted; refer to PDF] where $W(s_k, v_k) = 1 + s_k^2 + v_k^2$, $G(t_k, s_k) = 1 + t_k + s_k + 2 t_k s_k + (-1 - \varphi_k) t_k^3$ with $\varphi_k = 1/(1 + \gamma_k f[x_k, w_k])$, and $H(t_k, u_k) = 1 + t_k$ are the weight functions.
In Tables 1, 2, and 3, our two-step proposed classes (6) and (19) are compared with the optimal two-step methods KT4, ZLH4, and LT4. We observe that all these methods behave very well in practice and confirm their theoretical results.
Table 1: $f_1(x) = \sin(\pi x)\, e^{x^2 + x\cos(x) - 1} + x \log(x \sin(x) + 1)$, $\alpha = 0$, and $x_0 = 0.6$.
Methods | |x_1 - α| | |x_2 - α| | |x_3 - α| | coc
Without memory:
New method (6), γ = -1 | 0.6742(-2) | 0.1587(-10) | 0.4307(-45) | 4.0049
KT4, γ = -0.01 | 0.6012(-1) | 0.1486(-4) | 0.6094(-19) | 4.0171
ZLH4, γ = -0.01 | 0.5882(-1) | 0.1320(-5) | 0.8291(-24) | 3.9365
LT4, γ = -0.01 | 0.6178(-1) | 0.3334(-4) | 0.2739(-17) | 4.0368
With memory:
New method (19), γ = -1 | 0.6742(-2) | 0.3177(-11) | 0.8372(-62) | 5.4213
KT4, γ = -1 | 0.2102(-1) | 0.9935(-9) | 0.2326(-48) | 5.4031
ZLH4, γ = -0.01 | 0.5882(-1) | 0.2204(-8) | 0.1503(-45) | 5.0216
LT4, γ = -1 | 0.1752(-1) | 0.4934(-9) | 0.5220(-50) | 5.4214
Table 2: $f_2(x) = e^{-5x}(x - 2)(x^{10} + x + 2)$, $\alpha = 2$, and $x_0 = 2.2$.
Methods | |x_1 - α| | |x_2 - α| | |x_3 - α| | coc
Without memory:
New method (6), γ = -0.01 | 0.9445(-3) | 0.1550(-13) | 0.1289(-56) | 3.9945
KT4, γ = -0.01 | 0.1125(-2) | 0.3033(-13) | 0.1891(-55) | 3.9932
ZLH4, γ = -0.1 | 0.9349(-3) | 0.1478(-13) | 0.1074(-56) | 3.9939
LT4, γ = -0.01 | 0.1343(-2) | 0.5965(-13) | 0.2821(-54) | 3.9918
With memory:
New method (19), γ = 1 | 0.1057(-2) | 0.1102(-19) | 0.8335(-109) | 5.2480
KT4, γ = 1 | 0.1267(-2) | 0.2180(-19) | 0.3580(-107) | 5.2364
ZLH4, γ = -0.01 | 0.9445(-3) | 0.2868(-20) | 0.8014(-112) | 5.2264
LT4, γ = -1 | 0.1343(-2) | 0.1072(-19) | 0.1150(-108) | 5.2035
Table 3: $f_3(x) = e^{x^3 - x} - \cos(x^2 - 1) + x^3 + 1$, $\alpha = -1$, and $x_0 = -1.65$.
Methods | |x_1 - α| | |x_2 - α| | |x_3 - α| | coc
Without memory:
New method (6), γ = -0.01 | 0.6849(-3) | 0.8725(-13) | 0.2284(-52) | 4.0003
KT4, γ = -0.01 | 0.6610(-3) | 0.8682(-13) | 0.2564(-52) | 4.0004
ZLH4, γ = 0.1 | 0.1214(-2) | 0.2388(-12) | 0.3560(-51) | 4.0004
LT4, γ = -0.01 | 0.6398(-3) | 0.8549(-13) | 0.2714(-52) | 4.0002
With memory:
New method (19), γ = -1 | 0.4838(-2) | 0.1348(-11) | 0.1230(-64) | 5.5506
KT4, γ = 1 | 0.1170(-1) | 0.3517(-7) | 0.8936(-41) | 6.0860
ZLH4, γ = -0.01 | 0.6849(-3) | 0.7630(-17) | 0.1662(-92) | 5.4226
LT4, γ = -0.01 | 0.6398(-3) | 0.6666(-17) | 0.7373(-93) | 5.4324
Tables 4, 5, and 6 present numerical results for our three-step classes (11) and (20) and the methods KT8, ZLH8, and LT8. It is also clear that all these methods behave very well in practice and confirm the relevant theory.
Table 4: $f_1(x) = \sin(\pi x)\, e^{x^2 + x\cos(x) - 1} + x \log(x \sin(x) + 1)$, $\alpha = 0$, and $x_0 = 0.6$.
Methods | |x_1 - α| | |x_2 - α| | |x_3 - α| | coc
Without memory:
New method (11), γ = -1 | 0.2345(-3) | 0.1042(-32) | 0.1593(-267) | 7.9999
KT8, γ = -1 | 0.3101(-3) | 0.2671(-31) | 0.8196(-256) | 7.9999
ZLH8, γ = -0.01 | 0.1742(-2) | 0.1558(-22) | 0.6471(-183) | 7.9999
LT8, γ = -1 | 0.5758(-3) | 0.7106(-29) | 0.3880(-236) | 7.9998
With memory:
New method (20), γ = -0.1 | 0.3440(-3) | 0.1220(-41) | 0.8636(-503) | 11.9935
KT8, γ = -0.01 | 0.9773(-3) | 0.2960(-33) | 0.1505(-399) | 12.0021
ZLH8, γ = -0.01 | 0.1742(-2) | 0.3012(-33) | 0.4438(-402) | 11.9900
LT8, γ = -0.1 | 0.7107(-4) | 0.2040(-49) | 0.4971(-596) | 12.0024
Table 5: $f_2(x) = e^{-5x}(x - 2)(x^{10} + x + 2)$, $\alpha = 2$, and $x_0 = 2.2$.
Methods | |x_1 - α| | |x_2 - α| | |x_3 - α| | coc
Without memory:
New method (11), γ = -1 | 0.3124(-6) | 0.9972(-56) | 0.1074(-451) | 8.0000
KT8, γ = -1 | 0.8055(-6) | 0.1413(-52) | 0.1268(-426) | 8.0000
ZLH8, γ = 1 | 0.4955(-6) | 0.5798(-54) | 0.2038(-437) | 8.0000
LT8, γ = -1 | 0.4052(-6) | 0.5819(-55) | 0.1053(-445) | 8.0000
With memory:
New method (20), γ = -1 | 0.3124(-6) | 0.1213(-81) | 0.3093(-985) | 11.9822
KT8, γ = -1 | 0.8055(-6) | 0.2641(-78) | 0.9143(-945) | 11.9538
ZLH8, γ = -0.01 | 0.3943(-6) | 0.1889(-80) | 0.6308(-971) | 11.9817
LT8, γ = -0.01 | 0.4532(-6) | 0.4774(-76) | 0.5848(-853) | 11.1023
Table 6: $f_3(x) = e^{x^3 - x} - \cos(x^2 - 1) + x^3 + 1$, $\alpha = -1$, and $x_0 = -1.65$.
Methods | |x_1 - α| | |x_2 - α| | |x_3 - α| | coc
Without memory:
New method (11), γ = -1 | 0.2768(-3) | 0.3670(-27) | 0.3499(-218) | 8.0000
KT8, γ = -1 | 0.2915(-4) | 0.5324(-34) | 0.6583(-272) | 8.0000
ZLH8, γ = 1 | 0.2322(-1) | 0.5044(-11) | 0.2253(-88) | 8.0081
LT8, γ = -1 | 0.1094(-2) | 0.1454(-22) | 0.1323(-181) | 8.0014
With memory:
New method (20), γ = -1 | 0.2768(-3) | 0.2660(-42) | 0.1805(-509) | 11.9734
KT8, γ = -1 | 0.2915(-4) | 0.5214(-53) | 0.1157(-642) | 12.0961
ZLH8, γ = -0.01 | 0.6052(-5) | 0.2355(-65) | 0.4206(-786) | 11.9310
LT8, γ = -1 | 0.1094(-2) | 0.3522(-34) | 0.1717(-415) | 12.1081
We remark on the importance of the choice of initial guesses. If they are chosen sufficiently close to the sought roots, then the expected (theoretical) convergence speed is reached in practice; otherwise, all iterative root-finding methods show slower convergence, especially at the beginning of the iterative process. Hence, special attention should be paid to finding good initial approximations. We note that efficient ways to determine initial approximations of great accuracy were discussed thoroughly in [12-14].
5. Conclusions
In this paper, we have constructed two families of derivative-free iterative methods without memory which are optimal in the sense of the Kung-Traub conjecture.
In addition, they contain an accelerator parameter that raises the convergence order without any new functional evaluations. In particular, the efficiency index of the three-step method with memory reaches $12^{1/4} \approx 1.861$.
We finalize this work by suggesting some directions for future research: first, developing the proposed methods for matrix functions, such as the ones in [15, 16]; second, exploring their dynamics and basins of attraction; and lastly, extending the developed methods with memory using two or three accelerators.
Acknowledgments
The first author thanks Islamic Azad University, Hamedan Branch, for financial support of the present research. The authors are also thankful for the insightful comments of three referees, who helped improve the readability and reliability of the present paper. The fourth author gratefully acknowledges that this research was partially supported by Universiti Putra Malaysia under the GP-IBT Grant Scheme, project number GP-IBT/2013/9420100.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] N. Kyurkchiev, A. Iliev, "On some multipoint methods arising from optimal in the sense of Kung-Traub algorithms," Biomath , vol. 2, no. 1, 2013.
[2] T. Lotfi, F. Soleymani, S. Sharifi, S. Shateyi, F. K. Haghani, "Multi-point iterative methods for finding all the simple zeros in an interval," Journal of Applied Mathematics , vol. 2014, 2014.
[3] J. F. Traub, G. W. Wasilkowski, H. Wozniakowski, Information, Uncertainty, Complexity, Addison-Wesley, 1983.
[4] M. S. Rhee, Y. I. Kim, "A general class of optimal fourth-order multiple-root finders without memory for nonlinear equations," Applied Mathematical Sciences , vol. 7, pp. 5537-5551, 2013.
[5] H. Wozniakowski, J. F. Traub, "Maximal order of multipoint iterations using n evaluations," Analytic Computational Complexity , pp. 75-107, Academic Press, New York, NY, USA, 1976.
[6] H. T. Kung, J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery , vol. 21, pp. 643-651, 1974.
[7] Y. H. Geum, Y. I. Kim, "An optimal family of fast 16th-order derivative-free multipoint simple-root finders for nonlinear equations," Journal of Optimization Theory and Applications , vol. 160, no. 2, pp. 608-622, 2014.
[8] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New York, NY, USA, 1964.
[9] J. M. Ortega, W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, 1970.
[10] Q. Zheng, J. Li, F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," Applied Mathematics and Computation , vol. 217, no. 23, pp. 9592-9597, 2011.
[11] T. Lotfi, E. Tavakoli, "On a new efficient Steffensen-like iterative class by applying a suitable self-accelerator parameter," The Scientific World Journal , vol. 2014, 2014.
[12] F. Soleymani, S. Shateyi, "Two optimal eighth-order derivative-free classes of iterative methods," Abstract and Applied Analysis , vol. 2012, 2012.
[13] F. Soleymani, D. K. R. Babajee, "Computing multiple zeros using a class of quartically convergent methods," Alexandria Engineering Journal , vol. 52, no. 3, pp. 531-541, 2013.
[14] F. Soleymani, "Some high-order iterative methods for finding all the real zeros," Thai Journal of Mathematics , vol. 12, pp. 313-327, 2014.
[15] F. Soleymani, E. Tohidi, S. Shateyi, F. K. Haghani, "Some matrix iterations for computing matrix sign function," Journal of Applied Mathematics , vol. 2014, 2014.
[16] F. Soleymani, P. S. Stanimirovic, S. Shateyi, F. K. Haghani, "Approximating the matrix sign function using a novel iterative method," Abstract and Applied Analysis , vol. 2014, 2014.
Copyright © 2014 T. Lotfi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Two families of derivative-free methods without memory for approximating a simple zero of a nonlinear equation are presented. The proposed schemes have an accelerator parameter with the property that it can increase the convergence rate without any new functional evaluations. In this way, we construct a method with memory that considerably increases the efficiency index, from $8^{1/4} \approx 1.682$ to $12^{1/4} \approx 1.861$. Numerical examples and comparisons with existing methods are included to confirm the theoretical results and the high computational efficiency.