New Predictor-Corrector Iterative Methods with
Twelfth-Order Convergence for Solving Nonlinear Equations
Noori Yasir Abdul-Hassan
Department of Mathematics, College of Education for Pure Sciences, University of Basrah, Basrah, Iraq
To cite this article:
Noori Yasir Abdul-Hassan. New Predictor-Corrector Iterative Methods with Twelfth-Order Convergence for Solving Nonlinear Equations. American Journal of Applied Mathematics. Vol. 4, No. 4, 2016, pp. 175-180. doi: 10.11648/j.ajam.20160404.12
Received: May 16, 2016; Accepted: May 31, 2016; Published: June 17, 2016
Abstract: In this paper, we propose and analyze two new efficient iterative methods for finding the simple roots of nonlinear equations. These methods are based on Jarratt's method, Householder's method and Chun and Kim's method, combined by a predictor-corrector technique. The error equations are derived theoretically to show that the proposed methods have twelfth-order convergence. Several numerical examples are given to illustrate the efficiency and robustness of the proposed methods, and a comparison with other well-known iterative methods is made.
Keywords: Nonlinear Equations, Predictor-Corrector Methods, Convergence Analysis, Efficiency Index, Numerical Examples
1. Introduction
The problem of solving a single nonlinear equation f(x) = 0 is fundamental in various branches of science and engineering. Many numerical iterative methods have been developed to solve such equations. These methods are constructed using several different techniques, such as Taylor series, quadrature formulas, the homotopy perturbation technique and its variant forms, decomposition techniques, the variational iteration technique, and the predictor-corrector technique; for more details, see [1-4,6,32]. In this paper, we use the predictor-corrector technique to construct new iterative methods based on Jarratt's method as a predictor, with Householder's method and Chun and Kim's method as correctors. The orders of convergence and the corresponding error equations of the obtained iteration formulae are derived analytically to show that our proposed methods have twelfth-order convergence. Each of these methods requires two evaluations of the function, three evaluations of the first derivative and one evaluation of the second derivative per iteration. Therefore, both proposed methods have the same efficiency index, 12^{1/6} ≈ 1.51309. To illustrate the performance of these new methods, we give several examples and a comparison with other well-known iterative methods.
2. Preliminaries
Definition 2.1 (see [12, 32]): Let α, x_{n} ∈ ℝ, n = 0, 1, 2,…. Then the sequence {x_{n}} is said to converge to α if lim_{n→∞} |x_{n} − α| = 0. If, in addition, there exist a constant c ≥ 0, an integer n_{0} ≥ 0 and p ≥ 0 such that for all n > n_{0},
|x_{n+1} − α| ≤ c|x_{n} − α|^{p}, then {x_{n}} is said to converge to α with convergence order at least p. If p = 2 or 3, the convergence is said to be quadratic or cubic, respectively.
Notation 2.1: Let e_{n} = x_{n} − α be the error in the n^{th} iteration. Then the relation
e_{n+1} = c e_{n}^{p} + O(e_{n}^{p+1}) (2.1)
is called the error equation for the method. By substituting x_{n} = α + e_{n} for all n in any iterative method and simplifying, we obtain the error equation for that method. The value of p so obtained is called the order of convergence of the method which produces the sequence {x_{n}}.
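In practice, the order p can also be estimated numerically from successive errors, since |e_{n+1}| ≈ c|e_{n}|^{p} implies p ≈ log(|e_{n+1}|/|e_{n}|)/log(|e_{n}|/|e_{n−1}|). The following Python sketch illustrates this for Newton's method (the function names are ours, for illustration only; the paper's own experiments use MATLAB):

```python
import math

def newton_errors(f, df, x0, root, nsteps):
    """Run Newton's method and record the error |x_n - root| at each step."""
    errs = []
    x = x0
    for _ in range(nsteps):
        x = x - f(x) / df(x)
        errs.append(abs(x - root))
    return errs

def estimate_order(e_prev, e_mid, e_next):
    """Estimate p from three successive errors via
    p ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    return math.log(e_next / e_mid) / math.log(e_mid / e_prev)

# Example: Newton's method on f(x) = x^2 - 2, whose simple root is sqrt(2).
errs = newton_errors(lambda x: x * x - 2, lambda x: 2 * x, 3.0, math.sqrt(2), 5)
p = estimate_order(errs[1], errs[2], errs[3])
```

For Newton's method the estimate comes out close to 2, consistent with the quadratic convergence of Definition 2.1.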
Definition 2.2 (see [3,6]): Efficiency index is simply defined as
E.I.=p^{1/m} (2.2)
where p is the order of the method and m is the number of function evaluations required by the method (units of work per iteration).
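Equation (2.2) is a one-line computation; the following Python snippet (ours, for illustration) reproduces the efficiency indices quoted in this paper:

```python
def efficiency_index(p, m):
    """Efficiency index E.I. = p**(1/m), where p is the order of convergence
    and m is the number of function/derivative evaluations per iteration."""
    return p ** (1.0 / m)

# Values quoted in the text:
#   Newton:                        p = 2,  m = 2 -> 1.41421
#   Jarratt:                       p = 4,  m = 3 -> 1.58740
#   Halley/Householder/Chun-Kim:   p = 3,  m = 3 -> 1.44225
#   Proposed twelfth-order methods: p = 12, m = 6 -> 1.51309
```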
3. Construction of the Method
In this section, we recall some important methods, namely Newton's method, Jarratt's method, Halley's method, Householder's method, Chun and Kim's method and the Jarratt-Halley method, in the following six algorithms:
Algorithm (3.1): For a given x_{0}, compute the approximate solution x_{n+1} by the iterative scheme:
x_{n+1} = x_{n} - f(x_{n})/f'(x_{n}), n = 0, 1, 2, … (3.1)
This is the well-known Newton's method, which has quadratic convergence [5]. Its efficiency index is 1.41421.
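A minimal Python sketch of the scheme (3.1) (our illustration; the paper's programs are in MATLAB):

```python
def newton(f, df, x0, tol=1e-15, maxit=100):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:   # stop once the Newton step is negligible
            break
    return x
```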
Algorithm (3.2): For a given x_{0}, compute the approximate solution x_{n+1} by the iterative scheme:
y_{n} = x_{n} - (2/3) f(x_{n})/f'(x_{n}),
x_{n+1} = x_{n} - [(3f'(y_{n}) + f'(x_{n}))/(6f'(y_{n}) - 2f'(x_{n}))] f(x_{n})/f'(x_{n}). (3.2)
This is known as Jarratt's fourth-order method [4, 8]. Its efficiency index is 1.58740.
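The scheme (3.2) can be sketched in Python as follows, assuming the standard form of Jarratt's fourth-order method with predictor y_{n} = x_{n} - (2/3)f(x_{n})/f'(x_{n}) (an illustration only; the paper's code is MATLAB):

```python
def jarratt(f, df, x0, tol=1e-15, maxit=100):
    """Jarratt's fourth-order method: two evaluations of f' and one of f."""
    x = x0
    for _ in range(maxit):
        u = f(x) / df(x)
        y = x - (2.0 / 3.0) * u
        x_new = x - (3 * df(y) + df(x)) / (6 * df(y) - 2 * df(x)) * u
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```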
Algorithm (3.3): For a given x_{0}, compute the approximate solution x_{n+1} by the iterative scheme:
x_{n+1} = x_{n} - 2f(x_{n})f'(x_{n})/(2f'(x_{n})^{2} - f(x_{n})f''(x_{n})). (3.3)
This is known as Halley's method [9,10,11,13,23], which has cubic convergence. Its efficiency index is 1.44225.
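A Python sketch of Halley's iteration, assuming its classical form x_{n+1} = x_{n} - 2ff'/(2f'^{2} - ff'') (our illustration):

```python
def halley(f, df, d2f, x0, tol=1e-15, maxit=100):
    """Halley's third-order method: x - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(maxit):
        fx, dfx = f(x), df(x)
        x_new = x - 2 * fx * dfx / (2 * dfx ** 2 - fx * d2f(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```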
Algorithm (3.4): For a given x_{0}, compute the approximate solution x_{n+1} by the iterative scheme:
x_{n+1} = x_{n} - [f(x_{n})/f'(x_{n})][1 + f(x_{n})f''(x_{n})/(2f'(x_{n})^{2})]. (3.4)
This is known as Householder's method, which has cubic convergence [21]. Its efficiency index is 1.44225.
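A Python sketch of Householder's iteration, assuming its classical form x_{n+1} = x_{n} - (f/f')(1 + ff''/(2f'^{2})) (our illustration):

```python
def householder(f, df, d2f, x0, tol=1e-15, maxit=100):
    """Householder's third-order method: x - (f/f') (1 + f f'' / (2 f'^2))."""
    x = x0
    for _ in range(maxit):
        fx, dfx = f(x), df(x)
        u = fx / dfx
        x_new = x - u * (1 + fx * d2f(x) / (2 * dfx ** 2))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```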
Algorithm (3.5): For a given x_{0}, compute the approximate solution x_{n+1} by the iterative scheme:
(3.5)
This is the third-order method of Chun and Kim [6,14]. Its efficiency index is 1.44225.
Algorithm (3.6): For a given x_{0}, compute the approximate solution x_{n+1} by the iterative scheme:
y_{n} = x_{n} - (2/3) f(x_{n})/f'(x_{n}), (3.6.a)
z_{n} = x_{n} - [(3f'(y_{n}) + f'(x_{n}))/(6f'(y_{n}) - 2f'(x_{n}))] f(x_{n})/f'(x_{n}), (3.6.b)
x_{n+1} = z_{n} - 2f(z_{n})f'(z_{n})/(2f'(z_{n})^{2} - f(z_{n})f''(z_{n})). (3.6.c)
This is known as the Jarratt-Halley method [2], which has twelfth-order convergence. Its efficiency index is 1.513086.
Now, we present the following two new predictor-corrector iterative methods with twelfth-order convergence, based on a combination of Jarratt's method with each of Householder's method and Chun and Kim's method, using Algorithm (3.2) as a predictor and Algorithm (3.4) or Algorithm (3.5) as a corrector, for solving the nonlinear equation f(x) = 0.
Algorithm (3.7): For a given x_{0}, compute the approximate solution x_{n+1} by the iterative scheme:
y_{n} = x_{n} - (2/3) f(x_{n})/f'(x_{n}), (3.7.a)
z_{n} = x_{n} - [(3f'(y_{n}) + f'(x_{n}))/(6f'(y_{n}) - 2f'(x_{n}))] f(x_{n})/f'(x_{n}), (3.7.b)
x_{n+1} = z_{n} - [f(z_{n})/f'(z_{n})][1 + f(z_{n})f''(z_{n})/(2f'(z_{n})^{2})]. (3.7.c)
This is called the predictor-corrector Jarratt-Householder method (JHHM), which has twelfth-order convergence. Its efficiency index is 1.513086.
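Algorithm (3.7) can be sketched in Python as follows, assuming the standard Jarratt predictor and Householder corrector steps (an illustration only; the paper's experiments use MATLAB):

```python
def jhhm(f, df, d2f, x0, tol=1e-15, maxit=50):
    """Predictor-corrector JHHM: Jarratt predictor, Householder corrector."""
    x = x0
    for _ in range(maxit):
        u = f(x) / df(x)
        y = x - (2.0 / 3.0) * u                                  # predictor, step 1
        z = x - (3 * df(y) + df(x)) / (6 * df(y) - 2 * df(x)) * u  # fourth-order Jarratt point
        fz, dfz = f(z), df(z)
        x_new = z - (fz / dfz) * (1 + fz * d2f(z) / (2 * dfz ** 2))  # Householder corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Starting from x_{0} = 1.5 on f(x) = x^{2} - 2, a single predictor-corrector step already reaches the root essentially to double-precision accuracy, which is the practical signature of the very high order.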
Algorithm (3.8): For a given x_{0}, compute the approximate solution x_{n+1} by the iterative scheme:
y_{n} = x_{n} - (2/3) f(x_{n})/f'(x_{n}), (3.8.a)
z_{n} = x_{n} - [(3f'(y_{n}) + f'(x_{n}))/(6f'(y_{n}) - 2f'(x_{n}))] f(x_{n})/f'(x_{n}), (3.8.b)
(3.8.c)
This is called the predictor-corrector Jarratt-Chun-Kim method (JCKM), which has twelfth-order convergence. Its efficiency index is 1.513086.
4. Convergence Analysis of the Methods
In this section, we compute the orders of convergence and the corresponding error equations of the proposed methods (Algorithm (3.7) and Algorithm (3.8)) as follows.
Theorem 4.1: Let α ∈ I be a simple zero of a sufficiently differentiable function f : I → ℝ on an open interval I. If x_{0} is sufficiently close to α, then the iterative method defined by Algorithm (3.7) is of order twelve and satisfies the following error equation:
e_{n+1}= (c_{3}^{4}c_{2}^{3}- 5c_{3}^{3}c_{2}^{5}+ 9c_{3}^{2}c_{2}^{7}- (1/729)c_{3}c_{4}^{3}- 7c_{3}c_{2}^{9}+ (2/729)c_{4}^{3}c_{2}^{2}+ (2/27)c_{4}^{2}c_{2}^{5}+(2/3)c_{4}c_{2}^{8}+ 2c_{2}^{11}- (1/9)c_{3}c_{4}^{2}c_{2}^{3}-(5/3)c_{3}c_{4}c_{2}^{6}
+ (1/27)c_{4}^{2}c_{2}c_{3}^{2}- (1/3)c_{4}c_{3}^{3}c_{2}^{2}+ (4/3)c_{4}c_{3}^{2}c_{2}^{4})e^{12}+ O(e^{13}) (4.1)
where c_{k} = f^{(k)}(α)/(k! f'(α)), k = 2, 3, …, and
e = x_{n} - α. (4.2)
Proof: Let α be a simple zero of f. Then, expanding f(x_{n}) and f'(x_{n}) in Taylor series about α, we get
f(x_{n}) = f'(α){e+ c_{2}e^{2} + c_{3}e^{3}+ c_{4}e^{4}+ c_{5}e^{5}+…} (4.3)
f'(x_{n}) = f'(α){1+ 2c_{2}e+ 3c_{3}e^{2}+ 4c_{4}e^{3}+ 5c_{5}e^{4}+6c_{6}e^{5}+…} (4.4)
From (4.3) and (4.4), we have
f(x_{n})/f'(x_{n}) = e- c_{2}e^{2}+ (-2c_{3}+2c_{2}^{2})e^{3}+ (7c_{2}c_{3}-3c_{4}-4c_{2}^{3})e^{4}+ (10c_{2}c_{4}-4c_{5}+6c_{3}^{2}-20c_{3}c_{2}^{2}+8c_{2}^{4})e^{5}+ … (4.5)
Also,
(2/3) f(x_{n})/f'(x_{n}) = (2/3)e- (2/3)c_{2}e^{2}+ (-(4/3)c_{3}+ (4/3)c_{2}^{2})e^{3}+ ((14/3)c_{2}c_{3}-2c_{4}- (8/3)c_{2}^{3})e^{4}+ ((20/3)c_{2}c_{4}- (8/3)c_{5}
+ 4c_{3}^{2}- (40/3)c_{3}c_{2}^{2}+ (16/3)c_{2}^{4})e^{5}+… (4.6)
Substituting (4.6) and (4.2) into (3.7.a), and simplifying, we have
y_{n} = α+(1/3)e+ (2/3)c_{2}e^{2}+ ((4/3)c_{3}- (4/3)c_{2}^{2})e^{3}+ (-(14/3)c_{2}c_{3}+ 2c_{4}+ (8/3)c_{2}^{3})e^{4}+ (-(20/3)c_{2}c_{4}
+ (8/3)c_{5}-4c_{3}^{2}+ (40/3)c_{3}c_{2}^{2}- (16/3)c_{2}^{4})e^{5}+… (4.7)
From (4.7), using Taylor's expansion and simplifying, we have
f'(y_{n}) = f'(α){1+ (2/3)c_{2}e+ ((4/3)c_{2}^{2}+ (1/3)c_{3})e^{2}+ (4c_{2}c_{3}+ (4/27)c_{4} -(8/3)c_{2}^{3})e^{3}+ ((44/9)c_{2}c_{4}- (32/3)c_{3}c_{2}^{2}
+ (8/3)c_{3}^{2}+ (16/3)c_{2}^{4})e^{4}+ ((52/9)c_{4}c_{3}- (40/3)c_{4}c_{2}^{2}+ (16/3)c_{2}c_{5}- 12c_{2}c_{3}^{2}+ (80/3)c_{3}c_{2}^{3}- (32/3)c_{2}^{5})e^{5} +… } (4.8)
Combining (4.8) and (4.4), using Taylor's expansion and simplifying, we have
3f'(y_{n}) + f'(x_{n}) = f'(α){4+ 4c_{2}e+ (4c_{2}^{2}+4c_{3})e^{2}+ ((40/9)c_{4}+ 12c_{2}c_{3}- 8c_{2}^{3})e^{3} + (5c_{5}+ (44/3)c_{2}c_{4}- 32c_{3}c_{2}^{2}+ 8c_{3}^{2}+ 16c_{2}^{4})e^{4}
+ ((52/3)c_{4}c_{3}- 40c_{4}c_{2}^{2}+ 16c_{2}c_{5}- 36c_{2}c_{3}^{2}+ 80c_{3}c_{2}^{3}- 32c_{2}^{5}+ 6c_{6})e^{5}+ …} (4.9)
Also,
6f'(y_{n}) - 2f'(x_{n}) = f'(α){4+ (8c_{2}^{2}-4c_{3})e^{2}+ (-(64/9)c_{4}+ 24c_{2}c_{3}- 16c_{2}^{3})e^{3}+ (-10c_{5}+(88/3)c_{2}c_{4}- 64c_{3}c_{2}^{2}+ 16c_{3}^{2}+ 32c_{2}^{4})e^{4}
+ ((104/3)c_{4}c_{3}- 80c_{4}c_{2}^{2}+ 32c_{2}c_{5}- 72c_{2}c_{3}^{2}+ 160c_{3}c_{2}^{3}-64c_{2}^{5}-12c_{6})e^{5}+ …} (4.10)
Dividing (4.9) by (4.10), using Taylor's expansion and simplifying, we have
(3f'(y_{n})+f'(x_{n}))/(6f'(y_{n})-2f'(x_{n})) = 1+ c_{2}e+ (-c_{2}^{2}+2c_{3})e^{2}+ ((26/9)c_{4}- 2c_{2}c_{3})e^{3}+ (- (17/9)c_{2}c_{4}- 3c_{3}c_{2}^{2}+ 2c_{2}^{4}+ (15/4)c_{5})e^{4}
+ (-(3/2)c_{2}c_{5}- (44/9)c_{4}c_{2}^{2}+14c_{3}c_{2}^{3}- 9c_{2}c_{3}^{2}- 4c_{2}^{5}+ (19/9)c_{4}c_{3}+ (9/2)c_{6})e^{5}+… (4.11)
Combining (4.5) and (4.11), using Taylor's expansion and simplifying, we have
[(3f'(y_{n})+f'(x_{n}))/(6f'(y_{n})-2f'(x_{n}))]·[f(x_{n})/f'(x_{n})] = e+ (c_{2}c_{3}-c_{2}^{3}- (1/9) c_{4}) e^{4}+ ((20/9) c_{2}c_{4}- (1/4)c_{5}+ 2c_{3}^{2}- 8c_{3}c_{2}^{2}+ 4c_{2}^{4})e^{5} +… (4.12)
Substituting (4.12) and (4.2) into (3.7.b), and simplifying, we have
z_{n}= α+ (-c_{2}c_{3}+c_{2}^{3}+ (1/9) c_{4}) e^{4}+ (-(20/9) c_{2}c_{4}+ (1/4) c_{5}-2c_{3}^{2}+8c_{3}c_{2}^{2}-4c_{2}^{4}) e^{5}+ … (4.13)
From (4.13), using Taylor's expansion and simplifying, we have
f(z_{n}) = f'(α){(-c_{2}c_{3}+ c_{2}^{3}+ (1/9) c_{4}) e^{4}+ (-(20/9)c_{2}c_{4}+ (1/4) c_{5}-2c_{3}^{2}+ 8c_{3}c_{2}^{2}- 4c_{2}^{4})e^{5}+ …} (4.14)
And,
f'(z_{n}) = f'(α){1+ (-2c_{2}^{2}c_{3}+2c_{2}^{4}+ (2/9) c_{2}c_{4}) e^{4}- ((40/9) c_{2}^{2}c_{4}-(1/2) c_{2}c_{5}+ 4c_{2}c_{3}^{2}- 16c_{3}c_{2}^{3}+ 8c_{2}^{5}) e^{5}+ …} (4.15)
Also,
f''(z_{n}) = f'(α){2c_{2}+ (-6c_{2}c_{3}^{2}+6c_{3}c_{2}^{3}+ (2/3) c_{3}c_{4}) e^{4}- ((40/3) c_{2}c_{3}c_{4}-(3/2) c_{3}c_{5}+12c_{3}^{3}-48c_{3}^{2}c_{2}^{2}+ 24c_{3}c_{2}^{4}) e^{5}+ …} (4.16)
From (4.14), (4.15) and (4.16), using Taylor's expansion and simplifying, we have
f(z_{n})/f'(z_{n}) = (-c_{2}c_{3}+ c_{2}^{3}+(1/9) c_{4}) e^{4}+ (-(20/9) c_{2}c_{4}+ (1/4) c_{5}- 2c_{3}^{2}+ 8c_{3}c_{2}^{2}- 4c_{2}^{4}) e^{5}+ … (4.17)
And,
1+ f(z_{n})f''(z_{n})/(2f'(z_{n})^{2}) = 1+ (-c_{2}^{2}c_{3}+c_{2}^{4}+(1/9) c_{2}c_{4}) e^{4}- ((20/9) c_{2}^{2}c_{4}- (1/4) c_{2}c_{5}+ 2c_{2}c_{3}^{2}- 8c_{3}c_{2}^{3}+ 4c_{2}^{5}) e^{5}+ … (4.18)
Combining (4.17) and (4.18), using Taylor's expansion and simplifying, we have
[f(z_{n})/f'(z_{n})]·[1+ f(z_{n})f''(z_{n})/(2f'(z_{n})^{2})] = (-c_{2}c_{3}+c_{2}^{3}+ (1/9) c_{4}) e^{4}+ (-(20/9) c_{2}c_{4}+ (1/4) c_{5}- 2c_{3}^{2}+ 8c_{3}c_{2}^{2}- 4c_{2}^{4}) e^{5}+ … (4.19)
Thus, substituting (4.13) and (4.19) into (3.7.c), using Taylor's expansion and simplifying, we have
x_{n+1} = α + (c_{3}^{4}c_{2}^{3}- 5c_{3}^{3}c_{2}^{5}+ 9c_{3}^{2}c_{2}^{7}- (1/729)c_{3}c_{4}^{3}- 7c_{3}c_{2}^{9}+ (2/729)c_{4}^{3}c_{2}^{2}+ (2/27)c_{4}^{2}c_{2}^{5}+ (2/3)c_{4}c_{2}^{8} +2c_{2}^{11}
-(1/9)c_{3}c_{4}^{2}c_{2}^{3}- (5/3)c_{3}c_{4}c_{2}^{6}+ (1/27)c_{4}^{2}c_{2}c_{3}^{2}- (1/3)c_{4}c_{3}^{3}c_{2}^{2}+ (4/3)c_{4}c_{3}^{2}c_{2}^{4})e^{12}+ O(e^{13}) (4.20)
which implies that
e_{n+1 }= (c_{3}^{4}c_{2}^{3}- 5c_{3}^{3}c_{2}^{5}+ 9c_{3}^{2}c_{2}^{7}- (1/729)c_{3}c_{4}^{3}- 7c_{3}c_{2}^{9}+ (2/729)c_{4}^{3}c_{2}^{2}+ (2/27)c_{4}^{2}c_{2}^{5}+ (2/3)c_{4}c_{2}^{8}+ 2c_{2}^{11}
- (1/9)c_{3}c_{4}^{2}c_{2}^{3}- (5/3)c_{3}c_{4}c_{2}^{6}+ (1/27)c_{4}^{2}c_{2}c_{3}^{2}- (1/3)c_{4}c_{3}^{3}c_{2}^{2}+ (4/3)c_{4}c_{3}^{2}c_{2}^{4})e^{12}+ O(e^{13}) (4.21)
This shows that Algorithm (3.7) is twelfth-order convergent.
Theorem 4.2: Let α ∈ I be a simple zero of a sufficiently differentiable function f : I → ℝ on an open interval I. If x_{0} is sufficiently close to α, then the iterative method defined by Algorithm (3.8) is of order twelve and satisfies the following error equation:
e_{n+1}= (c_{3}^{4}c_{2}^{3}- (9/2)c_{3}^{3}c_{2}^{5}+ (15/2)c_{3}^{2}c_{2}^{7}- (1/729)c_{3}c_{4}^{3}- (11/2)c_{3}c_{2}^{9}+ (1/486)c_{4}^{3}c_{2}^{2}+ (1/18)c_{4}^{2}c_{2}^{5}+ (1/2)c_{4}c_{2}^{8}
+ (3/2)c_{2}^{11}+ (1/27)c_{4}^{2}c_{2}c_{3}^{2}- (1/3)c_{4}c_{3}^{3}c_{2}^{2}+ (7/6)c_{4}c_{3}^{2}c_{2}^{4}-(5/54)c_{3}c_{4}^{2}c_{2}^{3}- (4/3)c_{3}c_{4}c_{2}^{6})e^{12}+ O(e^{13}) (4.22)
Proof: A procedure similar to that in the proof of Theorem 4.1 can be applied to analyze the convergence of Algorithm (3.8).
5. Numerical Examples
In this section, we present the results of numerical calculations on different functions and initial points to demonstrate the efficiency of the proposed methods, the Jarratt-Householder method (JHHM) and the Jarratt-Chun-Kim method (JCKM). We also compare these methods with the classical Newton's method (NM) and with other methods, namely Jarratt's method (JM), Halley's method (HM), Householder's method (HHM), Chun and Kim's method (CKM), and the Jarratt-Halley method (JHM). All computations are carried out in double-precision arithmetic. We use the stopping criteria |x_{n+1} − x_{n}| < ϵ and |f(x_{n+1})| < ϵ, where ϵ = 10^{−15}. All programs are written in MATLAB.
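The stopping rule can be expressed as a small generic driver that counts iterations for any one-point step function; the following Python sketch is our illustration of the setup (names are ours; the paper's code is MATLAB):

```python
def niter(method_step, f, x0, eps=1e-15, maxit=100):
    """Iterate x_{n+1} = method_step(x_n) until both stopping criteria hold:
    |x_{n+1} - x_n| < eps and |f(x_{n+1})| < eps.
    Returns (iteration count, final iterate)."""
    x = x0
    for n in range(1, maxit + 1):
        x_new = method_step(x)
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            return n, x_new
        x = x_new
    return maxit, x   # tolerance not met within maxit steps (cf. NC in Table 3)

# Example: Newton step on f(x) = x^2 - 2 starting from x_0 = 1.5.
count, approx_root = niter(lambda t: t - (t * t - 2) / (2 * t),
                           lambda t: t * t - 2, 1.5)
```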
Different test functions and their approximate zeros x^{*} found up to the 15^{th} decimal place are given in Table 1, the efficiency index (E.I.) of various iterative methods is given in Table 2 and the number of iterations (NITER) to find x^{*} is given in Table 3. NC in Table 3 means that the method does not converge to the root x^{*}.
6. Conclusion
In this paper, we presented two new predictor-corrector iterative methods with twelfth-order convergence for solving nonlinear equations, based on Jarratt's method, Householder's method and Chun and Kim's method. The proposed methods have the same efficiency index, 1.513086. The numerical experiments show that our methods are efficient and robust, and that they converge faster than the classical Newton's method and some other methods.
References