Variants of Chebyshev’s Method with Eighth-Order Convergence for Solving Nonlinear Equations
Muhamad Nizam Muhaijir*, M. Imran, Moh Danil Hendry Gamal
Department of Mathematics, University of Riau, Pekanbaru, Indonesia
To cite this article:
Muhamad Nizam Muhaijir, M. Imran, Moh Danil Hendry Gamal. Variants of Chebyshev’s Method with Eighth-Order Convergence for Solving Nonlinear Equations. Applied and Computational Mathematics. Vol. 5, No. 6, 2016, pp. 247-251. doi: 10.11648/j.acm.20160506.13
Received: October 26, 2016; Accepted: January 7, 2017; Published: February 3, 2017
Abstract: This paper develops variants of Chebyshev’s method by applying Lagrange interpolation and a finite difference to eliminate the second derivative appearing in Chebyshev’s method. The results of this research show that the modified eighth-order method has an efficiency index of 1.5157. Numerical simulations show that the effectiveness and performance of the new method in solving nonlinear equations are encouraging.
Keywords: Chebyshev’s Method, Finite Differences, Lagrange Interpolation, Nonlinear Equations, Order of Convergence
1. Introduction
The solution of nonlinear equations is a very important problem in numerical analysis. A basic task is to determine the roots of the equation f(x) = 0, and efficient iteration methods are needed to solve it. Many iteration methods can be used for this problem. Newton’s method is the oldest and most widely used, written as
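For illustration, the classical Newton iteration (1), x_{n+1} = x_n − f(x_n)/f'(x_n), may be sketched as follows; the helper name, tolerance, and iteration cap are illustrative choices, not part of the paper:

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:  # stop when the residual is small enough
            break
        x -= fx / df(x)
    return x
```

For example, applied to f(x) = x^2 − 2 with x0 = 1.5, the iteration converges quadratically to the root √2.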
To accelerate the convergence of (1), many authors have modified it, as we can see in [6, 8, 10, 11]. If we expand f(x) in a Taylor series about x = xn and ignore the third-order term, we obtain Chebyshev’s method in the form of
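For illustration, the classical Chebyshev iteration (2), x_{n+1} = x_n − (1 + L_f(x_n)/2) f(x_n)/f'(x_n) with L_f = f f''/(f')^2, may be sketched as follows (the helper names and tolerance are illustrative):

```python
def chebyshev(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Classical Chebyshev iteration (third-order convergence)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            break
        # L_f(x_n) = f(x_n) f''(x_n) / f'(x_n)^2
        L = fx * d2f(x) / dfx ** 2
        x = x - (1.0 + 0.5 * L) * fx / dfx
    return x
```

For f(x) = x^2 − 2 with x0 = 1.5, the iteration converges to √2 in a few steps; note that each step needs f, f', and the second derivative f'', which is exactly what the proposed variants aim to eliminate.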
In this paper, we combine Newton’s method and Chebyshev’s method into a three-step iteration method. We also incorporate a finite difference to approximate the second derivative in the second step and Lagrange interpolation to approximate the first derivative in the third step. The new method and its convergence analysis are discussed in Section 2. Then, in Section 3 we perform numerical simulations using some test functions and compare the new method with some other methods, namely Newton’s method, Halley’s method, and Chebyshev’s method.
2. Proposed Methods
In this section we use the iterative methods given by equations (1) and (2) to construct the new iterative method. We consider the following three-step method
which requires one evaluation of the second derivative of the function. To remove this derivative, we first replace the second derivative in (6) with a finite difference approximation, that is
where yn is given by equation (1).
Secondly, we approximate f’(zn) by the derivative of the Lagrange interpolation polynomial L2(x) passing through the points (xn, f(xn)), (yn, f(yn)), and (zn, f(zn)), that is
Simplifying equation (9) yields
Let , and substituting equation (8) into (6) and (10) into (7), we get
We can see that the new scheme (14), (15), and (16) is free from the second derivative. For the method defined by this scheme, we have the following analysis of convergence.
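The derivative of the quadratic Lagrange interpolant used in the third step has a closed form obtained by differentiating L2(t) and evaluating at t = zn. As an illustrative sketch (the helper name is hypothetical, not from the paper):

```python
def lagrange_deriv_at_z(xn, yn, zn, fxn, fyn, fzn):
    """Derivative at t = zn of the quadratic Lagrange polynomial
    L2(t) through (xn, fxn), (yn, fyn), (zn, fzn)."""
    return (fxn * (zn - yn) / ((xn - yn) * (xn - zn))
            + fyn * (zn - xn) / ((yn - xn) * (yn - zn))
            + fzn * (2 * zn - xn - yn) / ((zn - xn) * (zn - yn)))
```

Since L2 reproduces polynomials of degree at most two exactly, for f(t) = t^2 the sketch returns exactly 2·zn, which is a convenient sanity check.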
Theorem 1. Assume that the function f is sufficiently differentiable and has a simple root. If the initial point x0 is sufficiently close to the root, then the iteration method in equations (14)-(16) has eighth-order convergence and satisfies the following error equation:
where and .
Proof. Let the root of the equation be simple. Using the Taylor expansion about the root, we obtain
Because equation (17) can be rewritten in the form of
Similarly, carrying out the Taylor expansion again and simplifying, we obtain
Dividing (18) by (19) gives us
Substituting (20) into (14), we get
Moreover, the Taylor expansions about the root are given respectively by
Repeating the above process, we can find the approximation as follows:
Now dividing (22) by (23), we obtain
From (18), (19), (22) and (23), we get
Using (18) and (22), we obtain
Furthermore, dividing (25) by (26), we have
Substituting (21), (24) and (27) into (15), we obtain
Applying Taylor expansion of about , we get
To obtain the , we substitute (18), (28) and (29) into (11), that is
Using the same strategy, can be obtained by substituting (21), (22), (28) and (29) into (12), that is
To obtain , we substitute (18), (21) and (22) into (13), that is
Using the equations (30), (31) and (32), we get
Dividing (29) by (33), we have
Substituting equations (28) and (34) into (16), we obtain
Putting , then from equation (35) we get
This means that the method defined by scheme (14), (15), and (16) has eighth-order convergence. The proof is completed.
Schemes (14), (15), and (16) require three evaluations of the function and two evaluations of its first derivative per iteration. Consider the definition of the efficiency index as p^(1/m), where p is the order of the method and m is the number of functional evaluations per iteration required by the method. Then the method obtained by schemes (14), (15), and (16) has the efficiency index 8^(1/5) ≈ 1.5157, which is better than Newton’s method with efficiency index 2^(1/2) ≈ 1.4142, and Halley’s method and Chebyshev’s method with efficiency index 3^(1/3) ≈ 1.4422.
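The efficiency indices p^(1/m) quoted above can be checked directly; a minimal illustrative computation:

```python
# Efficiency index p ** (1/m): p = order of convergence,
# m = functional evaluations per iteration.
methods = {
    "Newton    (p=2, m=2)": 2 ** (1 / 2),
    "Halley    (p=3, m=3)": 3 ** (1 / 3),
    "Chebyshev (p=3, m=3)": 3 ** (1 / 3),
    "Proposed  (p=8, m=5)": 8 ** (1 / 5),
}
for name, index in methods.items():
    print(f"{name}: {index:.4f}")
```

The proposed method’s index 8^(1/5) ≈ 1.5157 exceeds both 2^(1/2) ≈ 1.4142 and 3^(1/3) ≈ 1.4422.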
3. Numerical Experiments
In this section some numerical simulations are performed to compare the Chebyshev-Lagrange method with some other methods, namely Newton’s method, Halley’s method, and Chebyshev’s method. The test functions used are as follows:
We also calculate the computational order of convergence (COC) of the method using the following equation:
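The COC is commonly estimated from three successive differences of the iterates, ρ ≈ ln|(x_{n+1}−x_n)/(x_n−x_{n−1})| / ln|(x_n−x_{n−1})/(x_{n−1}−x_{n−2})|. Assuming this standard formulation (the paper’s exact equation is not reproduced here), an illustrative sketch:

```python
import math

def coc(x):
    """Computational order of convergence estimated
    from the last four iterates in the list x."""
    e1 = abs(x[-1] - x[-2])
    e2 = abs(x[-2] - x[-3])
    e3 = abs(x[-3] - x[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)
```

For a quadratically convergent sequence such as the errors 1e-1, 1e-2, 1e-4, 1e-8, the estimate is close to 2, as expected.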
The calculations are carried out using software with 800-digit accuracy and tolerance . The stopping criteria of the iteration are and . The value is taken as the exact root .
In Table 1, we give the initial value (x0), the number of iterations (N), and the computational order of convergence (COC). An asterisk (*) on the number of iterations indicates that the method converges to a different root. Table 1 shows a comparison of the number of iterations and the COC of several methods for solving the above functions, including Newton’s method (NM), Halley’s method (HM), Chebyshev’s method (CM), and the Chebyshev-Lagrange method (CLM), for some given initial values.
Based on Table 1, CLM generally requires fewer iterations than the other methods, which means that CLM is computationally more efficient. From Table 1 we also observe that the COC coincides with the theoretical result of Theorem 1. The results presented in Table 1 show that CLM has a higher convergence order than the other methods.
Table 2 shows a comparison of the absolute difference and the absolute value of the function for several methods applied to the above functions, including Newton’s method, Halley’s method, Chebyshev’s method, and the Chebyshev-Lagrange method, for some given initial values.
Table 2. Comparison of and .
The computational results presented in Table 2 show that in almost all cases, CLM yields smaller absolute function values than Newton’s method, Halley’s method, and Chebyshev’s method.
4. Conclusion
In this paper we present variants of Chebyshev’s method in which the second derivative is removed using a finite difference and Lagrange interpolation. The method requires three function evaluations and two first-derivative evaluations per iteration, and its order of convergence is eight. The efficiency analysis shows that this method is better than Newton’s method, Halley’s method, and Chebyshev’s method.