Variants of Chebyshev’s Method with Eighth-Order Convergence for Solving Nonlinear Equations

This paper develops the variants of Chebyshev’s method by applying Lagrange interpolation and finite difference to eliminate the second derivative appearing in the Chebyshev’s method. The results of this research show that the modified eight-order method has the efficiency index 1.5157. Numerical simulations show that the effectiveness and performance of the new method in solving nonlinear equations are encouraging.


Introduction
The solution of nonlinear equations is very important problem in numerical analysis. One of the problems in nonlinear equations is to determine the roots of the equation f(x) = 0 so that the sophisticated iteration method to solve it is in need. Many iteration methods can be used to solve the problem. Newton's method is the oldest and widely used method written as To accelerate the convergence of (1) many authors have modified it as we can see in [6,8,10,11]. If we expand which has third-order convergence [1,3]. Kou et al. [7] proposed a method which is free from second-derivative by approximating ( ) '' n f x in (2) by a finite difference as follows This method has a third-order convergence [8]. The other methods modified from (2) having a four-order convergence can be seen in [2,4,5].
In this paper, we present the combination of Newton's method and Chebyshev's method into a three-step iteration method. We also incorporate finite difference to approximate the second derivative in second step and Lagrange interpolation to approximate the first derivative in the third step. The discussion of the new method and their convergence and analysis are carried out in Section 2. Then, in Section 3 we perform numerical simulations using some test functions, and compare the new method with some other methods, such as Newton's method, Halley's method, and Chebyshev's method.

Proposed Methods
In this section, for construction of the new iterative method, we use the iterative methods given by equations (1) and (2). We consider the following three-step method which requires one evaluation of the second derivative of the function. To remove this derivative, firstly we replace ( ) in (6) with a finite difference [10], that is where y n is equation (1). Secondly, we approximate f'(z n ) by a derivative of Lagrange interpolation polynomial L 2 (x) passing the points (x n , f(x n )), (y n , f(y n )), and (z n , f(z n )), that is Simplifying equation (9) yields where We can see that the new scheme (14), (15), and (16) are free from second derivative. For the method defined by scheme (14), (15), and (16), we have the following analysis of convergence.

Numerical Expriments
In this section some numerical simulations are performed to compare Chebyshev-Lagrange method to some other methods, such as Newton's method, Halley's method, and Chebyshev's method. The functions used are as follows: We also calculate the computational order of convergence (COC) of the method using the following equation: The calculation is carried out using software with 800 digits accuracy and tolerance O = 1.0 × 10 QQ . The stoping criteria of the iteration are | − | and | ( )| . The value is taken as the exact root $. In Table 1, we give initial value (x 0 ), number of iterations (N), and the computational order of convergence (COC). An asterisk (*) on the number of iterations indicates that the method converges to different roots. Table 1 shows a comparison of the number of iterations and COC several methods to resolve the above functions including Newton's method (NM), Halley's method (HM), Chebyshev's method (CM), and Chebyshev-Lagrange method (CLM) for some given initial values.
Based on Table 1 it is generally known that the CLM has the number of iterations less when compared to the other methods. This means that CLM has a better efficiency in computing process than other methods. From Table 1 we observe that the COC perfectly coincides with the theoretical results at Theorem 1. The results presented in Table 1 show that the CLM has higher convergence order compared to the other methods.  The computational results presented in Table 2 show that in almost all of cases, the CLM has the absolute values of the function smaller when compared to Newton's method, Halley's method, and Chebyshev's method.

Conclusions
In this paper we present the variants of Chebyshev's method by removing the second derivative using finite difference. This method requires three functions and two first derivative evaluations per iteration. We have that the order convergence of this method is eight. Analysis of the efficiency shows that this method is better than Newton's method, Halley's method, and Chebyshev's method.