Comparative Study of Numerical Methods for Solving Non-linear Equations Using Manual Computation

This paper compares the performance, in terms of rate of convergence, of five numerical methods: the Bisection method, Newton Raphson method, Regula Falsi method, Secant method, and Fixed Point Iteration method. A manual computational algorithm is developed for each method, and each is employed to solve a root-finding problem by hand with the aid of a TI-Nspire calculator. The computations showed that all five methods converged to the root 1.56155; however, the Bisection method converged at the 14th iteration, the Fixed Point Iteration method at the 7th, the Secant method at the 5th, and the Regula Falsi and Newton Raphson methods at the 2nd, suggesting that the Newton Raphson and Regula Falsi methods are the most efficient for computing the roots of a nonlinear quadratic equation.


Introduction
When real-life problems are modelled as mathematical equations, the resulting equations are often linear or nonlinear in nature, and their roots give the final result of the problem under study. Using the most efficient numerical method for root-finding problems is therefore very important in mathematical computation, since obtaining an accurate result is important in problem solving. The root-finding problem is the problem of finding a root of the equation f(x) = 0, where f(x) is a function of the single variable x: given f(x) and a value x = α such that f(α) = 0, then α is a root of f(x). Root-finding problems arise in several fields of study, including Engineering, Chemistry, Agriculture, the Biosciences and so on, because unknown variables always appear in formulas describing real-life problems. Relevant situations in Physics where such problems must be solved include finding the equilibrium position of an object, the potential surface of a field and the quantized energy levels of a confined structure [2]. In fact, the determination of any unknown appearing implicitly in scientific or engineering formulas gives rise to a root-finding problem [3].
Numerical methods for solving root-finding problems include the Bisection method, Newton Raphson method, Regula Falsi method, Secant method and Fixed Point Iteration. The rate of convergence of such a method may be linear, quadratic or of higher order; the higher the order, the faster the method converges [6]. Several investigations have been carried out by different authors in an attempt to identify the right method for solving root-finding problems. One such study, by Ehiwario et al. (2014), investigated the effectiveness of the Newton Raphson, Bisection and Secant methods in solving a root-finding problem [6]. Prior to that investigation, Srivastava et al. (2011) carried out a comparative study of the Bisection, Newton Raphson and Secant methods to find the method requiring the fewest iterations when applied to a single-variable nonlinear equation [11]. Other numerical methods for solving root-finding problems, such as the Regula Falsi and Fixed Point Iteration methods, have been applied in other studies [5,8]. The purpose of the present study is to compare the number of iterations a given numerical method needs to reach a solution and the rate of convergence of the methods. To achieve this aim, a manual computational algorithm for each of the methods is used to find the root of a function.
A similar study compared four iterative methods for solving nonlinear equations in order to identify the best-performing method [9]. MATLAB results were used to check the suitability of each method. Judging by the approximate-error graph, Newton's method was the most robust for solving the nonlinear equation; it also required fewer iterations than the other methods and showed less processing time.

Background of Newton Raphson Method
The Newton Raphson method can be derived by several procedures, most notably by using the concept of the slope of a function and by using the Taylor series.

Concept of Slope of a Function
The derivation of the Newton Raphson method from the concept of the slope of a function begins with the observation that a tangent to a curve that cuts the x-axis suggests a way of computing successive approximations to the solution of the curve's equation, as illustrated in Figure 1. The method ensures that the iterative solution is updated at every point.

Termination Condition for Newton Raphson Method
The iterative process of the Newton Raphson method is terminated when the approximate relative error falls below a chosen threshold ε_s. The approximate relative error is given by

ε_a = |(x_{n+1} − x_n) / x_{n+1}| × 100% < ε_s.

Using Taylor Series
The Newton Raphson method can also be derived from the Taylor series. This derivation has proven to be more useful in computations involving error analysis, since the Taylor series gives an easier approach to evaluating the approximation error.
Given a function f(x), its updated value in truncated Taylor series form is

f(x_{n+1}) ≈ f(x_n) + (x_{n+1} − x_n) f'(x_n).

At the point of intersection of the curve with the x-axis in Figure 1 the function value is zero, so setting f(x_{n+1}) = 0 and solving for x_{n+1} gives equation (3):

x_{n+1} = x_n − f(x_n) / f'(x_n).

Algorithm for Newton Raphson Method
In order to apply the Newton Raphson method to find the root of a nonlinear equation:
Step 1: Write the equation in the form f(x) = 0 and find f'(x).
Step 2: Test values of x in the function f(x) to obtain a range (a, b) in which the real root lies.
Step 3: Select an initial value x_0 from the range in Step 2 and compute f(x_0) and f'(x_0).
Step 4: Use the values obtained in Step 3 to compute h_0 = −f(x_0)/f'(x_0), and hence x_1 = x_0 + h_0.
Step 5: Use the new value to repeat the process in Steps 3 and 4 and obtain x_{n+1}. Stop when the values of x_{n+1} in two successive iterations are the same; that value is the real root of the function f(x).
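The steps above can be sketched in Python as a quick cross-check of the hand computation. The function f(x) = x² + x − 4 is the one solved in the Results section; the tolerance and starting guess are assumptions, so the iteration count may differ from the manual tables.

```python
# Newton Raphson sketch for f(x) = x^2 + x - 4 (the equation from the
# Results section). Tolerance and starting guess are assumptions.
def newton_raphson(f, df, x0, tol=1e-5, max_iter=50):
    x = x0
    for n in range(1, max_iter + 1):
        h = -f(x) / df(x)            # Step 4: h_n = -f(x_n)/f'(x_n)
        x_new = x + h                # x_{n+1} = x_n + h_n
        if abs(x_new - x) < tol:     # Step 5: successive values agree
            return x_new, n
        x = x_new
    raise RuntimeError("did not converge")

f = lambda x: x**2 + x - 4
df = lambda x: 2 * x + 1
root, iters = newton_raphson(f, df, x0=1.5)
print(round(root, 5))  # 1.56155
```

The exact positive root is (−1 + √17)/2 ≈ 1.5615528, so the printed value agrees with the paper's 1.56155.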

Convergence of Newton Raphson Method
Assume that x_1, x_2, ..., x_n, ... is a sequence of approximations to a root δ obtained by a numerical method, and write e_n = x_n − δ for the error at step n. If, for some p and some non-zero constant K,

lim_{n→∞} |e_{n+1}| / |e_n|^p = K,

then the numerical method has order of convergence p, and K is the asymptotic error constant. The larger the value of p, the faster the rate of convergence.
In the case of the Newton Raphson method, we assume that the function f(x) can be differentiated twice and that δ is a root of the function. Substituting x_n = δ + e_n into f(x_n), expanding f(δ + e_n) and f'(δ + e_n) in Taylor series about the point δ where f(δ) = 0, and discarding the higher powers of e_n, we get

e_{n+1} ≈ [f''(δ) / (2 f'(δ))] e_n².

Hence, at the root δ, the Newton Raphson method has second-order (quadratic) convergence with asymptotic error constant |f''(δ)| / (2|f'(δ)|). This means that the subsequent error e_{n+1} is proportional to the square of the previous error e_n.

Background of Bisection Method
The concept of the Bisection method is based on Bolzano's theorem on continuity. Given a function f(x) = 0 whose root lies in [a, b], if f(x) is real and continuous and f(a)f(b) < 0, then there is at least one root between a and b. This method is classified under bracketing methods because two initial guesses for the root are required. As the name implies, these guesses must "bracket," or be on either side of, the root. The particular methods described herein employ different strategies to systematically reduce the width of the bracket and, hence, home in on the correct answer [10].

Algorithm for Bisection Method
Step 1: Choose a and b as the initial guesses such that f(a)f(b) < 0.
Step 2: Compute the midpoint c of a and b such that c = (a + b)/2.
Step 3: If f(c) = 0, or |b − a| is within the required tolerance, stop and take c as the root.
Step 4: If f(a)f(c) < 0, set b = c; otherwise set a = c.
Step 5: Go back to step 2.
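A minimal sketch of this bracketing loop, applied to the equation f(x) = x² + x − 4 on the interval [1.5, 1.6] found in the Results section (the stopping tolerance is an assumption):

```python
# Bisection sketch for f(x) = x^2 + x - 4 on the bracket [1.5, 1.6].
def bisection(f, a, b, tol=1e-5, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for n in range(1, max_iter + 1):
        c = (a + b) / 2.0                       # Step 2: midpoint
        if f(c) == 0 or (b - a) / 2 < tol:      # Step 3: stop condition
            return c, n
        if f(a) * f(c) < 0:                     # Step 4: root in [a, c]
            b = c
        else:                                   # root in [c, b]
            a = c
    raise RuntimeError("did not converge")

f = lambda x: x**2 + x - 4
root, iters = bisection(f, 1.5, 1.6)
print(round(root, 5))  # 1.56155
```

With this tolerance the loop needs on the order of a dozen halvings of the 0.1-wide bracket, consistent with the slow convergence reported in Table 2.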

Convergence of Bisection Method
In the case of the Bisection method, suppose that an algorithm produces iterates x_n that converge to the root δ, i.e. lim_{n→∞} x_n = δ. If there exists a sequence y_n that converges to zero and a positive constant K such that |x_n − δ| ≤ K|y_n|, then x_n is said to converge with rate y_n. Since the Bisection method halves the interval at every step,

|x_n − δ| ≤ (1/2^n)|b − a|.

Hence the Bisection method has a convergence rate of (1/2)^n with asymptotic convergence constant K = |b − a|.

Background of Regula Falsi Method
The Regula Falsi method is based on the rationale of similar triangles. Its main novelty is that it can be used to compute both zeros and extrema through a single generalized interpolation formula [7]. The method is based on the assumption that the graph of y = f(x) in the small interval [a_n, b_n] can be represented by the chord joining (a_n, f(a_n)) and (b_n, f(b_n)). This implies that at the point x = x_n = a_n + h_n, at which the chord meets the x-axis, we obtain two intervals [a_n, x_n] and [x_n, b_n], one of which must contain the root α, depending upon whether f(a_n)f(x_n) < 0 or f(x_n)f(b_n) < 0.
The general Regula Falsi recurrence relation is given by:

x_n = a_n − f(a_n)(b_n − a_n) / (f(b_n) − f(a_n)).

Algorithm for Regula Falsi Method
Step 1: Find point a n and b n such that a n < b n and f(a n )f(b n ) < 0.
Step 2: Take the interval [a n , b n ] and determine the next value of x n .
Step 3: If f(x_n) = 0 then x_n is an exact root; else if f(x_n)f(b_n) < 0 then let a_n = x_n; else if f(a_n)f(x_n) < 0, then let b_n = x_n.
Step 4: Repeat steps 2 and 3 until f(x_n) = 0 or |f(x_n)| ≤ β, where β is the required degree of accuracy.
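The steps above can be sketched as follows, again using f(x) = x² + x − 4 on [1.5, 1.6] from the Results section; the accuracy β is an assumption.

```python
# Regula Falsi sketch for f(x) = x^2 + x - 4 on the bracket [1.5, 1.6].
def regula_falsi(f, a, b, beta=1e-5, max_iter=100):
    for n in range(1, max_iter + 1):
        # Step 2: x-intercept of the chord through (a, f(a)) and (b, f(b))
        x = a - f(a) * (b - a) / (f(b) - f(a))
        if abs(f(x)) <= beta:        # Step 4: |f(x_n)| within accuracy beta
            return x, n
        if f(x) * f(b) < 0:          # Step 3: root lies in [x, b]
            a = x
        else:                        # root lies in [a, x]
            b = x
    raise RuntimeError("did not converge")

f = lambda x: x**2 + x - 4
root, iters = regula_falsi(f, 1.5, 1.6)
print(round(root, 5))  # 1.56155
```

Only a handful of chord updates are needed, consistent with the rapid convergence reported in Table 3.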

Convergence of Regula Falsi Method
Given an interval (a_n, b_n) containing the root of the equation f(x) = 0, one of the endpoints a_n or b_n remains fixed while the other varies with n [8]. In the case where the endpoint a is fixed, the function f(x) is approximated by the straight line passing through the points (a, f(a)) and (x_n, f(x_n)), for n = 1, 2, .... The error analysis of this linear approximation shows that the subsequent error is proportional to the previous error, e_{n+1} ≈ C e_n, where C is the asymptotic error constant. Therefore, the Regula Falsi method has a linear rate of convergence (Hassan, 2016).

Background of Secant Method
One critical challenge of the Newton Raphson method, which discourages its use, is that the derivative of the given function must always be found before proceeding to find the root. For some functions, finding the derivative is either extremely difficult (if not impossible) or time-consuming [4]. The way to avoid this problem is to approximate the derivative using the values of the function at the current point and the previous approximation. Hence, knowing f(x_n) and f(x_{n−1}), the derivative f'(x_n) can be approximated as:

f'(x_n) ≈ (f(x_n) − f(x_{n−1})) / (x_n − x_{n−1})   (8)

Substituting equation (8) into the general Newton Raphson equation (6), we get

x_{n+1} ≈ x_n − f(x_n) (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))   (10)

Therefore the Secant method can be expressed as:

x_{n+1} = x_n + h_n,   where   h_n = −f(x_n)(x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))   (11)

Algorithm for Secant Method
Step 1: Write the equation in the form f(x) = 0.
Step 2: Test values of x in the function f(x) to obtain an interval (x_{n−1}, x_n) in which the real root lies.
Step 3: Compute f(x_n) and f(x_{n−1}).
Step 4: Substitute the results of Step 3 into equation (8) and find h_n.
Step 5: Substitute the result of Step 4 into equation (11) and find the approximated value of x_{n+1}.
Step 6: Repeat Steps 3 to 5 with the new values until x_{n+1} = x_n.
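A minimal sketch of this derivative-free iteration, applied to the same f(x) = x² + x − 4 with the starting pair (1.5, 1.6) from the Results section (the tolerance is an assumption):

```python
# Secant sketch for f(x) = x^2 + x - 4, starting from x0 = 1.5, x1 = 1.6.
def secant(f, x0, x1, tol=1e-5, max_iter=50):
    for n in range(1, max_iter + 1):
        # Approximate f'(x_n) by the chord slope through the last two
        # iterates, then take a Newton-style step (equation (11)).
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:       # successive iterates agree
            return x2, n
        x0, x1 = x1, x2
    raise RuntimeError("did not converge")

f = lambda x: x**2 + x - 4
root, iters = secant(f, 1.5, 1.6)
print(round(root, 5))  # 1.56155
```

Note that no derivative of f is ever evaluated, which is the method's main advantage over Newton Raphson.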

Convergence of Secant Method
In the case of the Secant method, let us assume that δ is the root of the function f(x) = 0 and write e_n = x_n − δ. Substituting x_n = δ + e_n into equation (10), expanding, and discarding the higher-order terms in e_n, we get

e_{n+1} ≈ C e_n e_{n−1},

where C is the error constant. With the definition of convergence in mind, we want a relation of the form

e_{n+1} = D e_n^j,

where D and j are to be computed. Writing e_n = D e_{n−1}^j as in equation (13) and substituting into the relation above, then comparing the powers of e_n on both sides of equation (16), we get j = 1 + 1/j, which can be expressed as j² − j − 1 = 0 (17). Solving equation (17) for j using the general quadratic formula with a = 1, b = −1 and c = −1, we get j = (1 ± √5)/2; discarding the negative sign, we obtain the rate of convergence of the Secant method as j ≈ 1.618.

Background of Fixed Point Iteration Method
The Fixed Point Iteration method is another root-finding numerical method used to approximate solutions of the equation f(x) = 0. To start with, we rewrite the function in the form x = φ(x); any solution of f(x) = 0 is then a fixed point of φ, that is, a solution of x = φ(x). Hence the general recursive iterative process for the Fixed Point Iteration method is given by

x_{n+1} = φ(x_n),   n = 0, 1, 2, ...

Algorithm for Fixed Point Iteration Method
To find the fixed point of φ in an interval [a, b], given the equation x = φ(x) with an initial guess x_0 ∈ [a, b]:
Step 1: Write the equation in the form f(x) = 0 and rearrange it as x = φ(x).
Step 2: Initialize with the guess x_0 at n = 0.
Step 3: Set x_{n+1} = φ(x_n).
Step 4: If |x_{n+1} − x_n| > ε, set n = n + 1 and go to Step 3.
Step 5: Stop when x_{n+1} = x_n.
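The loop above can be sketched as follows. For the equation f(x) = x² + x − 4 = 0 from the Results section, one possible rearrangement (an assumption; several choices of φ exist) is x = φ(x) = 4/(x + 1), for which |φ'(x)| < 1 near the root, so the iteration converges.

```python
# Fixed Point Iteration sketch: x^2 + x - 4 = 0 rearranged as
# x = 4/(x + 1). The rearrangement and tolerance are assumptions.
def fixed_point(phi, x0, tol=1e-5, max_iter=100):
    x = x0
    for n in range(1, max_iter + 1):
        x_new = phi(x)                  # Step 3: x_{n+1} = phi(x_n)
        if abs(x_new - x) <= tol:       # Steps 4-5: iterates agree
            return x_new, n
        x = x_new
    raise RuntimeError("did not converge")

phi = lambda x: 4.0 / (x + 1.0)
root, iters = fixed_point(phi, 1.5)
print(round(root, 4))
```

Because |φ'| ≈ 0.61 at the root for this rearrangement, each step shrinks the error by roughly that factor, illustrating the linear convergence discussed below.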

Convergence of Fixed Point Iteration Method
Given φ ∈ C[a, b] with φ(x) ∈ [a, b] for all x ∈ [a, b], suppose in addition that φ' exists on (a, b) and that a constant 0 < K < 1 exists with |φ'(x)| ≤ K for all x ∈ (a, b). Then the fixed point in [a, b] is unique, and the iteration x_{n+1} = φ(x_n) converges to it for any initial guess x_0 ∈ [a, b]. Since |e_{n+1}| ≤ K|e_n|, the Fixed Point Iteration method has a linear rate of convergence.

Results and Discussion
The solution of the equation x² + x − 4 = 0 using the Bisection, Newton Raphson, Regula Falsi, Secant and Fixed Point Iteration methods is computed using the preamble, which starts with writing the given equation in the form f(x) = 0:

f(x) = x² + x − 4 = 0.   (19)

Testing for the possible roots of equation (19), we have f(1) = −2, f(1.5) = −0.25 and f(1.6) = 0.16, implying that the real root of equation (19) lies between 1.5 and 1.6; these values are adopted as initial guesses for the rest of the computations.

Table 2 shows the results obtained when the Bisection method is used to solve the nonlinear equation (19). It gave a final root of 1.56156 but converged only after the 14th iteration. Table 3 shows the results after the Regula Falsi method was used to find the root of equation (19); it converged to the solution 1.56155, just as the Newton Raphson method did, after the 2nd iteration. Table 4 shows the results obtained when the Secant method was used to compute the root of equation (19); the method converged after the 6th iteration and gave a final output of 1.56155. Table 5, on the other hand, shows the results obtained using the Fixed Point Iteration method to solve the nonlinear equation (19); it converged to the root 1.56155 after the 8th iteration.
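The bracketing test values and the quoted root can be checked directly; a short sketch (the use of the quadratic formula for the exact root is our own cross-check, not part of the manual procedure):

```python
import math

# Verify the preamble values for f(x) = x^2 + x - 4 and the quoted root.
f = lambda x: x**2 + x - 4
print(f(1), f(1.5), round(f(1.6), 2))   # -2 -0.25 0.16

# Exact positive root from the quadratic formula: (-1 + sqrt(17)) / 2
exact = (-1 + math.sqrt(17)) / 2
print(round(exact, 5))                  # 1.56155
```

The sign change between f(1.5) < 0 and f(1.6) > 0 confirms the bracket, and the exact root agrees with the value all five methods converged to.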

Conclusion
From the results obtained in the computations above, it was observed that, using manual computation, the Newton Raphson and Regula Falsi methods converge faster to the solution of the function than the other methods. This was clearly seen in the number of iterations each method took to converge to the solution. These findings contradict the findings of some authors who placed the Secant method ahead of the Newton Raphson method in terms of efficiency [6,11], while the findings of other authors agree with this paper [5,8]. For comparison, Ehiwario et al. (2014) reported their ranking of the methods using MATLAB 7.8, and Srivastava et al. (2011) reported theirs using Mathematica 9.0 and the C language; both of those studies ranked the Secant method ahead of the Newton Raphson method [6,11], whereas the present manual computation ranks Newton Raphson and Regula Falsi first.