Numerical Solution of Linear and Nonlinear Integral Equations Via Improved Block-Pulse Functions

This paper presents a numerical method based on improved block-pulse functions (IBPFs) for solving linear and nonlinear Volterra and Fredholm integral equations of the second kind. Using IBPFs and their operational matrix of integration, these equations reduce to a system of algebraic equations, which can then be programmed and solved in Mathematica. As the numerical examples show, the modification shortens the time needed to solve the resulting algebraic system and improves the accuracy of the solution over the regular block-pulse functions (BPFs). A slight change in the intervals of the BPFs turns the technique into a new, simpler, and more accurate one, and this change works well across different types of integral equations. The theorems underlying the IBPF technique and its error estimate are stated and proved, and the uniqueness and convergence of the solution are also established. Numerical examples, together with tables and graphs, are presented to illustrate and demonstrate the efficiency and accuracy of the method.


Introduction
In recent years, there has been growing interest in formulating many engineering and physical problems in terms of integral equations, which has fostered a parallel rapid growth of the literature on their numerical solution. This article is therefore a further contribution to a subject of increasing concern to scientists, engineers, and mathematicians. Linear and nonlinear integral equations are typically hard to solve analytically; hence, numerical techniques are used to solve them.
During the last decade, many attempts have been made by several researchers to solve linear and nonlinear integral equations using numerical or perturbation methods. The time-collocation method and the projection method have been applied successfully [1]. Others applied iterative methods such as the Lagrange method [2], the modified homotopy perturbation method [3], the rationalized Haar function method [4,5], the differential transformation method [6], and the Tau method [7]. In 1999, He [8] solved linear differential and integral equations by a new method, the homotopy perturbation method (HPM), and subsequently managed to solve some nonlinear problems [9]. He later developed the technique to handle more complicated problems and applied it to many applications [10]. The Adomian decomposition method was used by Wazwaz [11], with many modifications, to obtain approximate solutions for most cases of these equations.
In this paper, IBPFs are presented, and several theorems are proved showing that the resulting numerical expansions are more precise than regular block-pulse expansions. These functions are disjoint, orthogonal, and complete. By the disjointness of the IBPFs, cross terms vanish on each subinterval when multiplication, division, and some other operations are applied. The orthogonality of the IBPFs makes the operational matrix sparse. The completeness of the IBPFs guarantees that the mean-square error of approximating any real bounded function with only finitely many discontinuities on [0, 1) can be made arbitrarily small by increasing the number of terms in the improved block-pulse series. The rest of the paper is organized as follows: Section 2 describes the IBPFs and their properties and shows how to apply them to different types of linear and nonlinear integral equations. Convergence analysis is discussed in Section 3. Numerical results are given in Section 4 to illustrate the efficiency and accuracy of the proposed method. Finally, Section 5 gives the main outlines of the paper as a conclusion.
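The disjointness and orthogonality properties claimed above can be checked numerically. The sketch below samples the IBPF indicator functions on the interval layout detailed in the next section (first and last subintervals of width h/2, the rest of width h, with h = 1/m); the variable names are illustrative, not from the paper.

```python
import numpy as np

# IBPF layout: m + 1 indicator functions on [0, 1); the first and last
# subintervals have width h/2, the middle ones width h, with h = 1/m.
m = 8
h = 1.0 / m
edges = np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))

n = 200_000
x = np.linspace(0.0, 1.0, n, endpoint=False)
# phi[i] samples the indicator function of the ith subinterval
phi = np.array([(x >= a) & (x < b) for a, b in zip(edges[:-1], edges[1:])],
               dtype=float)

gram = phi @ phi.T / n   # approximates the Gram matrix: int_0^1 phi_i phi_j dx
# disjointness: off-diagonal entries vanish;
# orthogonality: the diagonal recovers the subinterval widths h_i
```

The off-diagonal Gram entries are exactly zero (the supports never overlap), while the diagonal reproduces h/2 for the end functions and h for the middle ones, which is the sparsity the text refers to.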

The Basic Idea of the New Improved Block Pulse Function (IBPF)
Consider the Fredholm integral equation of the second kind

u(x) = f(x) + λ ∫₀¹ k(x, t) u(t) dt, (1)

or its nonlinear (mixed Volterra-Fredholm) version

u(x) = f(x) + λ₁ ∫₀ˣ k₁(x, t) [u(t)]ʳ dt + λ₂ ∫₀¹ k₂(x, t) [u(t)]^q dt, (2)

where u(x) is the function to be determined, f(x) is an analytic function over the interval [0, 1), k(x, t), k₁(x, t), and k₂(x, t) are the kernels of the integral equations, analytic on [0, 1) × [0, 1), and r and q are non-negative integers. All these functions may be represented in vector form as u(x) ≈ Uᵀ Φ(x), f(x) ≈ Fᵀ Φ(x), and k(x, t) ≈ Φᵀ(x) K Φ(t).

The IBPFs were first introduced by Farshid Mirzaee [12], who used them to solve a system of integral equations. In this article, however, the operational matrix for the Volterra integral equation is modified to get better results, and linear, nonlinear, and mixed Fredholm and Volterra integral equations are solved. The improved block-pulse functions are derived from the regular block-pulse functions with a slight change in interval width, a change that substantially alters the algorithm of the method, as studied in detail below. The IBPFs are a set of (m + 1) functions defined over the interval [0, 1) by

φ₀(x) = 1 for x ∈ [0, h/2) and 0 otherwise,
φᵢ(x) = 1 for x ∈ [ih − h/2, ih + h/2) and 0 otherwise, i = 1, 2, ..., m − 1, (7)
φₘ(x) = 1 for x ∈ [1 − h/2, 1) and 0 otherwise,

where h = 1/m and m is a positive integer representing the number of subintervals, chosen according to the accuracy needed in solving the problem. The IBPFs are disjoint,

φᵢ(x) φⱼ(x) = δᵢⱼ φᵢ(x), i, j = 0, 1, ..., m,

and orthogonal to each other,

∫₀¹ φᵢ(x) φⱼ(x) dx = hᵢ δᵢⱼ, with h₀ = hₘ = h/2 and hᵢ = h for i = 1, ..., m − 1,

for x ∈ [0, 1). The first (m + 1) terms of the IBPFs can be written in vector form as

Φ(x) = [φ₀(x), φ₁(x), ..., φₘ(x)]ᵀ. (11)

Furthermore, one gets Φ(x) Φᵀ(x) = diag(Φ(x)). It is worth noting that over the first and last (half-width) subintervals, i = 0 or m, the running integral ∫₀ˣ φᵢ(t) dt can be approximated at the collocation midpoint by h/4, while over any of the middle subintervals it can be approximated by h/2. We then deduce that the integration of the vector Φ(x) defined in Eq. (11) can be represented approximately as

∫₀ˣ Φ(t) dt ≈ P Φ(x), (12)

where P is the (m + 1) × (m + 1) operational matrix of integration

P = (h/4) ·
[ 1 2 2 ⋯ 2 2
  0 2 4 ⋯ 4 4
  0 0 2 ⋯ 4 4
  ⋮ ⋮ ⋮ ⋱ ⋮ ⋮
  0 0 0 ⋯ 2 4
  0 0 0 ⋯ 0 1 ].

This operational matrix is modified and rewritten here to get better results than the one used in the work of F. Mirzaee [12]. Now suppose f(x) is a continuous function with f ∈ L²[0, 1). It may be expanded by the IBPFs as

f(x) ≈ fₘ(x) = Σᵢ₌₀ᵐ fᵢ φᵢ(x) = Fᵀ Φ(x),

where F is the (m + 1) × 1 vector of improved block-pulse coefficients, Φ(x) is defined in Eq. (11), and the coefficients fᵢ are obtained by

fᵢ = (1/hᵢ) ∫₀¹ f(x) φᵢ(x) dx. (24)
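The construction above (edges, operational matrix of integration, and expansion coefficients) can be sketched as follows. This is an illustrative implementation under the stated interval layout; the function names are my own, not the paper's.

```python
import numpy as np

def ibpf_edges(m):
    """Subinterval edges of the m + 1 IBPFs on [0, 1): the first and last
    subintervals have width h/2, the middle ones width h, where h = 1/m."""
    h = 1.0 / m
    return np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))

def ibpf_matrix_P(m):
    """Operational matrix of integration, int_0^x Phi(t) dt ~ P Phi(x).
    Row i holds the IBPF coefficients of the running integral of phi_i:
    half the area of its own subinterval on that subinterval (h/4 at the
    two ends, h/2 in the middle) and the full area on every later one."""
    edges = ibpf_edges(m)
    widths = np.diff(edges)
    P = np.zeros((m + 1, m + 1))
    for i in range(m + 1):
        P[i, i] = widths[i] / 2      # half the area of phi_i's own subinterval
        P[i, i + 1:] = widths[i]     # full area once the subinterval is passed
    return P

def ibpf_coeffs(f, m, nq=400):
    """Improved block-pulse coefficients f_i = (1/h_i) int f(x) phi_i(x) dx,
    i.e. the average of f over each subinterval (midpoint quadrature)."""
    edges = ibpf_edges(m)
    coeffs = np.empty(m + 1)
    for i in range(m + 1):
        w = edges[i + 1] - edges[i]
        xs = edges[i] + (np.arange(nq) + 0.5) * w / nq
        coeffs[i] = f(xs).mean()
    return coeffs
```

At the collocation midpoints this P is exact: entry P[i, j] equals the overlap of phi_i's support with [0, x_j], which is the approximation rule stated in the text.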

Solution Algorithm for Fredholm Integral Equation
Substituting the IBPF expansions u(x) ≈ Uᵀ Φ(x), f(x) ≈ Fᵀ Φ(x), and k(x, t) ≈ Φᵀ(x) K Φ(t) into Eq. (1), and using

∫₀¹ Φ(t) Φᵀ(t) dt = D = diag(h/2, h, ..., h, h/2),

one obtains

Uᵀ Φ(x) = Fᵀ Φ(x) + λ Φᵀ(x) K D U.

It follows that the unknown coefficient vector U can be determined from the relation

(I − λ B) U = F, with B = K D, (32)

where I is the identity matrix with dimensions (m + 1) × (m + 1). The entries of K can be computed in the following manner:

kᵢⱼ = (1/(hᵢ hⱼ)) ∫₀¹ ∫₀¹ k(x, t) φᵢ(x) φⱼ(t) dx dt.

By solving the linear system in Eq. (32), U may be obtained to get u(x) ≈ Uᵀ Φ(x). (36)
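The Fredholm reduction above can be sketched in a few lines. In this sketch the kernel coefficients and the right-hand side are approximated by midpoint samples rather than the exact double integrals (an assumption made for brevity), and the function name is illustrative.

```python
import numpy as np

def solve_fredholm_ibpf(f, k, m, lam=1.0):
    """Sketch of the IBPF Fredholm solver: reduce
        u(x) = f(x) + lam * int_0^1 k(x, t) u(t) dt
    to the algebraic system (I - lam*K*D) U = F, where K holds midpoint
    samples of the kernel, D = diag(h_i) holds the subinterval widths,
    and F holds midpoint samples of f."""
    h = 1.0 / m
    edges = np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))
    widths = np.diff(edges)
    mids = (edges[:-1] + edges[1:]) / 2
    K = k(mids[:, None], mids[None, :])   # approximate kernel coefficient matrix
    F = f(mids)
    U = np.linalg.solve(np.eye(m + 1) - lam * K * widths[None, :], F)
    return mids, U
```

As a check, the equation u(x) = 2x/3 + ∫₀¹ x t u(t) dt has exact solution u(x) = x, and the computed coefficients match the midpoints to roughly the quadrature accuracy.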

Solution Algorithm for Volterra Integral Equation
Combining the Volterra counterpart of Eq. (1),

u(x) = f(x) + λ ∫₀ˣ k(x, t) u(t) dt,

with the IBPF expansions, and assuming Ũ = diag(U), so that Φ(t) Φᵀ(t) U = Ũ Φ(t), it follows that

∫₀ˣ k(x, t) u(t) dt ≈ Φᵀ(x) K Ũ ∫₀ˣ Φ(t) dt ≈ Φᵀ(x) K Ũ P Φ(x).

By the disjointness of the basis, Φᵀ(x) K Ũ P Φ(x) = Σᵢ (K Ũ P)ᵢᵢ φᵢ(x), and since (K Ũ P)ᵢᵢ = Σⱼ kᵢⱼ pⱼᵢ Uⱼ, this term is linear in U. Therefore, the unknown coefficients may be determined from the following relation:

(I − λ V) U = F, with vᵢⱼ = kᵢⱼ pⱼᵢ, (44)

where I is the identity matrix with dimensions (m + 1) × (m + 1), and V can be calculated from K and the operational matrix P. Solving the linear system in Eq. (44), U may be obtained to get u(x) ≈ Uᵀ Φ(x). (47)
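Because the operational matrix is triangular, the Volterra system can also be solved step by step. The sketch below mirrors the quadrature implied by P (full area of each subinterval already passed, half the area of the current one) with forward substitution; the names and the midpoint sampling of the kernel are illustrative assumptions.

```python
import numpy as np

def solve_volterra_ibpf(f, k, m, lam=1.0):
    """Sketch of the IBPF Volterra solver: int_0^{x_i} k(x_i, t) u(t) dt is
    approximated by the full areas of the subintervals already passed plus
    half the area of the current one, giving a lower-triangular system
    solved by forward substitution."""
    h = 1.0 / m
    edges = np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))
    widths = np.diff(edges)
    mids = (edges[:-1] + edges[1:]) / 2
    U = np.empty(m + 1)
    for i in range(m + 1):
        acc = f(mids[i])
        for j in range(i):                       # fully-covered subintervals
            acc += lam * k(mids[i], mids[j]) * widths[j] * U[j]
        diag = lam * k(mids[i], mids[i]) * widths[i] / 2   # half of the current one
        U[i] = acc / (1.0 - diag)
    return mids, U
```

As a check, u(x) = 1 + ∫₀ˣ u(t) dt has exact solution u(x) = eˣ, and the computed coefficients track exp at the midpoints.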

Solution Algorithm for Non-linear Integral Equations
In this section, the nonlinear Volterra-Fredholm integral equation given in Eq. (2) will be solved by using IBPFs. As mentioned in the previous sections, the function u(x) defined over the interval [0, 1) may be expanded as u(x) ≈ Uᵀ Φ(x). In the same manner, [u(x)]ʳ can be approximated in terms of IBPFs as [u(x)]ʳ ≈ U₍ᵣ₎ᵀ Φ(x). To calculate the vector U₍ᵣ₎, note that by the disjointness of the basis,

[u(x)]ʳ ≈ (Σᵢ Uᵢ φᵢ(x))ʳ = Σᵢ (Uᵢ)ʳ φᵢ(x),

so the ith component of U₍ᵣ₎ is simply (Uᵢ)ʳ, and likewise for [u(x)]^q and U₍q₎. Now, in order to solve the nonlinear Volterra-Fredholm integral equation given in Eq. (2), the following approximations must be used: u(x) ≈ Uᵀ Φ(x), f(x) ≈ Fᵀ Φ(x), k₁(x, t) ≈ Φᵀ(x) K₁ Φ(t), and k₂(x, t) ≈ Φᵀ(x) K₂ Φ(t), where the (m + 1) vectors U, F, U₍ᵣ₎, U₍q₎ and the (m + 1) × (m + 1) matrices K₁ and K₂ collect the IBPF coefficients. The nonlinear equation then transfers to

Uᵀ Φ(x) = Fᵀ Φ(x) + λ₁ Φᵀ(x) K₁ Ũ₍ᵣ₎ P Φ(x) + λ₂ Φᵀ(x) K₂ D U₍q₎,

with Ũ₍ᵣ₎ = diag(U₍ᵣ₎), which gives the following algebraic system:

Uᵢ = Fᵢ + λ₁ Σⱼ (k₁)ᵢⱼ pⱼᵢ (Uⱼ)ʳ + λ₂ Σⱼ (k₂)ᵢⱼ hⱼ (Uⱼ)^q, i = 0, 1, ..., m. (72)

Solving this nonlinear system in Eq. (72), the coefficients Uᵢ can be found, and substituting them into u(x) ≈ Uᵀ Φ(x) gives the solution.
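Because the disjointness property reduces [u]^p to an elementwise power of the coefficient vector, the nonlinear system can be attacked by simple fixed-point iteration. The sketch below does this for a purely Fredholm nonlinear case (an illustrative simplification; the paper assembles and solves the full algebraic system directly, e.g. in Mathematica):

```python
import numpy as np

def solve_nonlinear_ibpf(f, k, m, power=2, tol=1e-10, max_iter=200):
    """Sketch: the nonlinear Fredholm equation
        u(x) = f(x) + int_0^1 k(x, t) [u(t)]^p dt
    becomes U = F + K @ (w * U**p) at the IBPF midpoints, where w holds the
    subinterval widths; here it is solved by fixed-point iteration."""
    h = 1.0 / m
    edges = np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))
    widths = np.diff(edges)
    mids = (edges[:-1] + edges[1:]) / 2
    K = k(mids[:, None], mids[None, :])
    F = f(mids)
    U = F.copy()
    for _ in range(max_iter):
        U_new = F + K @ (widths * U ** power)   # elementwise power via disjointness
        if np.max(np.abs(U_new - U)) < tol:
            return mids, U_new
        U = U_new
    return mids, U
```

As a check, u(x) = 3x/4 + ∫₀¹ x t [u(t)]² dt has exact solution u(x) = x, and the iteration converges to the midpoint values.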

Convergence Analysis
In this section, we show that the current method is convergent and that its order of convergence is O(h) = O(1/m). We define the errors

e(x) = f(x) − fₘ(x), where f ∈ L²(I), I = [0, 1), and fₘ is the IBPF expansion of f,

and

e(x, t) = k(x, t) − kₘ(x, t), where k ∈ L²(I × I) and kₘ is the IBPF expansion of the kernel.

For this purpose, we will need the following theorems.

Theorem 1. Let f ∈ L²(I) and let fₘ(x) = Σᵢ₌₀ᵐ fᵢ φᵢ(x) be the IBPF expansion of f, where the fᵢ, i = 0, 1, ..., m, are defined as in Eq. (24). Then the mean-square error between f(x) and fₘ(x) on the interval I achieves its minimum value, and moreover fₘ → f in the mean-square sense as m → ∞.

Proof
This is an immediate consequence of the corresponding theorem proved in the work of Jiang and Schaufelberger [13].
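The minimizing coefficients of Theorem 1 are the interval averages, and the resulting mean-square error shrinks as m grows. A numerical sketch (illustrative names, midpoint quadrature):

```python
import numpy as np

def ibpf_mse(f, m, n=20000, nq=200):
    """Mean-square error of the IBPF expansion of f on [0, 1).  The
    coefficients are the interval averages f_i = (1/h_i) int f phi_i dx,
    which by Theorem 1 minimize the mean-square error."""
    h = 1.0 / m
    edges = np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))
    coeffs = np.empty(m + 1)
    for i in range(m + 1):
        w = edges[i + 1] - edges[i]
        xs = edges[i] + (np.arange(nq) + 0.5) * w / nq
        coeffs[i] = f(xs).mean()        # interval average of f
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    idx = np.searchsorted(edges, x, side="right") - 1
    return np.mean((f(x) - coeffs[idx]) ** 2)
```

For a smooth function such as exp, the mean-square error drops by roughly a factor of 16 when m is quadrupled, consistent with the O(h) pointwise bound below.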
Theorem 2. Suppose that f(x) is continuous on I, differentiable on (0, 1), and that there exists a positive scalar M such that |f′(x)| ≤ M for every x ∈ I. Then ‖f − fₘ‖ = O(h). Proof. See Ref. [14].
where ξᵢ ∈ Iᵢ, i = 0, 1, ..., m, with Iᵢ the ith subinterval. From the above equations and Theorem 2, one gets ‖e(x)‖ = O(h), which completes the proof. Now suppose that e′(x) is the error between f(x) and its regular BPF expansion. As in Ref. [13], it is clear that

‖e(x)‖ ≤ ‖e′(x)‖, (92)

so the IBPF expansion is at least as accurate as the BPF expansion. By using the mean value theorem for integrals, and similarly to the proof of Theorem 3, we get

‖e(x, t)‖ = O(h). (100)

Now suppose that e′(x, t) is the error between k(x, t) and its regular BPF expansion. From the work of Maleknejad et al. [15], it is clear that

‖e(x, t)‖ ≤ ‖e′(x, t)‖. (102)
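The O(h) estimate can be observed numerically: halving h should roughly halve the sup-norm error of the midpoint IBPF expansion of a smooth function. A sketch (illustrative names):

```python
import numpy as np

def sup_error(f, m, n=20000):
    """Sup-norm error of the midpoint IBPF expansion of f on [0, 1),
    used to check the O(h) bound of Theorem 2 numerically."""
    h = 1.0 / m
    edges = np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))
    mids = (edges[:-1] + edges[1:]) / 2
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    idx = np.searchsorted(edges, x, side="right") - 1
    return np.max(np.abs(f(x) - f(mids)[idx]))
```

For f = sin on [0, 1), doubling m from 16 to 32 cuts the sup-norm error roughly in half, matching the first-order bound.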

Theorem 6
Let L²(ℝ) be a Hilbert space and let the φᵢ(x) defined in Eq. (13) form a basis of IBPFs. Let u be the solution of Eq. (1) or Eq. (2), with IBPF coefficients cᵢ, and define the sequence of partial sums Sₙ = Σᵢ₌₀ⁿ cᵢ φᵢ(x). We have to prove that {Sₙ} is a Cauchy sequence in the Hilbert space.

Proof. Let Sₙ and Sⱼ be partial sums with n ≥ j. We claim that

‖Sₙ − Sⱼ‖² = ‖Σᵢ₌ⱼ₊₁ⁿ cᵢ φᵢ‖² = Σᵢ₌ⱼ₊₁ⁿ |cᵢ|² hᵢ.

From Bessel's inequality, the series Σᵢ |cᵢ|² hᵢ is convergent, and hence ‖Sₙ − Sⱼ‖² → 0 as j, n → ∞. Thus {Sₙ} is a Cauchy sequence, and it converges to some s (say). We assert that u(x) = s. Indeed, for every i,

⟨s − u, φᵢ⟩ = limₙ→∞ ⟨Sₙ, φᵢ⟩ − ⟨u, φᵢ⟩ = cᵢ hᵢ − cᵢ hᵢ = 0.

We conclude that s − u is orthogonal to every basis function; hence u(x) = s, and Sₙ = Σᵢ₌₀ⁿ cᵢ φᵢ(x) converges to u(x) as n → ∞, which proves the theorem. The above relation is possible only if Σᵢ |cᵢ|² hᵢ < ∞, which Bessel's inequality guarantees. (130)

Numerical Modeling
This part includes some physical models solved by the current improved technique to demonstrate the reliability and efficiency of the modifications. It also includes numerical comparisons between the present method and other similar methods to show the accuracy of each. Figures and tables are included in each model for clarification. All methods used in these comparisons have been used by many authors to solve many problems.
Example 1. We start with a Fredholm-type integral equation [16] whose exact solution is u(x) = eˣ. Expanding as in Section 2 and solving the resulting system of linear equations, the improved block-pulse series coefficients can be found; substituting them into Eq. (131) gives the IBPF approximate solution. Below are the graphs of the improved block-pulse approximate solutions at m = 32, with the exact solution on the same axes to show how close the new method is to the exact solution on the selected intervals. The points are taken as the midpoints of the IBPF subintervals, which is why the graph of the exact solution coincides with the graph of the IBPF solution there; away from these midpoints, the error is much larger. If random points are chosen instead, the solutions behave as follows.

Figure 2. A comparison between the exact solution and the BPF and IBPF approximate solutions.
It is noticed here that at some points the BPF solution is better and at others the IBPF solution is better; the error in this case is sometimes in favor of the IBPFs and at other points in favor of the BPFs. The following table shows the absolute error of the block-pulse method and of the improved block-pulse method at the midpoint of every subinterval, and the error of the present method is compared with the block-pulse error at some points for m = 16 and m = 32. It is worth noting that the midpoints of the intervals differ between the two methods, since each method uses different intervals. In the study of Maleknejad and Mahmoudi [16], the authors used hybrid Taylor and block-pulse functions and reported the maximum norm of the error; at m = 32, the IBPF maximum error norm is 5.71723 × 10^(−P), a smaller error and a more accurate solution than that obtained with the hybrid Taylor and block-pulse functions.

Example 2. A second test problem is treated in the same way. The matrix B = K D can be found from the relation given in Section 2; then, by solving the resulting system of linear equations, the improved block-pulse series coefficients can be found, and substituting them into Eq. (132) gives the IBPF approximate solution. Below are the graphs of the improved block-pulse approximate solutions at m = 32, together with the exact solution on the same axes. We notice that as we increase the number of intervals, the IBPF solution coincides with the exact solution. The graphs of the block-pulse solution at the same divisions of intervals are also shown, along with the combined graphs of the BPF, IBPF, and exact solutions at m = 32. At randomly chosen points, we again notice that at some points the BPF solution is better and at other points the IBPF solution is better, the error being sometimes in favor of the IBPFs and sometimes of the BPFs. The following tables show the values of the exact, BPF, and IBPF solutions at different points within the interval [0, 1). Notice that the modification made to the BPFs yields a smaller absolute error than the regular BPFs; it is also worth mentioning that computing the solution took less time with the IBPFs than with the BPFs.
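The behavior described in these comparisons, exact agreement at the IBPF collocation midpoints and a nonzero BPF error there, can be reproduced with a small sketch. This is an assumed illustrative setup (comparing the two expansions of a smooth function), not the paper's exact tables:

```python
import numpy as np

def midpoint_errors(f, m):
    """Absolute errors of the regular BPF and the IBPF expansions of f,
    evaluated at the IBPF collocation midpoints (illustrative comparison
    mirroring the tables of Examples 1 and 2)."""
    h = 1.0 / m
    ib_edges = np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))
    ib_mids = (ib_edges[:-1] + ib_edges[1:]) / 2
    bp_edges = np.linspace(0.0, 1.0, m + 1)
    bp_mids = (bp_edges[:-1] + bp_edges[1:]) / 2
    idx = np.clip(np.searchsorted(bp_edges, ib_mids, side="right") - 1, 0, m - 1)
    err_bpf = np.abs(f(ib_mids) - f(bp_mids)[idx])  # BPF uses its own midpoints
    err_ibpf = np.abs(f(ib_mids) - f(ib_mids))      # IBPF is exact at its midpoints
    return err_bpf, err_ibpf
```

Note that evaluating both expansions at random points instead of the IBPF midpoints would produce the mixed picture described in the text, with neither method uniformly better.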

Example 3
Now, a nonlinear Volterra integral equation will be considered [18]. The matrix V, with entries vᵢⱼ = kᵢⱼ pⱼᵢ, can be found from the relation given above; then, by solving the resulting system of equations, the improved block-pulse series coefficients can be found, and substituting them into Eq. (133) gives the IBPF approximate solution. The following tables show the values of the exact, BPF, and IBPF solutions at different points within the interval [0, 1). Notice that the modification made to the BPFs yields a smaller absolute error than the regular BPFs, and that the collocation points are taken as the midpoints of the IBPF subintervals. A second nonlinear test equation is treated in the same way: forming the system U = F + B U₍q₎ and solving it, the improved block-pulse series coefficients can be found, and substituting them into Eq. (134) gives the IBPF approximate solution. Again, the tables of exact, BPF, and IBPF values at different points within [0, 1) show that the IBPF solution is better than the BPF solution, with the collocation points at the midpoints of the IBPF subintervals. This method has also been extended and coupled with other known methods, as done in the work of Ramadan and Osheba [19], giving very accurate results that can be used to develop this work further.
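A nonlinear Volterra equation of this kind can be stepped forward over the IBPF subintervals, solving a small scalar fixed-point problem on each. The sketch below uses the illustrative test equation u(x) = f(x) + ∫₀ˣ [u(t)]² dt (an assumption for demonstration, not the equation of Ref. [18]):

```python
import numpy as np

def solve_nl_volterra(f, m):
    """Sketch for u(x) = f(x) + int_0^x [u(t)]^2 dt on the IBPF grid:
    past subintervals contribute their full area, the current one half,
    and the resulting scalar equation U_i = acc + (w_i/2) U_i^2 is solved
    by a few fixed-point sweeps."""
    h = 1.0 / m
    edges = np.concatenate(([0.0, h / 2], np.arange(1, m) * h + h / 2, [1.0]))
    widths = np.diff(edges)
    mids = (edges[:-1] + edges[1:]) / 2
    U = np.empty(m + 1)
    for i in range(m + 1):
        acc = f(mids[i]) + sum(widths[j] * U[j] ** 2 for j in range(i))
        u = acc
        for _ in range(50):                 # contraction: w_i * u is small
            u = acc + (widths[i] / 2) * u ** 2
        U[i] = u
    return mids, U
```

As a check, choosing f(x) = x − x³/3 makes u(x) = x the exact solution, and the computed coefficients track the midpoints.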

Conclusion
The IBPFs, together with the operational matrices B and V, are used to obtain numerical solutions of linear and nonlinear Volterra and Fredholm integral equations. The method reduces the integral equations to an algebraic matrix equation; after solving the matrix equation, the solution is obtained easily. The operational matrices contain many zeros, which makes them easier to deal with than those of other methods. Compared with the original BPF method, the proposed method shows high accuracy at the midpoints of its intervals, much better than that of the original technique, as demonstrated by the graphs in the numerical applications section. The convergence of the proposed method is also proved in the current article, and the absolute error is reported to establish the applicability and accuracy of the method. The results are compared with those of many other methods to demonstrate the effectiveness and convenience of the approach. It is worth mentioning that the method extends to nonlinear Volterra and Fredholm integral equations.