On Optimal Parameter Not Only for the SOR Method

The Jacobi, Gauss-Seidel and SOR methods belong to the class of simple iterative methods for linear systems. Because of the parameter ω, the SOR method is more effective than the Gauss-Seidel method. Here, a new approach to the simple iterative methods is proposed: a new parameter q can be introduced into every simple iterative method. If the matrix of the system is positive definite and the parameter q is sufficiently large, the method is convergent. The original Jacobi method converges only if the matrix is diagonally dominant, while the Jacobi method with the parameter q converges for every positive definite matrix. An optimality criterion for the choice of the parameter q is given, which yields interesting results for the Jacobi, Richardson and Gauss-Seidel methods. The Gauss-Seidel method with the parameter q is, in a sense, equivalent to the SOR method, and the formula for the optimal value of q yields a formula for the optimal value of ω. Until now, this formula was known only in special cases. A practically useful approximate formula for the optimal value of ω is also given. The influence of the parameter q on the speed of convergence of the simple iterative methods is shown in a numerical example. Numerical experiments confirm that, for very large scale systems, the speed of convergence of the SOR method with the optimal or approximate parameter ω is nearly the same as (in some cases better than) that of the conjugate gradient method.


Introduction
The solution of linear systems is a fundamental problem in numerical analysis, especially if the system is of very large scale. Such systems appear when solving partial differential equations, nonlinear systems or optimization problems. The literature in this area is very rich; for example, Koutis et al. [1] present a very interesting fast algorithm for solving linear systems using techniques of graph theory, but their method can be used only for symmetric diagonally dominant matrices. In solving linear systems, preconditioners play a very important role; owing to preconditioning, the algorithm proposed by Boman and Scott [2] for elliptic finite element systems is near-linear. Hogg and Scott [3] apply a direct method to obtain an approximate solution and then use an iterative method to improve its accuracy. Here, we return to the classical simple iterative methods: the Jacobi method, the Richardson method, the Gauss-Seidel method and the Successive Over-Relaxation (SOR) method. The theoretical basis of the simple iterative methods can be found in [4][5][6][7]. Broyden [8] gives a theorem about the convergence of the SOR method. There are many results concerning the optimal parameter for the SOR method, but up to now the formula for the optimal parameter has been known only in special cases. For example, in the papers [6,9,10] one can find an exact formula for the optimal parameter for some discretizations of the Poisson equation. Here, we give such a formula for every symmetric and positive definite matrix. In the papers [11][12][13] the authors use an approximate value of the parameter ω. Nevanlinna [14] explains why the SOR and conjugate gradient methods are essentially equally fast for discretized Laplacians. Additionally, Woźniakowski [15] has proved that, under some assumptions, the Jacobi, Gauss-Seidel and SOR methods are numerically stable.
Let us consider the linear system Ax = b, where A is an n × n positive definite matrix and x, b ∈ R^n. Let A be divided into two parts, i.e. A = M + N, where M is a nonsingular matrix and a linear system Mz = c can be solved in at most O(n) arithmetical operations. The simple iterative method is defined as follows:

Mx_{k+1} = b − Nx_k,  k = 0, 1, 2, …  (2)

Let A = L + D + U, where D is a diagonal matrix and L, U are lower and upper triangular matrices, respectively. If M = D, we obtain the Jacobi method; if M = L + D, we obtain the Gauss-Seidel method; if M = L + (1/ω)D, ω ∈ (0, 2), we obtain the SOR method. Here, we introduce a parameter q into the formula (2) in the following way:

(M + qI)x_{k+1} = (qI − N)x_k + b,  k = 0, 1, 2, …  (3)

where I is the unit matrix. In the analysis of the convergence, the main role is played by the matrix B_q = (M + qI)^{-1}(qI − N) or, more precisely, by its spectral radius ρ(B_q). A sufficient condition for the convergence is ρ(B_q) < 1. We can consider the following cases: a. M is a multiple of the unit matrix; in this case we obtain the Richardson method; b. M = D. In this case we obtain the Jacobi method with the parameter q and, for a sufficiently large q, the method is convergent not only for a diagonally dominant matrix, but for every positive definite matrix; c. M = L + D. If q is appropriately chosen, then the effectiveness of the Gauss-Seidel method with q is equivalent to that of the SOR method; d. M is a block-diagonal matrix; e. M is a tri-diagonal matrix.

Why is q important in the simple iterative methods? For a sufficiently large q, the matrix M + qI is positive definite and even diagonally dominant. In addition, the condition number plays an important role in the analysis of the speed of convergence of some of the iterative methods. In Section 2, one can find the theorem regarding the convergence and the speed of convergence of the sequence (3). The optimality criterion for the choice of the parameter q is given. In Section 3, the residual method with q is analyzed and the minimal spectral radius is calculated. The results concerning the Gauss-Seidel and SOR methods can be found in Section 4, where formulas for the optimal values of q and ω are given. In Section 5, the results of some numerical experiments are presented; this Section shows how the number of iterations depends on the parameter q. Additionally, a comparison of the speed of convergence for some of the iterative methods is made.
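As a concrete illustration of the shifted iteration in the Jacobi case M = D, the following NumPy sketch uses an illustrative symmetric positive definite matrix that is not diagonally dominant; the matrix, tolerance and iteration limit are our own assumptions, not the paper's example:

```python
import numpy as np

def jacobi_q(A, b, q, iters=500, tol=1e-10):
    """Jacobi method with parameter q: (D + q I) x_{k+1} = (q I - L - U) x_k + b."""
    D = np.diag(np.diag(A))
    N = A - D                        # off-diagonal part L + U
    M = D + q * np.eye(len(b))       # shifted diagonal part, cheap to invert
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = np.linalg.solve(M, q * x - N @ x + b)
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

# Positive definite but not diagonally dominant: plain Jacobi (q = 0) has
# spectral radius > 1 here and diverges, while the shift q = 3 restores convergence.
A = np.array([[3.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 1.0, 1.0])
x = jacobi_q(A, b, q=3.0)   # converges to the solution [0.25, 0, 0.25]
```

For this matrix the spectral radius of the plain Jacobi iteration matrix is about 1.12, while with q = 3 the spectral radius of B_q drops below 1, so the shifted iteration converges.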
Using (7) and (15), if we assume a_{ii} = c for i = 1, 2, …, n, then the Jacobi method with the parameter q is equivalent to the Richardson method, and both methods give the same results.
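This equivalence is easy to check numerically. In the sketch below (an illustrative matrix with constant diagonal c = 2; all names are our own), the Jacobi iteration with parameter q and the Richardson iteration with step 1/(c + q) produce the same iterates:

```python
import numpy as np

def richardson(A, b, alpha, iters=50):
    """Richardson iteration x_{k+1} = x_k + alpha (b - A x_k)."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = x + alpha * (b - A @ x)
    return x

def jacobi_q(A, b, q, iters=50):
    """Jacobi with parameter q: (D + q I) x_{k+1} = (q I - (A - D)) x_k + b."""
    D = np.diag(np.diag(A))
    N = A - D
    M = D + q * np.eye(len(b))
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = np.linalg.solve(M, q * x - N @ x + b)
    return x

# Constant diagonal a_ii = c = 2: Jacobi-with-q coincides with
# Richardson with step alpha = 1/(c + q).
A = np.array([[2.0, 1.0, 0.5],
              [1.0, 2.0, 1.0],
              [0.5, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
q = 1.5
same = np.allclose(jacobi_q(A, b, q), richardson(A, b, alpha=1.0 / (2.0 + q)))
```

With D = cI, the shifted Jacobi step (cI + qI)x_{k+1} = (qI − (A − cI))x_k + b rearranges exactly to x_{k+1} = x_k + (1/(c + q))(b − Ax_k), which is the Richardson step.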

The Gauss-Seidel Method with the Parameter q
Let A be expressed as the sum of three matrices, A = L + D + U, where D is a diagonal matrix and L, U are lower and upper triangular matrices, respectively. The Gauss-Seidel (GS) method is defined by

(L + D)x_{k+1} = −Ux_k + b,  k = 0, 1, 2, …

The successive over-relaxation (SOR) method is also known as the extrapolated Gauss-Seidel method. In this case

(L + (1/ω)D)x_{k+1} = (((1/ω) − 1)D − U)x_k + b,  ω ∈ (0, 2),  k = 0, 1, 2, …

If the parameter ω is chosen properly, then the SOR method is more effective than the Gauss-Seidel method. Usually, however, we do not know how to choose the optimal value of ω; for example, such a result for a special class of matrices is given in [4]. In contrast, the properties of the Gauss-Seidel method with the parameter q can be analyzed easily, and it is possible to calculate the optimal value of q and the corresponding spectral radius ρ(B_q).
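The SOR update above is usually implemented componentwise, using the already-updated components within each sweep. Here is a minimal sketch (the test matrix and the value of ω are illustrative choices, not taken from the paper); ω = 1 reduces it to the Gauss-Seidel method:

```python
import numpy as np

def sor(A, b, omega, iters=200, tol=1e-10):
    """SOR: (L + (1/omega) D) x_{k+1} = (((1/omega) - 1) D - U) x_k + b,
    written componentwise; omega = 1 gives the Gauss-Seidel method."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # uses the new values x_j for j < i and the old ones for j > i
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

# Symmetric positive definite test matrix (illustrative):
A = np.array([[3.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 1.0, 1.0])
x_gs = sor(A, b, omega=1.0)    # Gauss-Seidel
x_sor = sor(A, b, omega=1.2)   # over-relaxed sweep
```

Both runs converge because the matrix is symmetric positive definite and ω ∈ (0, 2); for this particular matrix the over-relaxed sweep has a noticeably smaller spectral radius than plain Gauss-Seidel.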
Let A be a symmetric and positive definite matrix. Then the Gauss-Seidel method with the parameter q has the form

(L + D + qI)x_{k+1} = (qI − U)x_k + b,  k = 0, 1, 2, …

Of course, the Gauss-Seidel method is convergent if q = 0. We next use criterion (16). Usually, in the analysis of the SOR method (for example [8]), it is assumed that a_{ii} = 1 for i = 1, 2, …, n; here we assume, more generally, that a_{ii} = c for i = 1, 2, …, n. In this case, as the first preconditioner, we propose the matrix D and solve the system D^{-1}Ax = D^{-1}b or, in the symmetric case, we propose D^{-1/2} and solve the system D^{-1/2}AD^{-1/2}y = D^{-1/2}b with x = D^{-1/2}y.

Remark 5. Usually, we do not know λ_min and λ_max of the matrix A. In such cases, it is safer to choose q > q_opt than q < q_opt. For example, the norm ‖A‖_∞ can be used rather than λ_max, and a lower bound m can be used rather than λ_min.
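A direct sketch of this shifted Gauss-Seidel iteration follows; the matrix is illustrative and the value of q is hand-picked, not the optimal value given by the paper's formula:

```python
import numpy as np

def gauss_seidel_q(A, b, q, iters=300, tol=1e-10):
    """Gauss-Seidel with parameter q: (L + D + q I) x_{k+1} = (q I - U) x_k + b."""
    n = len(b)
    M = np.tril(A) + q * np.eye(n)   # L + D + q I: lower triangular, cheap to solve
    U = np.triu(A, 1)
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, q * x - U @ x + b)
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

# Symmetric positive definite test matrix (illustrative):
A = np.array([[3.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 1.0, 1.0])
x0 = gauss_seidel_q(A, b, q=0.0)   # plain Gauss-Seidel
x1 = gauss_seidel_q(A, b, q=0.5)   # shifted variant
```

Setting q = 0 recovers the ordinary Gauss-Seidel method; the shift only changes the splitting, so both variants converge to the same solution for this matrix.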
Remark 6. The parameter q may be changed at every iteration. If the convergence turns out to be too slow, then we should increase the parameter q. It is also possible to apply certain minimization techniques to minimize the last expression.
At the end of this Section, we propose the following theorem regarding the SOR method. Next, we observe that the rate of convergence of the sequence x_k is independent of the form of the matrix D; as a result, D would usually be a diagonal matrix and its elements could be different.

Numerical Example
Let n ≥ 2 and m ≤ n − 1 be given. Here, we take the system Ax = b into consideration, where A = (a_{ij}) is an n × n matrix with a_{ii} = 2 for i = 1, 2, …, n, and b_i = 1 for i = 1, 2, …, n. At every iteration, we use the same stopping criterion, based on the residual r_k = b − Ax_k. In Table 1, we can see how the number of iterations depends on the parameter q. These results were computed for n = 1000, m = 30 and for the Richardson method. In this case, the Jacobi method is divergent and, because a_{ii} = 2 for i = 1, 2, …, n, the Jacobi method with the parameter q gives the same results as the Richardson method in which q is chosen appropriately. Table 2 compares, among others, the following methods: d) TDM, the tri-diagonal method with optimal q; e) GS, the Gauss-Seidel method (q = 0); f) GS1, the Gauss-Seidel method with optimal q; g) GS2, the Gauss-Seidel method with q = 0.5, where this value of q is an approximation of q_opt for the Gauss-Seidel method (see Remark 5); because a_{ii} = 2 for i = 1, 2, …, n, this algorithm is equivalent to the new variant of the Gauss-Seidel method (Remark 8); h) CG, the conjugate gradient method.
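The character of the dependence of the iteration count on q is easy to reproduce on a small illustrative system (the matrix below is our own construction, not the paper's test matrix): plain Jacobi (q = 0) diverges, a well-chosen moderate q converges quickly, and an unnecessarily large q still converges but more slowly:

```python
import numpy as np

def jacobi_q_iterations(A, b, q, tol=1e-8, max_iters=5000):
    """Number of Jacobi-with-q iterations until ||b - A x_k|| < tol (inf if none)."""
    D = np.diag(np.diag(A))
    N = A - D
    M = D + q * np.eye(len(b))
    x = np.zeros_like(b, dtype=float)
    for k in range(1, max_iters + 1):
        x = np.linalg.solve(M, q * x - N @ x + b)
        if not np.all(np.isfinite(x)):   # blown up: the iteration diverges
            return float("inf")
        if np.linalg.norm(b - A @ x) < tol:
            return k
    return float("inf")

# Positive definite with a_ii = 2, but far from diagonally dominant:
n = 50
s = 0.1
A = 2.0 * np.eye(n) + s * (np.ones((n, n)) - np.eye(n))
b = np.arange(1.0, n + 1.0)

it_div = jacobi_q_iterations(A, b, q=0.0)    # plain Jacobi: diverges
it_opt = jacobi_q_iterations(A, b, q=2.4)    # near-optimal shift for this matrix
it_big = jacobi_q_iterations(A, b, q=10.0)   # safe but oversized shift
```

For this matrix the eigenvalues of A are 1.9 and 6.9, so q ≈ 2.4 balances the extreme eigenvalues of the iteration matrix, while q = 10 is safely convergent but needs noticeably more iterations, mirroring the behaviour reported in Table 1.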

Conclusions
A new approach to the simple iterative methods has been proposed. Owing to the introduction of a new parameter q, the Jacobi, Richardson and Gauss-Seidel methods are convergent for every linear system with a positive definite matrix. An optimality criterion for the parameter q is given, and thus interesting results for the Jacobi, Richardson and Gauss-Seidel methods are obtained. The Gauss-Seidel method with the parameter q is, in a sense, equivalent to the SOR method. From the formula for the optimal value of q, a formula for the optimal value of ω follows. Until now, this formula was known only in special cases. A practically useful approximate formula for the optimal value of ω is also given. Numerical experiments confirm that, for very large scale systems, the speed of convergence of the SOR method with the optimal or approximate parameter ω is nearly the same as that of the conjugate gradient method.