Pure and Applied Mathematics Journal
Volume 5, Issue 4, August 2016, Pages: 113-119

The QR Method for Determining All Eigenvalues of Real Square Matrices

Eyaya Fekadie Anley

College of Natural Science, Department of Mathematics, Arba Minch University, Arba Minch, Ethiopia

To cite this article:

Eyaya Fekadie Anley. The QR Method for Determining All Eigenvalues of Real Square Matrices. Pure and Applied Mathematics Journal. Vol. 5, No. 4, 2016, pp. 113-119. doi: 10.11648/j.pamj.20160504.15

Received: November 23, 2015; Accepted: December 3, 2015; Published: July 23, 2016


Abstract: Eigenvalues are special sets of scalars associated with a given matrix. In other words, for a given matrix A, if there exists a non-zero vector V such that AV = λV for some scalar λ, then λ is called an eigenvalue of the matrix A with corresponding eigenvector V. The set of all n×m matrices over a field F is denoted by M_{n×m}(F). If m = n, the matrices are square, and the set is denoted by M_n(F). When the field is F = C we omit it and simply write M_{n×m} or M_n as appropriate. Each square matrix A ∈ M_n has a value in R associated with it, called its determinant, which is useful for solving systems of linear equations and is denoted by det(A). Consider a square matrix A ∈ M_n with eigenvalue λ; by definition the eigenvectors of A satisfy the equation AV = λV, where V = (v_1, v_2, ..., v_n)^T. That is, AV = λV is equivalent to the homogeneous system of linear equations (A − λI)V = 0. From Cramer's rule we know that a homogeneous linear system has a non-trivial solution if and only if its determinant is zero, so the eigenvalues λ are the solutions of det(A − λI) = 0. This is called the characteristic equation of the matrix A, and the left-hand side of the characteristic equation is the characteristic polynomial, whose roots are the eigenvalues λ.

Keywords: QR Method, Real Matrix, Eigenvalue


1. Introduction

Each n×n square matrix A has a value associated with it, its determinant, denoted det(A), which is useful for solving systems of linear equations. Moreover, if there exist a non-zero vector V and a scalar λ such that AV = λV, then λ is called an eigenvalue of the matrix A with corresponding eigenvector V. It is a basic fact that AV = λV is equivalent to (A − λI)V = 0, where I is the identity matrix; then, by Cramer's rule, this linear system has a non-trivial solution if and only if det(A − λI) = 0, so the eigenvalues λ can be computed from this equation.

Applications of eigenvalue problems play a great role in the real world. One class of applications which has recently gained considerable ground is that related to the eigenvalue problems of a matrix; the most commonly solved eigenvalue problems today are those associated with the vibration analysis of large structures. For a small structure with three degrees of freedom, for example, the vibrational frequencies are determined by the eigenvalues of a symmetric 3×3 matrix; this is an instance of a simple eigenvalue problem that is common in practice. However, since computing the eigenvalues is equivalent to finding the roots of the matrix's characteristic polynomial, the task quickly becomes too difficult for matrices of larger dimension (especially matrices of dimension greater than three), even when the characteristic polynomial is known, and root-finding algorithms such as Newton's method cannot be depended upon to produce all the zeros with reasonable speed and accuracy. Fortunately, numerical analysts have found entirely different ways to calculate the eigenvalues of a given square matrix. Among these methods the QR method is the most widely used, important, accurate and fast. The primary reason that modern implementations of this method are efficient and reliable is that a QR factorization is used to create each new matrix in the sequence: each QR factorization can be calculated quickly and accurately; it easily yields a new matrix orthogonally similar to the original matrix; and orthogonal similarities tend to minimize the effect of round-off error on the eigenvalues.
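To make the contrast concrete, the following short sketch (ours, not from the paper; it assumes NumPy, and the matrix values are purely illustrative) compares the naive characteristic-polynomial route with a QR-based eigensolver on a small symmetric matrix of the kind that arises in vibration problems:

```python
import numpy as np

# A small symmetric matrix standing in for a 3-degree-of-freedom vibration
# problem (illustrative values only).
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# Naive route: roots of the characteristic polynomial det(A - x*I) = 0.
coeffs = np.poly(A)                      # characteristic polynomial coefficients
eig_by_roots = np.sort(np.roots(coeffs).real)

# QR-based route: LAPACK's symmetric eigensolver, as used in practice.
eig_by_qr = np.sort(np.linalg.eigvalsh(A))

print(eig_by_roots)   # ~ [0.5858, 2.0, 3.4142], i.e. 2 - sqrt(2), 2, 2 + sqrt(2)
print(eig_by_qr)      # same values, computed without forming the polynomial
```

For a 3×3 matrix both routes agree to machine precision; the point of the discussion above is that the polynomial route degrades rapidly as the dimension grows, while the QR route does not.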

2. The QR Method

2.1. Householder Transformation

There is no simple way to calculate the eigenvalues of a matrix larger than 2×2. The method of calculating the characteristic polynomial and then finding its zeros is not good numerically; moreover, obtaining the characteristic polynomial involves evaluating a determinant, which uses a large amount of computing time. The QR method is one of the most important methods used to find the eigenvalues of a real square matrix. Therefore, the main idea of this chapter is to determine all eigenvalues of a real square matrix by using the QR factorization, A = QR, where Q is orthogonal and R is upper triangular.

To this end, suppose a real square matrix A is given. Let A = Q0R0 be a QR factorization of A, and create A1 = R0Q0. Let A1 = Q1R1 be a QR factorization of A1, and similarly create A2 = R1Q1. Continue this process in the same fashion: once Am has been created, factor Am = QmRm and set Am+1 = RmQm. The sequence {Am} will usually converge to something from which the eigenvalues can be computed easily. Moreover, A1 is similar to A, Am+1 is similar to Am, and so on; hence Am and Am+1 have the same eigenvalues as the matrix A, by the definition of similar matrices. The process is stopped when the entries below the main diagonal of the current matrix Am are sufficiently small, or when it appears that convergence will not happen: the QR factorization sequence can fail to converge, or the convergence can be extremely slow and therefore expensive. If the iteration converges to a certain matrix, then the diagonal entries of this current matrix tend to the eigenvalues of A; if not, the iteration can be modified in order to speed up convergence, or to accelerate the rate of convergence dramatically, by methods such as shifting of origin.
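The iteration just described can be stated in a few lines of code. The sketch below (ours; it uses NumPy's built-in QR factorization and an illustrative test matrix) implements the basic method with the stopping criterion described above:

```python
import numpy as np

def qr_method_basic(A, tol=1e-10, max_iter=500):
    """Basic (unshifted) QR iteration: factor A_m = Q_m R_m, set A_{m+1} = R_m Q_m.
    Stops when all entries below the main diagonal are sufficiently small,
    or gives up after max_iter steps if convergence does not happen."""
    Am = np.array(A, dtype=float)
    for _ in range(max_iter):
        Q, R = np.linalg.qr(Am)      # A_m = Q_m R_m
        Am = R @ Q                   # A_{m+1} = R_m Q_m, similar to A_m
        if np.max(np.abs(np.tril(Am, -1))) < tol:
            break
    return np.diag(Am)               # diagonal approximates the eigenvalues

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # exact eigenvalues 5 and 2
print(np.sort(qr_method_basic(A)))   # ~ [2.0, 5.0]
```

Each A_{m+1} = R_m Q_m = Q_m^T A_m Q_m is orthogonally similar to A_m, which is exactly why the diagonal entries of the limit carry the eigenvalues of A.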

The method of orthogonal triangularization (orthogonal transformation) is analogous to Gaussian elimination. Since an orthogonal transformation will not worsen the conditioning or stability of the eigenvalues of a non-symmetric matrix, we will decompose an arbitrary real square matrix A into a product QR, where Q is orthogonal and R is upper triangular.

Now, let us begin our discussion by looking at a special class of orthogonal matrices known as Householder matrices. Let

U = I − 2ww^T,  w^T w = 1   (1)

This is the general form of a Householder matrix.

Example 1: For n = 3, we require w of the form

w = (0, w_2, w_3)^T,  w_2^2 + w_3^2 = 1   (2)

(in general, for a given r, w = (0, ..., 0, w_r, ..., w_n)^T with r − 1 leading zeros and w^T w = 1). The matrix U is given by

U = I − 2ww^T =
[ 1        0              0        ]
[ 0   1 − 2w_2^2    −2w_2 w_3  ]
[ 0   −2w_3 w_2    1 − 2w_3^2  ]

For the particular case w_2 = w_3 = 1/√2, we obtain

U =
[ 1   0    0 ]
[ 0   0   −1 ]
[ 0  −1    0 ]

Theorem 1: Let U be a Householder matrix (U = I − 2ww*, w*w = 1). Then

i).  U is Hermitian (U*=U)

ii).  U is orthogonal (U*U=I).

The Householder matrices will be used to transform a nonzero vector into a new vector containing mainly zeros. Let b ∈ R^n be given, and suppose we want to produce U of the form (1) such that Ub contains zeros in positions r+1 through n, for some given 1 ≤ r ≤ n − 1. Choose w as in (2). Then the first r − 1 components of b and Ub are the same. To simplify the later work, let m = n − r + 1 and partition

b = (c, d)^T,  w = (0, v)^T,

with c = (b_1, ..., b_{r−1})^T, d = (b_r, ..., b_n)^T in R^m, and v = (v_1, ..., v_m)^T in R^m with v^T v = 1.

Then our restriction on the form of Ub requires the first r − 1 components of Ub to be c, and

(I_m − 2vv^T) d = (α, 0, ..., 0)^T   (3)

for some scalar α.

Since I_m − 2vv^T is orthogonal, the length of d is preserved, and thus

α = ±(d_1^2 + d_2^2 + ... + d_m^2)^{1/2}   (4)

Define:

p = v^T d   (5)

From (3) we have

d − 2pv = (α, 0, ..., 0)^T   (6)

Multiplying (6) by v^T and using v^T v = 1 implies that p = −α v_1.

Substituting this into the first component of (6) gives us

v_1 = [ (1 − d_1/α) / 2 ]^{1/2}   (7)

Choose the sign of α in (4) by sign(α) = −sign(d_1). This choice maximizes v_1, and it avoids any possible loss-of-significance error in the calculation of v_1; the overall sign of w is irrelevant, since U depends only on ww^T. Having α and v_1, we obtain p = −α v_1. Returning to equation (6), and using components 2 through m,

v_j = d_j / (2p),  j = 2, 3, ..., m   (8)

Equations (4) and (6) to (8) completely define v, and thus w and U. The operation count is 2m + 2 multiplications and divisions, and one square root. A sequence of such operations is used to systematically reduce a given matrix to simpler form.
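The construction (4)–(8) translates directly into code. The following sketch (ours, written from the reconstructed formulas above, assuming NumPy; it assumes d is not the zero vector) builds v, w and U for a given vector b and index r:

```python
import numpy as np

def householder_vector(d):
    """Build a unit vector v and scalar alpha with (I - 2 v v^T) d = (alpha, 0, ..., 0)^T,
    following (4)-(8); sign(alpha) = -sign(d_1) avoids loss of significance."""
    alpha = -np.copysign(np.linalg.norm(d), d[0])   # eq. (4) with the sign rule
    v = np.zeros_like(d, dtype=float)
    v[0] = np.sqrt(0.5 * (1.0 - d[0] / alpha))      # eq. (7)
    p = -alpha * v[0]                               # p = v^T d
    v[1:] = d[1:] / (2.0 * p)                       # eq. (8)
    return v, alpha

# Zero positions r+1..n of b: w = (0, ..., 0, v) with r-1 leading zeros, as in (2).
b, r = np.array([1.0, 3.0, 4.0]), 2
v, alpha = householder_vector(b[r-1:])
w = np.concatenate([np.zeros(r - 1), v])
U = np.eye(b.size) - 2.0 * np.outer(w, w)
print(U @ b)   # [1., -5., 0.]: first r-1 components unchanged, position r holds alpha
```

Here d = (3, 4)^T, so α = −5 and Ub = (1, −5, 0)^T, in agreement with the construction.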

Now, given a real square matrix A, we will show there is an orthogonal matrix Q and an upper triangular matrix R for which,

A = QR  (9)

Let

U_r = I − 2 w_r w_r^T,  r = 1, 2, ..., n − 1   (10)

with w_r as defined in equation (2), with r − 1 leading zeros. Writing A in terms of its columns, A = [a_1, a_2, ..., a_n], pick U_1 (that is, w_1) using the preceding construction (4) to (8) with b = a_1. Then U_1 A contains zeros below the diagonal in its first column. Choose U_2 similarly, so that U_2 U_1 A will contain zeros in its second column below the diagonal; note that because w_2 contains a zero in position one, and U_1 A is zero in the first column below position one, the products U_2 U_1 A and U_1 A contain the same elements in row one and column one. Thus choose U_2 and w_2 as before in (4) to (8), with b equal to the second column of U_1 A. By carrying on in this way with each column of A, we obtain an upper triangular matrix,

R = U_{n−1} ⋯ U_2 U_1 A   (11)

If at step r of the construction all elements below the diagonal of column r are already zero, then just choose U_r = I and go on to the next step. To complete the construction, define

Q = (U_{n−1} ⋯ U_2 U_1)^T = U_1 U_2 ⋯ U_{n−1}   (12)

This matrix is orthogonal, and then A = QR as desired. It is useful to know to what extent the factorization A = QR is unique. For a non-singular matrix A, suppose

A = Q_1 R_1 = Q_2 R_2   (13)

Then R_1 and R_2 must also be nonsingular, and Q_2^T Q_1 = R_2 R_1^{−1}.

The inverse of an upper triangular matrix is upper triangular, and the product of two upper triangular matrices is upper triangular; thus R_2 R_1^{−1} is upper triangular. Also, the product of two orthogonal matrices is orthogonal, and thus Q_2^T Q_1 is orthogonal. But it is not hard to show that the only matrices that are both upper triangular and orthogonal are diagonal, so that Q_2^T Q_1 = R_2 R_1^{−1} = D for some diagonal matrix D.

Since D is orthogonal, D^T D = D^2 = I.

Since we are dealing only with real matrices, D has diagonal elements equal to +1 or −1. Combining these results, we have

Q_1 = Q_2 D,  R_1 = D R_2   (14)
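A quick numerical illustration of (14) (ours, assuming NumPy): starting from one QR factorization of a nonsingular A, flipping the signs of corresponding columns of Q and rows of R produces another valid factorization, and these are the only alternatives:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))       # almost surely nonsingular

Q1, R1 = np.linalg.qr(A)
D = np.diag([1.0, -1.0, 1.0, -1.0])   # any diagonal of +1/-1 entries
Q2, R2 = Q1 @ D, D @ R1               # Q2 R2 = Q1 D D R1 = A, since D^2 = I

print(np.allclose(Q2 @ R2, A))              # True: still factors A
print(np.allclose(Q2.T @ Q2, np.eye(4)))    # True: Q2 is orthogonal
print(np.allclose(R2, np.triu(R2)))         # True: R2 is upper triangular
```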

Another practical matter is deciding how to evaluate the matrix R of (11). At step r we must form the product

B_{r+1} = U_r B_r,  B_1 = A   (15)

with U_r = I − 2 w_r w_r^T. If we calculate U_r explicitly and then multiply it by B_r to form B_{r+1}, the number of multiplications will be of order n^3 at every step.

There is a much more efficient method for calculating U_r B_r, obtained by rewriting (15) as

U_r B_r = B_r − 2 w_r (w_r^T B_r)   (16)

First calculate v^T = w_r^T B_r, and then calculate B_r − 2 w_r v^T. This takes only about 2n(n − r + 1) multiplications, which shows that (16) is the preferable way to evaluate each B_{r+1}, and finally R = B_n.
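The saving claimed for (16) is easy to check numerically (a sketch of ours, assuming NumPy): applying U implicitly gives the same product as forming U explicitly, while touching only a vector and a rank-one update:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
B = rng.standard_normal((n, n))
w = rng.standard_normal(n)
w /= np.linalg.norm(w)                   # unit Householder vector

# Explicit: form U and multiply (an O(n^3) matrix-matrix product).
U = np.eye(n) - 2.0 * np.outer(w, w)
UB_explicit = U @ B

# Implicit, as in (16): U B = B - 2 w (w^T B), about 2 n^2 multiplications.
v = w @ B                                # the row vector v^T = w^T B
UB_implicit = B - 2.0 * np.outer(w, v)

print(np.allclose(UB_explicit, UB_implicit))   # True
```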

2.2. Reduction of a Matrix to Tridiagonal and Hessenberg Matrix Form

Let A be a real square matrix. To find the eigenvalues of A, it is usually first reduced to tridiagonal form (if the given matrix is symmetric) or to upper Hessenberg form by orthogonal similarity transformations (since similar matrices have the same eigenvalues). Then, the eigenvalues of the reduced matrix can be calculated more efficiently by the QR algorithm.

Suppose A is a real symmetric matrix. For the orthogonal matrices, we will use the Householder matrices introduced above. Let

U_r = I − 2 w_r w_r^T,  r = 1, 2, ..., n − 2   (17)

with w_r of the form (2), but now with r leading zeros, w_r = (0, ..., 0, w_{r+1}, ..., w_n)^T.

The matrix U_1 A U_1 is similar to A: the element a_11 is unchanged, and U_1 A U_1 will be symmetric. Produce U_1 (that is, w_1) so that U_1 A has zeros in positions 3 through n of its first column; the relevant vector b is the first column of A, and we use (4) to (8) with m = n − 1 and d = (a_21, a_31, ..., a_n1)^T.

For example, from (17) with r = 1 we have U_1 = I − 2 w_1 w_1^T, where w_1 = (0, w_2, ..., w_n)^T.

Having obtained w_1 and U_1 A, post-multiplication by U_1 will not change the first column of U_1 A, since the first column of U_1 is the first unit vector. The symmetry of U_1 A U_1 follows from (U_1 A U_1)^T = U_1^T A^T U_1^T = U_1 A U_1.

Since U_1 A U_1 is symmetric, the construction on the first column of A implies that U_1 A U_1 has zeros in positions 3 through n of both its first row and its first column. Continue this process, letting

A_1 = A,  A_{r+1} = U_r A_r U_r,  r = 1, 2, ..., n − 2   (18)

Pick w_r (and thus U_r) to introduce zeros into positions r + 2 through n of column r of A_r. Columns one through r − 1 will remain undisturbed in calculating A_{r+1}, due to the special form of w_r.

Pick the vector w_r in analogy with the above description for w_1. The final matrix

T = A_{n−1}   (19)

is tridiagonal and symmetric, with diagonal elements α_1, ..., α_n and off-diagonal elements β_1, ..., β_{n−1}.

If A is non-symmetric, it is reduced to a similar Hessenberg matrix using the same algorithm. Reduction to symmetric tridiagonal form requires about (2/3)n^3 multiplications, whereas the transformation to Hessenberg form in the non-symmetric case requires about (5/3)n^3 multiplications.

This is a much more convenient form for the calculation of the eigenvalues of A, and the eigenvectors of A can easily be obtained from those of T.

The matrix A is related to T by T = Q^T A Q, where

Q = U_1 U_2 ⋯ U_{n−2}   (20)

As before with the QR factorization, we seldom produce Q explicitly, preferring to work with the individual Householder matrices, in analogy with (16). For an eigenvector x of A, say Ax = λx, we have

T(Q^T x) = Q^T A Q Q^T x = Q^T A x = λ (Q^T x)   (21)

so Q^T x is an eigenvector of T; conversely, if y is an eigenvector of T, then Qy is an eigenvector of A. If we produce an orthogonal set of eigenvectors for T, then their images under Q will be an orthogonal set of eigenvectors for A, since Q preserves lengths and angles.
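The whole subsection can be exercised with a few lines (ours; it assumes SciPy, whose scipy.linalg.hessenberg performs exactly this Householder similarity reduction and, for symmetric input, returns a tridiagonal matrix):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M + M.T                              # a real symmetric test matrix

T, Q = hessenberg(A, calc_q=True)        # A = Q T Q^T, i.e. T = Q^T A Q as in (20)
band = np.triu(np.tril(T, 1), -1)        # keep only the three central diagonals
print(np.allclose(T, band))              # True: T is tridiagonal since A = A^T

# Eigenvector transfer as in (21): if T y = lambda y, then A (Q y) = lambda (Q y).
lam, Y = np.linalg.eigh(T)
X = Q @ Y                                # orthonormal eigenvectors of A
print(np.allclose(A @ X, X * lam))       # True
```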

Example 2: Use Householder's method to reduce a given 4×4 symmetric matrix A to tridiagonal form.

Solution: First transformation: using equations (4) and (7) we obtain α and v_1, which in turn give w_1 and U_1; then, by the formula A_2 = U_1 A_1 U_1, zeros are introduced into positions 3 through n of the first row and column. Second transformation: choosing w_2 analogously, we form A_3 = U_2 A_2 U_2; now A_3 is in tridiagonal form.

For an error analysis of this reduction to tridiagonal form, we give some results below. Let the computer arithmetic be binary floating-point with rounding, with t binary digits in the mantissa. Furthermore, assume that all inner products that occur in the calculations are accumulated in double precision, with rounding to single precision at the completion of the summation; these inner products occur in a variety of places in the computation of T from A. Let T̂ denote the actual symmetric tridiagonal matrix that is computed from A using this computer arithmetic, let Û_r denote the actual matrix produced in converting A_r to A_{r+1}, let U_r be the theoretical exact version of this matrix if no rounding errors occurred, and let Q̂ be the exact product of these matrices, an orthogonal matrix.

Let A be a real symmetric matrix of order n, and let T̂ be the symmetric tridiagonal matrix resulting from applying the Householder similarity transformations (17) to A, as in (18). Assume the floating-point arithmetic used has the characteristics described in the preceding paragraph. Let λ_1 ≤ λ_2 ≤ ... ≤ λ_n and λ̂_1 ≤ λ̂_2 ≤ ... ≤ λ̂_n be the eigenvalues of A and T̂, respectively, arranged in increasing order. Then the errors satisfy a bound of the form

max_i |λ_i − λ̂_i| ≤ c(n) 2^{−t} ‖A‖_F,

with c(n) a modest function of n.

For small and moderate values of n this bound is quite small, and the result above shows that the reduction to tridiagonal form is an extremely stable operation, with little new error introduced into the eigenvalues.

2.3. The QR Algorithm

Nowadays, the QR method is the most efficient and widely used general method for the calculation of all eigenvalues of a matrix. The QR method is quite complex in both its theory and application, and hence this paper is able to give only an introduction to the theory and the method.

Given a real square matrix A, we know from the previous discussion that there is a factorization

A = QR

with R upper triangular and Q orthogonal. With A real, Q and R can be chosen real, and their construction is as we have discussed before. We assume throughout this discussion that A is real. The basic QR algorithm is given as follows.

QR algorithm: Suppose a real square matrix A = A_0 is given. Let A_0 = Q_0 R_0 be a QR factorization of A, and create A_1 = R_0 Q_0. Let A_1 = Q_1 R_1 be a QR factorization of A_1 and create A_2 = R_1 Q_1. Continue this process: for m = 0, 1, 2, ..., once A_m has been created, factor and recombine as

A_m = Q_m R_m,  A_{m+1} = R_m Q_m   (22)

Theorem 2: The matrices A_m, Q_m, R_m of (22), together with P_m = Q_0 Q_1 ⋯ Q_m and S_m = R_m ⋯ R_1 R_0, satisfy:

a)  A_{m+1} is orthogonally similar to A_m, i.e. A_{m+1} = Q_m^T A_m Q_m.

b)  A_{m+1} = P_m^T A P_m.

c)  A^{m+1} = P_m S_m.
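These identities are easy to verify numerically; the sketch below (ours, assuming NumPy, and using the matrix of Example 3 below) runs five QR steps and checks parts (b) and (c):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
Am, P, S = A.copy(), np.eye(3), np.eye(3)
for m in range(5):                        # m = 0, 1, ..., 4
    Q, R = np.linalg.qr(Am)
    P, S = P @ Q, R @ S                   # P_m = Q_0 ... Q_m, S_m = R_m ... R_0
    Am = R @ Q                            # (a): A_{m+1} = Q_m^T A_m Q_m

print(np.allclose(Am, P.T @ A @ P))                       # (b): A_5 = P_4^T A P_4
print(np.allclose(P @ S, np.linalg.matrix_power(A, 5)))   # (c): A^5 = P_4 S_4
```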

Example 3: Let

A =
[ 2  1  0 ]
[ 1  3  1 ]
[ 0  1  4 ]   (23)

Solution: The exact eigenvalues are λ_1 = 3 + √3 ≈ 4.7321, λ_2 = 3 and λ_3 = 3 − √3 ≈ 1.2679.

The iterates A_m do not converge rapidly, and only a few iterates need be examined to indicate the qualitative behavior of the convergence: the diagonal entries of the current matrix are approximations of the eigenvalues of the given matrix.

The elements in the row-1, column-2 position of the above iterates decrease geometrically with a ratio of about 0.64 per iterate, and those in the row-2, column-3 position decrease with a ratio of about 0.42 per iterate; these ratios are approximately λ_2/λ_1 and λ_3/λ_2. The value in the row-3, column-3 position of the later iterates is 1.2679, which is correct to four decimal places.

The QR algorithm (22) can be relatively expensive, because the QR factorization is time-consuming when repeated many times. To decrease the expense, the matrix is prepared for the QR method by reducing it to a simpler form, one for which the QR factorization is much less expensive. If A is symmetric, it is reduced to a similar tridiagonal matrix; if A is non-symmetric, it is reduced to a similar Hessenberg matrix, which is upper triangular except for a single non-zero subdiagonal. The reduction to Hessenberg form uses the same algorithm as the reduction of symmetric matrices to tridiagonal form, exactly as described in Section 2.2. With A tridiagonal or Hessenberg, the Householder matrices take a simple form when calculating the QR factorization. Having produced A_1 = Q_1 R_1 and A_2 = R_1 Q_1, we have to know that the form of A_2 is the same as that of A_1 in order to continue using the less expensive form of the QR factorization.
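The behavior described in Example 3 can be reproduced directly (a sketch of ours, assuming NumPy and the matrix as reconstructed in (23)):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
Am = A.copy()
sub_prev = None
for m in range(1, 16):
    Q, R = np.linalg.qr(Am)
    Am = R @ Q
    sub = np.abs(np.diag(Am, -1))        # magnitudes of the two subdiagonal entries
    if sub_prev is not None and m % 5 == 0:
        print(m, np.round(np.diag(Am), 5),
              "decay ratios:", np.round(sub / sub_prev, 3))
    sub_prev = sub

# The printed ratios settle near 0.63 and 0.42 (i.e. lambda_2/lambda_1 and
# lambda_3/lambda_2), and the (3,3) entry approaches 3 - sqrt(3) = 1.26795.
```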

Suppose A_1 is in Hessenberg form. Then, from the discussion in Section 2.1, the factorization A_1 = Q_1 R_1 has the following value for Q_1:

Q_1 = H_1 H_2 ⋯ H_{n−1}   (24)

with each H_k a Householder matrix of the form (1),

H_k = I − 2 w^{(k)} (w^{(k)})^T   (25)

Because the matrix A_1 is of Hessenberg form, the vector w^{(k)} can be shown to have the special form

w^{(k)} = (0, ..., 0, w_k^{(k)}, w_{k+1}^{(k)}, 0, ..., 0)^T   (26)

with only components k and k + 1 non-zero.

This can be shown from the equations for the components of w^{(k)}, in particular (8). From (26), the matrix H_k will differ from the identity in only the four elements in positions (k, k), (k, k+1), (k+1, k) and (k+1, k+1), and from this it is a fairly straightforward computation to show that each product R_1 H_1 ⋯ H_k must be Hessenberg in form. Another necessary lemma is that the product of a triangular matrix and a Hessenberg matrix is again Hessenberg; just multiply the two forms of matrices, observing the respective patterns of zeros, in order to prove this lemma. Combining these results, and observing that R_1 is upper triangular, we have that A_2 = R_1 Q_1 must be in Hessenberg form. If A_1 is symmetric and tridiagonal, then it is trivially Hessenberg, and from the preceding result A_2 must also be Hessenberg. But A_2 is symmetric, since A_2 = Q_1^T A_1 Q_1, and since any symmetric Hessenberg matrix is tridiagonal, we have shown that A_2 is tridiagonal.
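The invariance of the Hessenberg form under one QR step can also be checked directly (ours, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
A1 = np.triu(rng.standard_normal((6, 6)), -1)   # a random upper Hessenberg matrix
Q1, R1 = np.linalg.qr(A1)
A2 = R1 @ Q1                                    # one QR step

print(np.allclose(A2, np.triu(A2, -1)))         # True: A2 is again Hessenberg
```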

Theorem 3: Let A be a real matrix of order n, and let its eigenvalues satisfy

|λ_1| > |λ_2| > ⋯ > |λ_n| > 0   (27)

Then the iterates A_m of the QR algorithm, defined in (22), will converge to an upper triangular matrix that contains the eigenvalues λ_1, ..., λ_n in the diagonal positions. If A is symmetric, then the sequence {A_m} converges to a diagonal matrix.

2.3.1. The QR Algorithm with Shift/Improvement of Convergence

The basic QR algorithm is too slow to be competitive, especially if some of the eigenvalues are close together, but two refinements will accelerate the convergence dramatically. First, we perform a preliminary reduction of the matrix to tridiagonal or upper Hessenberg form, which decreases the cost of each subsequent QR iteration. Second, we use shifts of origin to reduce the total number of iterations before convergence. In other words, the QR algorithm is generally applied with a shift of origin for the eigenvalues in order to increase the speed of convergence.

During the QR iteration on the matrix A_m, where m is the number of the current iteration step, the diagonal entries approach the eigenvalues λ_1, ..., λ_n, assuming we meet the convergence criteria. Using the basic method, the rate of convergence is linear, with ratios |λ_{i+1}/λ_i|. Observe that if a shift s is sufficiently close to λ_n, then we create a substantially smaller ratio |(λ_n − s)/(λ_{n−1} − s)|, and the rate of convergence will be better than linear, because λ_n − s tends to zero as the shifts are refined.

Now, let the sequence of constants s_0, s_1, s_2, ... be the shifts, and define

A_m − s_m I = Q_m R_m,  A_{m+1} = R_m Q_m + s_m I,  m = 0, 1, 2, ...

The matrices A_{m+1} are similar to A_m, since

A_{m+1} = R_m Q_m + s_m I = Q_m^T (A_m − s_m I) Q_m + s_m I = Q_m^T A_m Q_m   (28)

Thus A_{m+1} is similar to A_m by the definition of similar matrices; therefore the eigenvalues of A_{m+1} are the same as those of A_m, and hence the same as those of A. To be more specific about the choice of the shifts s_m, we will consider only a symmetric tridiagonal matrix A.

For A_m symmetric tridiagonal, with diagonal elements α_1^{(m)}, ..., α_n^{(m)} and off-diagonal elements β_1^{(m)}, ..., β_{n−1}^{(m)}, let the shift s_m be the eigenvalue of the trailing 2×2 principal submatrix

[ α_{n−1}^{(m)}   β_{n−1}^{(m)} ]
[ β_{n−1}^{(m)}   α_n^{(m)}     ]   (29)

that is closest to α_n^{(m)}.

As the iteration converges, it can be shown that one of the coefficients β_{n−1}^{(m)} or β_{n−2}^{(m)} converges to zero extremely rapidly. If it is β_{n−1}^{(m)}, then α_n^{(m)} converges to an eigenvalue of A; when β_{n−1}^{(m)} is sufficiently small, we accept α_n^{(m)} as an eigenvalue, delete the last row and column of A_m, and continue the same process with the new, smaller matrix. When it is β_{n−2}^{(m)} that converges to zero, we end up with a 2×2 block whose two eigenvalues are accepted as eigenvalues of A, and we proceed in much the same manner as before with the deflated matrix.
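A compact sketch of shifted QR with deflation (ours, assuming NumPy; for simplicity it uses the last diagonal entry as the shift s_m, a Rayleigh-quotient shift, rather than the 2×2 eigenvalue shift of (29), which is the more robust variant):

```python
import numpy as np

def qr_shifted_eigvals(A, tol=1e-12, max_iter=200):
    """Shifted QR iteration (28) with deflation, for a symmetric matrix."""
    T = np.array(A, dtype=float)
    eigs = []
    while T.shape[0] > 1:
        n = T.shape[0]
        for _ in range(max_iter):
            s = T[n-1, n-1]                        # shift s_m
            Q, R = np.linalg.qr(T - s * np.eye(n))
            T = R @ Q + s * np.eye(n)              # similar to T, as in (28)
            if abs(T[n-1, n-2]) < tol * (abs(T[n-2, n-2]) + abs(T[n-1, n-1])):
                break
        eigs.append(T[n-1, n-1])                   # beta_{n-1} ~ 0: accept, deflate
        T = T[:n-1, :n-1]
    eigs.append(T[0, 0])
    return np.sort(np.array(eigs))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
print(qr_shifted_eigvals(A))   # ~ [1.26795, 3.0, 4.73205], each after very few steps
```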

Example 4: Let us now compare the computed eigenvalues of a symmetric matrix A with and without the shift improvement.

The exact eigenvalues, correct to five decimal places, are

λ_1 = 8.33816,  λ_2 = 1.95205  and  λ_3 = −1.29021.

In the first four iterations of the method applied to A with the shift acceleration, the off-diagonal element β_2^{(m)} converges to zero extremely rapidly and dramatically, while the element β_1^{(m)} converges to zero only geometrically. Moreover, since the off-diagonal elements approach zero, the eigenvalues of A are approached by the diagonal entries of A_m.

In contrast, without the shift improvement of convergence, the off-diagonal elements of A_m approach zero only slowly, and the diagonal elements of A_m approach the eigenvalues of A correspondingly slowly.

Now, let us take the diagonal entries of a fixed iterate A_m as the approximate eigenvalues in both cases (computed with and without the shift improvement) and compare them with the exact eigenvalues of the same square matrix A.

Table 1. Error analysis of eigenvalues with and without shift.

Exact eigenvalues | Approximate eigenvalues without shift | Approximate eigenvalues with shift | Error without shift | Error with shift
8.33816  | 8.33213  | 8.33418  | 0.00603 | 0.00398
1.95205  | 1.75703  | 1.95205  | 0.19502 | 0.00000
−1.29021 | −1.09512 | −1.28623 | 0.19509 | 0.00398

As shown in Table 1, the errors with the shift are smaller than those without it. It is possible to improve the eigenvalues still further by applying additional shifting steps, and the accumulated round-off error also decreases when shifts are used. This shows that the shifting technique is very important in the QR method: it reduces the total amount of work dramatically, and the QR method by itself, without this improvement, is too slow and too expensive to be practical.

2.3.2. The Double QR Algorithm

Now, let us consider the important special case of a real square matrix with complex eigenvalues. In order to realize the faster rate of convergence of the general QR method (using shifts) on such matrices, complex shifts must be employed at some stage. After such a stage the matrices will contain complex elements, with all the extra work involved in treating complex matrices (double the storage, and at least four times the work). There is thus a strong incentive for seeking an algorithm which produces a real matrix from which the complex eigenvalues can be calculated easily, as conjugate roots of 2×2 real principal submatrices.

Consider two steps of the shifted QR transformation (28) of a general real matrix A_m. We have

A_m − s_m I = Q_m R_m,  A_{m+1} = R_m Q_m + s_m I,
A_{m+1} − s_{m+1} I = Q_{m+1} R_{m+1},  A_{m+2} = R_{m+1} Q_{m+1} + s_{m+1} I   (30)

Thus, setting Q = Q_m Q_{m+1} and R = R_{m+1} R_m, the formulas above say that QR = (A_m − s_m I)(A_m − s_{m+1} I) ≡ G, where, by uniqueness, QR is the QR factorization of the non-singular matrix G. It is now clear what should be done. Suppose that s_m is a complex shift and that s_{m+1} is forced to be its conjugate, s̄_m. Then G = A_m^2 − 2 Re(s_m) A_m + |s_m|^2 I will be real, Q will be its (real) orthogonal factor, A_{m+2} = Q^T A_m Q will be real, and the computation of the complex intermediate matrix A_{m+1} will be circumvented. The solution is therefore to use the shifted QR transformation with the provision that every shift with positive imaginary part is followed by its conjugate, the intermediate complex matrix being avoided as indicated above. Ignoring the storage of the auxiliary matrix G, it turns out that the formation of G, its decomposition, and the formation of A_{m+2} (one double step) require at least as much work as two real QR steps; note that, apart from the decided advantage of keeping the arithmetic real, there is no extra gain.
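The key algebraic point, that G is real for a conjugate pair of shifts and yields a real double step, can be illustrated as follows (ours, assuming NumPy; the matrix and the shift are illustrative):

```python
import numpy as np

A = np.array([[1.0, -2.0, 0.5],
              [3.0,  0.0, 1.0],
              [0.0,  1.0, 2.0]])   # real, with one real and two complex eigenvalues
s = 0.5 + 1.5j                     # a hypothetical complex shift; s_{m+1} = conj(s)

# G = (A - s I)(A - conj(s) I) = A^2 - 2 Re(s) A + |s|^2 I is real by construction.
G = A @ A - 2.0 * s.real * A + abs(s)**2 * np.eye(3)
Q, R = np.linalg.qr(G)             # real orthogonal factor of G
A2 = Q.T @ A @ Q                   # one real double step, similar to A

print(np.allclose(np.sort_complex(np.linalg.eigvals(A2)),
                  np.sort_complex(np.linalg.eigvals(A))))   # True: same spectrum
```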

3. Conclusion

We have seen that calculating the eigenvalues of a matrix is an important problem in mathematics and the sciences, but the naive approach of solving the characteristic polynomial is inefficient for matrices of large dimension. Rather, the elegant QR method provides an effective answer; it is one of the most widely used methods for finding all eigenvalues of a square matrix. Therefore, we set out to determine all eigenvalues of a real square matrix by using the QR factorization method. Suppose a square matrix A is given. Let A = Q0R0 be a QR factorization of A, and then create A1 = R0Q0. Let A1 = Q1R1 be a QR factorization of A1 and create A2 = R1Q1. Continue this process in the same fashion: once Am has been created, factor Am = QmRm and set Am+1 = RmQm. The sequence {Am} will usually converge to something from which the eigenvalues can be computed easily. Moreover, A1 is similar to A, Am+1 is similar to Am, and so on; hence Am and Am+1 have the same eigenvalues, by the definition of similar matrices. The process is stopped when the entries below the main diagonal of the current matrix Am are sufficiently small, or if it appears that convergence will not happen: the QR iteration can fail to converge, or the convergence can be extremely slow and hence expensive. If it converges to a certain matrix, then the diagonal entries of this current matrix tend to the eigenvalues of A; if not, the iteration can be modified to improve convergence and to accelerate the rate of convergence dramatically by using shifting methods.

