Applications of the Moore-Penrose Generalized Inverse to Linear Systems of Algebraic Equations

In this work, we consider linear systems of algebraic equations. These systems are studied using the theory of the Moore-Penrose generalized inverse (MPGI) of matrices. Some important algorithms and theorems for computing the MPGI of a matrix are given. The singular value decomposition (SVD) of a matrix plays a very important role in computing the MPGI, and hence is useful for studying the solutions of over- and under-determined linear systems. We use the MPGI of matrices to solve linear systems of algebraic equations when the coefficient matrix is singular or rectangular. The relationship between the MPGI and the minimal least squares solution of a linear system is expressed by a theorem. The solution of a linear system obtained using the MPGI is often an approximate unique solution, but in some cases we can obtain an exact unique solution. We treat the linear algebraic system as an algebraic equation with a coefficient matrix A (square or rectangular) with complex entries. A closed form for the solution of a linear system of algebraic equations is given when the coefficient matrix is of full rank or not of full rank, a singular square matrix or a non-square matrix. The results are taken from the works mentioned in the references. A few examples, including linear systems whose coefficient matrix is of full rank and not of full rank, are provided to illustrate our study.


Introduction
The system of equations

Ax = b;  A ∈ ℂ^{m×n}, x ∈ ℂ^n, b ∈ ℂ^m  (1)

has been exploited in several contexts. It is consistent, i.e., has a solution x, if and only if b is in the range of A. If b is not in the range of A, then Ax − b is nonzero for all x ∈ ℂ^n, so we look for an approximate solution of (1), i.e., we want to find a vector x making Ax − b closest to zero. If the coefficient matrix A is square, that is, the number of equations equals the number of unknowns, then either A is of full rank or it is not. If A is square and of full rank, then A is invertible, so the system Ax = b has the unique solution x = A⁻¹b.
If A is an arbitrary matrix in ℂ^{m×n}, then b is a vector with m components and x is a vector with n components. If m is greater than n, that is, there are more equations than unknowns, then (1) is called an over-determined system and in general has no solution. Conversely, if n is greater than m, that is, there are more unknowns than equations, then (1) is called an under-determined system and has an infinite number of solutions. Even when m > n or n > m, the linear system (1) still has a natural unique solution, called the "least squares solution" [1][2][3][4][5]. The concept of the Moore-Penrose generalized inverse of matrices has been explained in many references [1,[6][7][8][9][10][11][12]. We will use the MPGI to solve linear systems of algebraic equations Ax = b with coefficient matrix A [13][14][15][16]. The solution obtained using the MPGI is the minimal least squares solution. When b belongs to the range of A (b ∈ R(A)), the notions of solution using the MPGI and least squares solution coincide.
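As a minimal numerical sketch of these two situations, assuming NumPy's `numpy.linalg.pinv` (which computes the MPGI) is available:

```python
import numpy as np

# Over-determined system (m > n): more equations than unknowns;
# the MPGI gives the least squares solution.
A_over = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # 3 x 2
b_over = np.array([1.0, 2.0, 4.0])
x_over = np.linalg.pinv(A_over) @ b_over

# Under-determined system (n > m): infinitely many solutions;
# the MPGI picks the one of minimal Euclidean norm.
A_under = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])    # 2 x 3
b_under = np.array([1.0, 2.0])
x_under = np.linalg.pinv(A_under) @ b_under

print(x_over, x_under)
```

The matrices here are illustrative examples, not taken from the paper.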
In this paper, A* denotes the conjugate transpose of A. If A is a real matrix, then A* reduces to the transpose Aᵀ. The trace of A is denoted by tr(A), the zero matrix is denoted by O, and I always denotes an identity matrix.

Significance of the Study
The concept of the Moore-Penrose generalized inverse of a matrix has been exploited recently in many contexts. It provides solutions for many singular problems in linear algebra, especially for singular linear algebraic systems. The contributions of this study are: 1) Explaining the relationship between the MPGI and the least squares solutions. 2) Highlighting the importance of using the MPGI to solve linear systems of algebraic equations. 3) Giving a clue for a deeper study of the MPGI in order to obtain exact solutions instead of approximate solutions. 4) Inviting other interested researchers to find other forms of the MPGI and to use them to solve linear algebraic systems, and hence to obtain unique exact solutions instead of approximate solutions.

The Moore-Penrose Generalized Inverse
In this section, we introduce some important definitions, algorithms, and theorems.
Definition 3.1: If A ∈ ℂ^{m×n}, then the matrix A⁺ ∈ ℂ^{n×m} is unique, and it is called the Moore-Penrose generalized inverse of A, if it satisfies the following four conditions (the Penrose conditions): 1) AA⁺A = A; 2) A⁺AA⁺ = A⁺; 3) (AA⁺)* = AA⁺; 4) (A⁺A)* = A⁺A. The next theorem is useful to compute A⁺ when A has a full rank factorization.
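The four Penrose conditions can be checked numerically; a small sketch, assuming NumPy's `numpy.linalg.pinv` as the candidate MPGI:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # a 2 x 3 rectangular matrix
A_plus = np.linalg.pinv(A)             # candidate MPGI, shape 3 x 2

# The four Penrose conditions:
c1 = np.allclose(A @ A_plus @ A, A)                  # A A+ A = A
c2 = np.allclose(A_plus @ A @ A_plus, A_plus)        # A+ A A+ = A+
c3 = np.allclose((A @ A_plus).conj().T, A @ A_plus)  # A A+ hermitian
c4 = np.allclose((A_plus @ A).conj().T, A_plus @ A)  # A+ A hermitian
print(c1, c2, c3, c4)
```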

Definition 3.2:
A matrix E_A ∈ ℂ^{m×n} which has rank r is said to be in row echelon form if E_A is of the form E_A = [G; O], where O is a zero block and the elements g_{ij} of G ∈ ℂ^{r×n} satisfy the following conditions: 1) The first non-zero entry of each row of G lies to the right of the first non-zero entry of the preceding row. 2) The first non-zero entry in each row of G is 1.
3) If g_{ij} = 1 is the first non-zero entry of the i-th row, then the j-th column of G is the unit vector e_i whose only non-zero entry is in the i-th position. Definition 3.3: A square matrix H ∈ ℂ^{n×n} is said to be in hermite echelon form if its elements h_{ij} satisfy the following conditions: 1) H is upper triangular; 2) each diagonal entry h_{ii} is either 0 or 1; 3) if h_{ii} = 0, then the i-th row of H is a zero row; 4) if h_{ii} = 1, then the i-th column of H is the unit vector e_i. Note that any A ∈ ℂ^{n×n} can always be row reduced to a hermite form H_A by using elementary row operations on A.
Algorithm 3.1: To obtain the MPGI of a square matrix A: 1) Row reduce A*A to its hermite form H_{A*A} (or its hermite echelon form H_{A*A}) and compute the associated matrix C₁. 6) Place the first r rows of C₁ (in the same order as they appear in C₁) in a matrix B. 7) Compute A⁺ from B. Note that it is easy to use this algorithm for non-square matrices: add zero rows or zero columns to construct a square matrix, and then find the MPGI of the resulting padded matrix [A ⋮ O], where appending zero columns to A corresponds to appending zero rows to A⁺ (and vice versa).
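The padding remark can be verified numerically; a small sketch assuming NumPy, showing that appending a zero column to A appends a zero row to A⁺:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                  # 3 x 2, not square

# Pad with a zero column to obtain a 3 x 3 square matrix [A | 0].
A_padded = np.hstack([A, np.zeros((3, 1))])

# The MPGI of the padded matrix is A+ with a zero row appended.
expected = np.vstack([np.linalg.pinv(A), np.zeros((1, 3))])
assert np.allclose(np.linalg.pinv(A_padded), expected)
print(expected)
```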

Algorithm 3.2:
To obtain the full rank factorization and the MPGI for A ∈ ℂ^{m×n} of rank r: 1) Reduce A to its row echelon form E_A.
2) Select the distinguished columns of A (the columns of A which correspond to the unit-vector columns e₁, e₂, …, e_r in E_A) and place them as the columns of a matrix F, in the same order as they appear in A. 3) Place the r non-zero rows of E_A, in the same order, as the rows of a matrix G. Then A = FG is a full rank factorization of A, and A⁺ = G*(GG*)⁻¹(F*F)⁻¹F*.
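The full rank factorization route can be checked numerically. In the sketch below (NumPy assumed), the factorization A = FG is written down by hand rather than obtained from a row echelon reduction, and the formula A⁺ = G*(GG*)⁻¹(F*F)⁻¹F* is compared against `numpy.linalg.pinv`:

```python
import numpy as np

# A full rank factorization A = F G, chosen by hand:
# F has full column rank, G has full row rank (rank r = 2).
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])               # 3 x 2
G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])          # 2 x 3
A = F @ G                                # 3 x 3 matrix of rank 2

# MPGI via the full rank factorization formula.
A_plus = G.T @ np.linalg.inv(G @ G.T) @ np.linalg.inv(F.T @ F) @ F.T

assert np.allclose(A_plus, np.linalg.pinv(A))
print(A_plus)
```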

Least Squares Solutions
The name "least squares" comes from the quantity being minimized: a least squares solution minimizes the sum of the squares of the components of the residual Ax̂ − b, i.e., it minimizes ‖Ax̂ − b‖. We consider the problem of finding a solution x̂ to (1). If (1) is inconsistent, we have to find an x̂ that makes ‖Ax̂ − b‖ as small as possible.
Next, we have an important theorem. It relates the minimal least squares solution of (1) to the MPGI of A, and it also answers our question: what kind of answer is A⁺b? Theorem 4.1: Suppose that A ∈ ℂ^{m×n} and b ∈ ℂ^m. Then x̂ = A⁺b is the minimal least squares solution to Ax = b. Note that the minimal least squares solution is also called the approximate solution to (1) [3]. Conversely, if X ∈ ℂ^{n×m} is such that, for every b, x = Xb is the minimal least squares solution of Ax = b, then X = A⁺.
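A numerical illustration of Theorem 4.1, assuming NumPy: for a rank-deficient system, A⁺b both minimizes the residual and has the smallest norm among all minimizers (the matrix below is an illustrative example, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-deficient 4 x 3 matrix (third column = sum of the first two).
B = rng.standard_normal((4, 2))
A = np.hstack([B, B.sum(axis=1, keepdims=True)])
b = rng.standard_normal(4)

x_hat = np.linalg.pinv(A) @ b            # minimal least squares solution

# Shifting by a null-space vector keeps the residual unchanged
# but strictly increases the norm of the solution.
null_vec = np.array([1.0, 1.0, -1.0])    # A @ null_vec = 0 by construction
assert np.allclose(A @ null_vec, 0)
other = x_hat + null_vec
assert np.allclose(np.linalg.norm(A @ x_hat - b),
                   np.linalg.norm(A @ other - b))
assert np.linalg.norm(x_hat) < np.linalg.norm(other)
print(x_hat)
```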

Solutions of Linear Algebraic System
This section deals with linear algebraic systems and their solution techniques. The solution of such a system is the unknown vector x ∈ ℂ^n which satisfies it, but in some cases we cannot find such an x, so we search for the x̂ ∈ ℂ^n that makes Ax̂ closest to b. In other words, we search for the x̂ ∈ ℂ^n that makes ‖b − Ax̂‖ as small as possible [2].

Linear Systems with a Coefficient Matrix of Full Rank
In this kind of linear system, we study three cases: 1) A ∈ ℂ^{m×n} is a square matrix of full rank [17], that is, rank(A) = m = n. In this case, the unique solution of Ax = b is x = A⁻¹b, and A⁺ = A⁻¹. 2) A ∈ ℂ^{m×n} is a non-square (rectangular) matrix with rank(A) = n < m; this means A is of full column rank, so it has a left inverse A_L = (A*A)⁻¹A* satisfying A_L A = I, A A_L ≠ I. In this case there is a unique solution to (1), the least squares solution, which is of the form x̂ = (A*A)⁻¹A*b = A⁺b. 3) A ∈ ℂ^{m×n} is also a non-square matrix but rank(A) = m < n; this means A is of full row rank, and it has a right inverse A_R = A*(AA*)⁻¹ satisfying A A_R = I, A_R A ≠ I. In this case the unique (minimal norm) solution to (1) is the least squares solution x̂ = A*(AA*)⁻¹b = A⁺b.
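A numerical sketch of cases 2) and 3), assuming NumPy: for full column rank the left inverse (A*A)⁻¹A* coincides with A⁺, and for full row rank the right inverse A*(AA*)⁻¹ does:

```python
import numpy as np

# Case 2: full column rank (m > n); left inverse equals the MPGI.
A_tall = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
left_inv = np.linalg.inv(A_tall.T @ A_tall) @ A_tall.T
assert np.allclose(left_inv @ A_tall, np.eye(2))        # A_L A = I
assert np.allclose(left_inv, np.linalg.pinv(A_tall))

# Case 3: full row rank (m < n); right inverse equals the MPGI.
A_wide = A_tall.T
right_inv = A_wide.T @ np.linalg.inv(A_wide @ A_wide.T)
assert np.allclose(A_wide @ right_inv, np.eye(2))       # A A_R = I
assert np.allclose(right_inv, np.linalg.pinv(A_wide))
print("left and right inverses match the MPGI")
```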

Linear Systems with a Coefficient Matrix Not of Full Rank
When the coefficient matrix A is not of full rank (square or non-square), we cannot use the previous forms (in Subsection 5.1) of A⁺. There are other forms for A⁺, but the solution is still x̂ = A⁺b. These forms are: A. The first form: We use this form when A has a singular value decomposition [5,20], that is, A = UΣV*, where U ∈ ℂ^{m×m} and V ∈ ℂ^{n×n} are unitary matrices [18] and Σ ∈ ℂ^{m×n} is a diagonal matrix of the singular values of A; then A⁺ = VΣ⁺U*, where Σ⁺ is obtained by inverting the non-zero singular values and transposing. Note that this method is slow.
B. The second form: When we solve the corresponding system, we obtain multiple solutions depending on one or more arbitrary parameters; if we choose those parameters equal to zero, we obtain an exact solution. In other words, x̂ satisfies ‖Ax̂ − b‖ = 0, i.e., Ax̂ = b [13]. Note that the solution we obtain using the third form is an exact solution.
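One standard way such a parametric family arises (an assumption on my part, since the exact form is not spelled out here) is the general solution x = A⁺b + (I − A⁺A)w of a consistent system, where w is an arbitrary parameter vector; w = 0 gives the minimal norm solution. A NumPy sketch:

```python
import numpy as np

# A consistent, rank-deficient system: b lies in the range of A.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank 1
b = A @ np.array([1.0, 1.0, 1.0])        # consistent by construction

A_plus = np.linalg.pinv(A)
P_null = np.eye(3) - A_plus @ A          # projector onto the null space of A

# Every choice of the parameter vector w gives an exact solution.
for w in (np.zeros(3), np.array([1.0, -2.0, 0.5])):
    x = A_plus @ b + P_null @ w
    assert np.allclose(A @ x, b)
print(A_plus @ b)                        # the w = 0 (minimal norm) choice
```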

Numerical Examples
In this section, we give some examples to illustrate our work.

Example 6.1: Consider the system

x₁ + 3x₂ = 17
5x₁ + 7x₂ = 19
11x₁ + 13x₂ = 23.

We can write this system as in (1), with

A = [1 3; 5 7; 11 13],  b = (17, 19, 23)ᵀ,

and the solution is x̂ = A⁺b. Note that such a solution is an approximate solution; it is the least squares solution.

Example 6.2: Note that A = UΣV*, hence A⁺ = VΣ⁺U*, and the solution to the given system is x̂ = VΣ⁺U*b. Note that the solution using the SVD is an exact solution.

Example 6.3: In this example we show how to find the solution of an algebraic system using the third form and obtain an exact solution. Let us consider the following system:
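Example 6.1 can be reproduced numerically (NumPy assumed); both the MPGI route and the SVD route give the same least squares solution:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [5.0, 7.0],
              [11.0, 13.0]])
b = np.array([17.0, 19.0, 23.0])

# Least squares solution via the MPGI.
x_hat = np.linalg.pinv(A) @ b

# The same solution via the SVD form A+ = V Sigma+ U*
# (A has full column rank, so all singular values are non-zero).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ np.diag(1.0 / s) @ U.T @ b

assert np.allclose(x_hat, x_svd)
print(x_hat)
```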

Conclusion
Our study shows the importance of using the MPGI of matrices for solving the linear system of algebraic equations Ax = b. The relationship between the MPGI and the least squares solutions was given. Our study also explains how to solve a linear algebraic system using the MPGI, in other words, which form of the MPGI to choose, and we presented some forms for the MPGI of a matrix A when A is of full rank and when it is not of full rank. We explained how to use the third form to obtain an exact solution; in the other cases, an approximate (least squares) solution is obtained.