On a New Class of Regular Doubly Stochastic Processes

In this article, we show that the well-known Helmert matrix has a strong relationship with the stochastic matrices of modern probability theory. In fact, we show that certain stochastic matrices can be constructed from the Helmert matrix. Hence, we introduce a new class of regular, doubly stochastic matrices by means of the Helmert matrix and a special diagonal matrix defined in this article. Afterwards, we obtain the stationary distribution for this new class of stochastic matrices.


Introduction
In modern probability theory and dynamical systems, stochastic processes and Markov chains are applied in many advanced sciences. Two basic topics in stochastic processes are prediction and filtering. In the prediction problem for Markov chains, the $m$-step transition probabilities can be obtained from the 1-step transition probabilities. This is done with a matrix called the stochastic (or probability, transition, or Markov) matrix. In stochastic processes and Markov chains, stochastic matrices are used to represent the transition probabilities [9,11]. On the other hand, there is a special matrix in linear algebra called the Helmert matrix. The Helmert matrix of order $n$ is a square matrix that was introduced by H. O. Lancaster in 1965 [4]. Usually, the Helmert matrix is used in mathematical statistics for the analysis of variance (ANOVA); see [1,2,8]. In this article, we will show that the Helmert matrix can also be used in stochastic processes. In the next sections, the following notation will be used: (a) $I_n$ denotes the identity matrix of order $n$. (b) $J_n$ denotes the $n \times n$ matrix whose elements are all 1. (c) $\mathbb{R}$ denotes the real numbers.

Stochastic Matrix
Suppose that a stochastic process moves from state $i$ to state $j$. This transition is shown by $i \to j$, and $p_{ij}$ denotes its probability. Now, if the process consists of $N$ states and $X_t$ denotes the state at time $t$, then the transition $i \to j$ at time $t$ is indicated by $X_t = i$ and $X_{t+1} = j$. Furthermore, a process is called a Markov chain if the transition probability is independent of the time $t$ for all states $i$ and $j$ of the state space. Hence, under the Markov property, the transition probability is defined as
$$p_{ij} = P(X_{t+1} = j \mid X_t = i).$$
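As a small numerical illustration of how the 1-step transition matrix yields the $m$-step transition probabilities, consider the following sketch; the 3-state matrix is an arbitrary example of ours, not one from this article:

```python
import numpy as np

# Arbitrary 3-state transition matrix: row i holds the probabilities
# of moving from state i to each state, so every row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

# The m-step transition matrix is the m-th power of P
# (Chapman-Kolmogorov); entry (i, j) is P(X_{t+m} = j | X_t = i).
P5 = np.linalg.matrix_power(P, 5)

# Every power of a stochastic matrix is again stochastic.
assert np.allclose(P5.sum(axis=1), 1.0)
print(P5)
```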

Regular and Ergodic Markov Chains and Stationary Distribution
A nonnegative square matrix $P$ is called regular if $P^m > 0$ entrywise for some positive integer $m$ [6]. Since every transition matrix is nonnegative, we have a similar definition for Markov chains.

Definition 3. [7] A Markov chain is called a regular chain if some power of its transition matrix has only positive elements (in other words, if the transition matrix of the chain is regular).
Regularity is an important property of Markov chains, since it has a strong relationship with another important property, ergodicity. We know that a Markov chain is ergodic if it is possible to go from every state to every state (not necessarily in one move). By [3, Theorem 1.8], we know that if a Markov transition matrix $P$ is regular, then it has exactly one ergodic class, and the process as a whole is ergodic. Hence, the following proposition states the relationship between regularity and ergodicity:

Proposition 1. [3,6] Every regular chain is ergodic; hence, if a chain is regular, then its stationary distribution exists.
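A brute-force regularity check can be sketched as follows. This is a minimal sketch of ours; the Wielandt bound $(n-1)^2 + 1$ on the highest power worth testing is a standard fact about primitive nonnegative matrices, not something used in this article:

```python
import numpy as np

def is_regular(P, max_power=None):
    """Check whether the nonnegative square matrix P is regular,
    i.e. whether P**m is entrywise positive for some m >= 1."""
    n = P.shape[0]
    if max_power is None:
        # Wielandt's bound: a primitive matrix of order n satisfies
        # P**((n-1)**2 + 1) > 0, so it suffices to test up to that power.
        max_power = (n - 1) ** 2 + 1
    Q = np.eye(n)
    for _ in range(max_power):
        Q = Q @ P
        if np.all(Q > 0):
            return True
    return False

# A chain that deterministically swaps two states is periodic, hence
# not regular; a strictly positive matrix is regular at the first power.
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
lazy = np.array([[0.5, 0.5], [0.5, 0.5]])
print(is_regular(swap), is_regular(lazy))  # False True
```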

The Helmert Matrix
The Helmert matrix $H_n$ is a square matrix of order $n$ defined by
$$H_n = \begin{pmatrix}
\frac{1}{\sqrt{n}} & \frac{1}{\sqrt{n}} & \frac{1}{\sqrt{n}} & \cdots & \frac{1}{\sqrt{n}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 & \cdots & 0 \\
\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{6}} & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
\frac{1}{\sqrt{n(n-1)}} & \frac{1}{\sqrt{n(n-1)}} & \cdots & \frac{1}{\sqrt{n(n-1)}} & -\frac{n-1}{\sqrt{n(n-1)}}
\end{pmatrix};$$
that is, row $i$ (for $2 \le i \le n$) has $i-1$ entries equal to $\frac{1}{\sqrt{i(i-1)}}$, followed by $-\frac{i-1}{\sqrt{i(i-1)}}$, followed by zeros. Furthermore, we know that the Helmert matrix is orthogonal [1]:
$$H_n H_n^T = H_n^T H_n = I_n.$$
To prove the main theorems of the next section, we need the following proposition:

Proposition 3. Let $H_n$ be the Helmert matrix of order $n$. Then
$$H_n^T \operatorname{diag}(1, 0, \ldots, 0)\, H_n = \frac{1}{n} J_n.$$
Proof. Write $\operatorname{diag}(1, 0, \ldots, 0) = e_1 e_1^T$, where $e_1 = (1, 0, \ldots, 0)^T$. Since the first row of $H_n$ is $\frac{1}{\sqrt{n}}(1, 1, \ldots, 1)$, we have $H_n^T e_1 = \frac{1}{\sqrt{n}} \mathbf{1}$, where $\mathbf{1}$ denotes the all-ones column vector. Hence
$$H_n^T e_1 e_1^T H_n = \frac{1}{n} \mathbf{1}\mathbf{1}^T = \frac{1}{n} J_n.$$
The proposition is proved.
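The construction and the two facts above can be checked numerically. This is a sketch of ours in NumPy (`scipy.linalg.helmert(n, full=True)` provides a comparable construction):

```python
import numpy as np

def helmert(n):
    """Build the n x n Helmert matrix: first row constant 1/sqrt(n);
    1-based row i >= 2 has i-1 entries 1/sqrt(i(i-1)), then
    -(i-1)/sqrt(i(i-1)), then zeros."""
    H = np.zeros((n, n))
    H[0, :] = 1.0 / np.sqrt(n)
    for i in range(1, n):          # 0-based index i = (1-based row) - 1
        d = np.sqrt(i * (i + 1))
        H[i, :i] = 1.0 / d
        H[i, i] = -i / d
    return H

H = helmert(4)
# Orthogonality: H H^T = H^T H = I.
assert np.allclose(H @ H.T, np.eye(4))
assert np.allclose(H.T @ H, np.eye(4))
# Proposition 3: H^T diag(1, 0, ..., 0) H = (1/n) J_n.
E11 = np.diag([1.0, 0.0, 0.0, 0.0])
assert np.allclose(H.T @ E11 @ H, np.ones((4, 4)) / 4)
```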

A New Class of Regular Doubly Stochastic Matrices Using the Helmert Matrix
Let us consider the matrix $P = H_n^T D H_n$, where $H_n$ is the Helmert matrix of order $n$ and $D = \operatorname{diag}\!\left(1, \frac{\alpha}{\alpha + n\beta}, \ldots, \frac{\alpha}{\alpha + n\beta}\right)$. Clearly, the matrix $P$ is a diagonalizable matrix (recall from matrix theory that if a square matrix $A$ of order $n$ can be written as $A = Q^T \Lambda Q$, where $Q$ is an orthogonal matrix and $\Lambda$ is a diagonal matrix, then $A$ is called a diagonalizable matrix; see [12]). Now, we shall prove that $P$ is a doubly stochastic matrix. We need the following lemma.
Lemma 2. Let $H_n$ be the Helmert matrix of order $n$ and let $\alpha, \beta \ge 0$ be such that at least one of $\alpha$ and $\beta$ is positive. Then
$$H_n^T \operatorname{diag}\!\left(1, \frac{\alpha}{\alpha + n\beta}, \ldots, \frac{\alpha}{\alpha + n\beta}\right) H_n = \frac{1}{\alpha + n\beta}\left(\alpha I_n + \beta J_n\right),$$
that is, the matrix with diagonal entries $\frac{\alpha + \beta}{\alpha + n\beta}$ and off-diagonal entries $\frac{\beta}{\alpha + n\beta}$. Clearly, when $\beta > 0$, all elements of the above matrix are positive. Now, we shall prove that each row of this matrix sums to 1:
$$\frac{\alpha + \beta}{\alpha + n\beta} + (n-1)\,\frac{\beta}{\alpha + n\beta} = \frac{\alpha + n\beta}{\alpha + n\beta} = 1.$$
So both conditions of Definition 1 hold for the matrix $P = H_n^T D H_n$; hence this matrix is a stochastic matrix. On the other hand, the matrix $P$ is symmetric, because $P^T = (H_n^T D H_n)^T = H_n^T D H_n = P$. Thus, by Definition 2, it is a doubly stochastic matrix.

Corollary 1. Let $\{X_t : t \ge 0\}$ be the finite Markov chain governed by the stochastic matrix $P$ of Theorem 1. Then the transition probability for the transition $i \to j$ is equal to
$$p_{ij} = \begin{cases} \dfrac{\alpha + \beta}{\alpha + n\beta}, & i = j, \\[2mm] \dfrac{\beta}{\alpha + n\beta}, & i \ne j. \end{cases}$$
Proof. We know that $p_{ij}$ is the $(i, j)$-th element of the stochastic matrix $P$. Therefore the proof is immediate.
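A numerical sketch of this construction, with illustrative values $n = 4$, $\alpha = 2$, $\beta = 1$ chosen by us:

```python
import numpy as np

def helmert(n):
    """n x n Helmert matrix (first row 1/sqrt(n), orthogonal)."""
    H = np.zeros((n, n))
    H[0, :] = 1.0 / np.sqrt(n)
    for i in range(1, n):
        d = np.sqrt(i * (i + 1))
        H[i, :i] = 1.0 / d
        H[i, i] = -i / d
    return H

n, alpha, beta = 4, 2.0, 1.0
H = helmert(n)
lam = alpha / (alpha + n * beta)
D = np.diag([1.0] + [lam] * (n - 1))

# P = H^T D H should equal (alpha I + beta J) / (alpha + n beta).
P = H.T @ D @ H
expected = (alpha * np.eye(n) + beta * np.ones((n, n))) / (alpha + n * beta)
assert np.allclose(P, expected)

# Doubly stochastic: rows and columns sum to 1; symmetric; positive.
assert np.allclose(P.sum(axis=0), 1.0) and np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(P, P.T) and (P > 0).all()
```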
Suppose that a square $n \times n$ doubly stochastic matrix $Q$ has the following form: $q_{ii} = a$ for every $i$, and $q_{ij} = b$ for $i \ne j$. Then the matrix $Q$ belongs to the new class if $a \ge b$: it suffices to take $\alpha = a - b$ and $\beta = b$, for then, since each row of $Q$ sums to 1, we have $\alpha + n\beta = a + (n-1)b = 1$. Conversely, if $a < b$, then $\alpha = a - b < 0$, and this contradicts the condition $\alpha \ge 0$. When a matrix is diagonalizable, we can obtain its positive integer powers from its diagonal form. On the other hand, by the Chapman-Kolmogorov equation [9,10,11], we know that the $m$-step transition matrix is equal to the $m$-th power of the 1-step transition matrix. Hence, for the stochastic matrix $Q$ in Corollary 2, we have
$$Q^m = (H_n^T D H_n)^m = H_n^T D^m H_n.$$
In the next part, we will use this identity to compute the stationary distribution of the new class of stochastic matrices.
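The diagonalized-power identity $Q^m = H_n^T D^m H_n$ can be checked against direct matrix powering; a sketch of ours with illustrative values $n = 4$, $\alpha = 2$, $\beta = 1$:

```python
import numpy as np

def helmert(n):
    """n x n Helmert matrix (first row 1/sqrt(n), orthogonal)."""
    H = np.zeros((n, n))
    H[0, :] = 1.0 / np.sqrt(n)
    for i in range(1, n):
        d = np.sqrt(i * (i + 1))
        H[i, :i] = 1.0 / d
        H[i, i] = -i / d
    return H

n, alpha, beta = 4, 2.0, 1.0
H = helmert(n)
lam = alpha / (alpha + n * beta)
D = np.diag([1.0] + [lam] * (n - 1))
Q = H.T @ D @ H

for m in (2, 5, 10):
    # Only the diagonal of D needs to be raised to the power m.
    via_diag = H.T @ np.diag(np.diag(D) ** m) @ H
    assert np.allclose(via_diag, np.linalg.matrix_power(Q, m))
```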

Stationary Distribution for the New Class of Regular Stochastic Matrices
We know that the Markov chain governed by the stochastic matrix of Theorem 1 is a regular and ergodic chain, since all elements of this matrix are positive, and hence it is possible to go from every state to every state. Now, in the next theorem, we compute the stationary distribution of this chain.

Proof. By Theorem 1, we know that all elements of the stochastic matrix $P$ are positive, so the chain is regular. Besides, by Proposition 1, the chain must be ergodic. By Proposition 2, we know that if the chain is ergodic and $\lim_{m \to \infty} p_{ij}^{(m)}$ exists, then $\pi_j = \lim_{m \to \infty} p_{ij}^{(m)}$. So we have
$$\lim_{m \to \infty} P^m = \lim_{m \to \infty} H_n^T D^m H_n = H_n^T \left(\lim_{m \to \infty} D^m\right) H_n.$$
We know that $\lim_{m \to \infty} \left(\frac{\alpha}{\alpha + n\beta}\right)^m = 0$, so $\lim_{m \to \infty} D^m = \operatorname{diag}(1, 0, \ldots, 0)$, and by Proposition 3 we have
$$H_n^T \operatorname{diag}(1, 0, \ldots, 0)\, H_n = \frac{1}{n} J_n.$$
Therefore $\pi = \left(\frac{1}{n}, \ldots, \frac{1}{n}\right)$.
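Numerically, $P^m$ converges to $\frac{1}{n} J_n$, so every row approaches the uniform stationary distribution; a sketch of ours with illustrative values $n = 4$, $\alpha = 2$, $\beta = 1$:

```python
import numpy as np

def helmert(n):
    """n x n Helmert matrix (first row 1/sqrt(n), orthogonal)."""
    H = np.zeros((n, n))
    H[0, :] = 1.0 / np.sqrt(n)
    for i in range(1, n):
        d = np.sqrt(i * (i + 1))
        H[i, :i] = 1.0 / d
        H[i, i] = -i / d
    return H

n, alpha, beta = 4, 2.0, 1.0
H = helmert(n)
lam = alpha / (alpha + n * beta)
P = H.T @ np.diag([1.0] + [lam] * (n - 1)) @ H

# D^m -> diag(1, 0, ..., 0), so P^m -> H^T diag(1,0,...,0) H = J/n.
limit = np.linalg.matrix_power(P, 50)
assert np.allclose(limit, np.ones((n, n)) / n)

# The uniform vector is stationary: pi P = pi.
pi = np.full(n, 1.0 / n)
assert np.allclose(pi @ P, pi)
```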

Conclusion
One can work on many topics of probability theory, but work on stochastic processes, and in particular the offering of new Markov chains, is rarer than in other subjects. In this article we presented a new class of stochastic matrices. Since for every integer $n$ and all positive real numbers $\alpha$ and $\beta$ we can construct a stochastic matrix of order $n$ of the form $P = \frac{1}{\alpha + n\beta}(\alpha I_n + \beta J_n)$, the new class is very large. Also, we showed that $P = H_n^T D H_n$, where $H_n$ is the well-known Helmert matrix and $D$ is a diagonal matrix of order $n$. So the stochastic matrix $P$ is a diagonalizable matrix. In matrix theory, the diagonalized form of a square matrix is very important, because computing the determinant and the integer powers of a matrix from this form is simpler than with other usual methods. On the other hand, the integer powers of a stochastic matrix are important for predicting its behavior. In addition, we showed that the Markov chain governed by the stochastic matrix $P$ is regular and ergodic. Regularity and ergodicity are two important properties of stochastic processes, because the stationary distribution of any regular or ergodic Markov chain is equal to the limit of the $m$-th power of its transition matrix as $m \to \infty$. Furthermore, we proved that the stochastic matrix $P$ is a doubly stochastic matrix.

Appendix: Doubly Stochastic Matrices Whose Inverses Are Doubly Stochastic Matrices
We know that $I_n$ is a doubly stochastic matrix, since each row and column of $I_n$ sums to 1. Besides, $I_n^{-1} = I_n$, which means that there is at least one doubly stochastic matrix of order $n$ whose inverse is a doubly stochastic matrix. In 2015, R. Farhadian (the first author), in a Farsi-language article titled "Approximation for stationary distribution of ergodic stochastic processes," published in "NEDA: Student Statistical Journal," showed that there exist doubly stochastic matrices other than the identity matrix whose inverses are doubly stochastic matrices. Consider the following matrices of order 2 and 3: $B_2 = H_2^T D_2 H_2$ and $B_3 = H_3^T D_3 H_3$, where $H_2$ and $H_3$ are the orthogonal Helmert matrices of order 2 and order 3, respectively, and $D_2$, $D_3$ are diagonal matrices. We know that for a diagonalizable square matrix $A = Q^T \Lambda Q$ with $Q$ orthogonal, the inverse of $A$ is equal to $A^{-1} = Q^T \Lambda^{-1} Q$. Thus, for the inverses of $B_2$ and $B_3$, we have $B_2^{-1} = H_2^T D_2^{-1} H_2$ and $B_3^{-1} = H_3^T D_3^{-1} H_3$.
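The displayed matrices are not recoverable from our copy of the text, but one concrete choice that fits the description (our assumption, not necessarily the original $D_2$ and $D_3$) is $D_2 = \operatorname{diag}(1, -1)$ and $D_3 = \operatorname{diag}(1, -1, 1)$, which yield permutation matrices whose inverses are again doubly stochastic:

```python
import numpy as np

def helmert(n):
    """n x n Helmert matrix (first row 1/sqrt(n), orthogonal)."""
    H = np.zeros((n, n))
    H[0, :] = 1.0 / np.sqrt(n)
    for i in range(1, n):
        d = np.sqrt(i * (i + 1))
        H[i, :i] = 1.0 / d
        H[i, i] = -i / d
    return H

def is_doubly_stochastic(A, tol=1e-12):
    return ((A >= -tol).all()
            and np.allclose(A.sum(axis=0), 1.0)
            and np.allclose(A.sum(axis=1), 1.0))

# Assumed diagonal matrices; the article's originals may differ.
H2, H3 = helmert(2), helmert(3)
B2 = H2.T @ np.diag([1.0, -1.0]) @ H2
B3 = H3.T @ np.diag([1.0, -1.0, 1.0]) @ H3

# B2 is the swap permutation [[0, 1], [1, 0]]; B3 swaps states 1 and 2
# and fixes state 3. Both are doubly stochastic with doubly
# stochastic inverses (each is its own inverse).
for B in (B2, B3):
    B_inv = np.linalg.inv(B)
    assert is_doubly_stochastic(B) and is_doubly_stochastic(B_inv)
```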