Minimax Estimation of the Parameter of ЭРланга Distribution Under Different Loss Functions
Lanping Li
Department of Basic Subjects, Hunan University of Finance and Economics, Changsha, China
To cite this article:
Lanping Li. Minimax Estimation of the Parameter of ЭРланга Distribution Under Different Loss Functions. Science Journal of Applied Mathematics and Statistics. Vol. 4, No. 5, 2016, pp. 229-235. doi: 10.11648/j.sjams.20160405.16
Received: August 31, 2016; Accepted: September 12, 2016; Published: October 8, 2016
Abstract: The aim of this article is to study the estimation of the parameter of the Эрланга distribution based on complete samples. The Bayes estimators of the parameter are obtained under three different loss functions, namely the weighted squared error loss, squared log error loss and entropy loss functions, using a quasi-prior under which the posterior is an inverse Gamma distribution. The minimax estimators of the parameter are then derived by using Lehmann's theorem. Finally, the performances of these estimators are compared in terms of their risks under the squared error loss function.
Keywords: Bayes Estimator, Minimax Estimator, Squared Log Error Loss Function, Entropy Loss Function
1. Introduction
In the field of reliability and supportability data analysis, the most commonly used distributions are the exponential, normal and Weibull distributions, among others. In some practical applications, however, such as repair times and warranty delay times, these distributions do not fit well. For such cases the Эрланга distribution was proposed as a suitable alternative [1].
Suppose that the repair time $X$ obeys the Эрланга distribution with the following probability density function (pdf) and distribution function, respectively:

$f(x;\theta)=\frac{4x}{\theta^{2}}e^{-2x/\theta},\quad x>0,\ \theta>0$  (1)

$F(x;\theta)=1-\left(1+\frac{2x}{\theta}\right)e^{-2x/\theta},\quad x>0$  (2)

Here $\theta$ is the unknown parameter. It is easy to see that $E(X)=\theta$, and the parameter $\theta$ is therefore also often referred to as the mean time to repair the equipment.
Lv et al. [1] studied the characteristic parameters of the Эрланга distribution, such as the mean, variance and median, and also derived the maximum likelihood estimator. Pan et al. [2] studied the interval estimation and hypothesis testing of the Эрланга distribution based on small samples, and also discussed the difference between the exponential distribution and the Эрланга distribution. Long [3] studied the estimation of the parameter of the Эрланга distribution based on missing data. Yu et al. [4] used the Эрланга distribution to fit the degree of battlefield injury and established a simulation model, then proposed a new method for the problem of the generation and distribution of battlefield injuries at the campaign level. Long [5] studied the Bayes estimation of the Эрланга distribution under type-II censored samples on the basis of conjugate, Jeffreys and non-informative prior distributions.
Minimax estimation was introduced by Abraham Wald in 1950, and the minimax approach has since received considerable attention and been applied in many areas [6-9]; it is one of the most important topics in the field of statistical inference. Under quadratic and MLINEX loss functions, references [10-13] studied the minimax estimation of the Weibull, Pareto, Rayleigh and Minimax distributions, respectively. Rasheed and Al-Shareefi [14] discussed the minimax estimation of the scale parameter of the Laplace distribution under the squared log error loss function. Li [15] studied the minimax estimation of the parameter of the exponential distribution based on record values. Li [16] obtained the minimax estimators of the parameter of the Maxwell distribution under different loss functions.
The purpose of this paper is to study the maximum likelihood estimation (MLE) and Bayes estimation of the parameter of the Эрланга distribution. Further, by using Lehmann's theorem, we derive the minimax estimators under three loss functions, namely the weighted squared error loss, squared log error loss and entropy loss functions.
2. Maximum Likelihood Estimation
Let $X_{1},X_{2},\ldots,X_{n}$ be a sample drawn from the Эрланга distribution with pdf (1), and let $x_{1},x_{2},\ldots,x_{n}$ be its observations. For the given sample observations, the likelihood function of the parameter $\theta$ is

$L(\theta)=\prod_{i=1}^{n}f(x_{i};\theta)=\prod_{i=1}^{n}\frac{4x_{i}}{\theta^{2}}e^{-2x_{i}/\theta}$.  (3)

That is,

$L(\theta)=4^{n}\left(\prod_{i=1}^{n}x_{i}\right)\theta^{-2n}e^{-2t/\theta}$.  (4)

Here $t=\sum_{i=1}^{n}x_{i}$ is the observation of $T=\sum_{i=1}^{n}X_{i}$.
Then the log-likelihood function is

$\ln L(\theta)=n\ln 4+\sum_{i=1}^{n}\ln x_{i}-2n\ln\theta-\frac{2t}{\theta}$.

By solving the log-likelihood equation

$\frac{\mathrm{d}\ln L(\theta)}{\mathrm{d}\theta}=-\frac{2n}{\theta}+\frac{2t}{\theta^{2}}=0$,

the maximum likelihood estimator of $\theta$ is easily derived as

$\hat{\theta}_{MLE}=\frac{T}{n}=\bar{X}$.  (5)

By Eq. (1), each $X_{i}$ follows the Gamma distribution $\Gamma(2,\theta/2)$, so the statistic $T=\sum_{i=1}^{n}X_{i}$ is a random variable following the Gamma distribution $\Gamma(2n,\theta/2)$, with probability density function

$g(t)=\frac{(2/\theta)^{2n}}{\Gamma(2n)}t^{2n-1}e^{-2t/\theta},\quad t>0$.  (6)
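As a quick illustration of this section (a simulation sketch of our own, not part of the original paper), the Эрланга distribution with mean $\theta$ is the Gamma distribution with shape 2 and scale $\theta/2$, so both the MLE (5) and the sampling distribution (6) of $T$ can be checked numerically:

```python
# Simulation sketch (assumption: Эрланга(theta) = Gamma(shape=2, scale=theta/2)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 30, 5000

samples = rng.gamma(shape=2.0, scale=theta / 2.0, size=(reps, n))
mle = samples.mean(axis=1)                 # Eq. (5): the sample mean
print("average MLE:", mle.mean())          # close to theta = 2.0

# T = sum(X_i) should follow Gamma(2n, theta/2), as stated in Eq. (6)
T = samples.sum(axis=1)
print(stats.kstest(T, stats.gamma(a=2 * n, scale=theta / 2.0).cdf))
```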
3. Bayesian Estimation
In Bayesian statistical analysis, the loss function plays an important role in Bayes estimation and Bayes testing problems. Many loss functions have been proposed in Bayesian analysis, and the squared error loss function is the most common one; it is a symmetric loss function. In many practical problems, however, especially in the estimation of reliability and failure rates, a symmetric loss may not be suitable, because overestimation is thought to bring greater loss than underestimation [17]. Several asymmetric loss functions have therefore been developed. For example, Zellner [18] proposed the LINEX loss for Bayes estimation, Brown [19] put forward the squared log error loss function for estimating an unknown parameter, and Dey et al. [20] proposed the entropy loss function in Bayesian analysis.
In this paper, we discuss the Bayes estimation of the unknown parameter $\theta$ of the Эрланга distribution under the following loss functions:
(i) Weighted squared error loss function

$L_{1}(\hat{\theta},\theta)=\frac{(\hat{\theta}-\theta)^{2}}{\theta^{2}}$  (7)

Under the weighted squared error loss function (7), the Bayes estimator of $\theta$ is

$\hat{\theta}_{BW}=\frac{E(\theta^{-1}\mid x)}{E(\theta^{-2}\mid x)}$.  (8)
(ii) Squared log error loss function

The squared log error loss function is an asymmetric loss function, first proposed by Brown [19] for estimating a scale parameter. It can also be found in Kiapour and Nematollahi [21], with the following form:

$L_{2}(\hat{\theta},\theta)=(\ln\hat{\theta}-\ln\theta)^{2}=\left(\ln\frac{\hat{\theta}}{\theta}\right)^{2}$  (9)

Obviously, $L_{2}\to\infty$ as $\hat{\theta}\to 0$ or $\hat{\theta}\to\infty$. The loss function (9) is not always convex: it is convex for $\hat{\theta}/\theta\le e$ and concave otherwise. Nevertheless, its posterior risk has a minimum, attained at what we call the Bayes estimator $\hat{\theta}_{BS}$ under squared log error loss. That is,

$\hat{\theta}_{BS}=\exp\{E(\ln\theta\mid x)\}$.  (10)
(iii) Entropy loss function

In many practical situations, it appears more realistic to express the loss in terms of the ratio $\hat{\theta}/\theta$. For this case, Dey et al. [20] proposed a useful asymmetric loss function, the entropy loss function:

$L_{3}(\hat{\theta},\theta)=\frac{\hat{\theta}}{\theta}-\ln\frac{\hat{\theta}}{\theta}-1$,  (11)

whose minimum occurs at $\hat{\theta}=\theta$. This loss function has also been used by Singh et al. [22] and Nematollahi and Motamed-Shariati [23]. The Bayes estimator under the entropy loss (11), denoted by $\hat{\theta}_{BE}$, is obtained as

$\hat{\theta}_{BE}=\left[E(\theta^{-1}\mid x)\right]^{-1}$.  (12)
In this section, we estimate the unknown parameter $\theta$ on the basis of the three loss functions above. We further assume that some prior knowledge about the parameter $\theta$ is available to the investigator from past experience with the Эрланга model. This prior knowledge can often be summarized in terms of a so-called prior density on the parameter space of $\theta$. In the following discussion, we assume the Jeffreys non-informative quasi-prior density defined as

$\pi(\theta)\propto\frac{1}{\theta^{d}},\quad\theta>0,\ d\ge 0$.  (13)

Hence, $d=0$ leads to a diffuse prior and $d=1$ to a non-informative prior.
Let $X_{1},\ldots,X_{n}$ be a sample drawn from the Эрланга distribution with pdf (1), with observations $x_{1},\ldots,x_{n}$. Combining the likelihood function (3) with the prior density (13), the posterior probability density of $\theta$ can be derived by Bayes' theorem as

$\pi(\theta\mid x)=\frac{(2t)^{2n+d-1}}{\Gamma(2n+d-1)}\,\theta^{-(2n+d)}e^{-2t/\theta},\quad\theta>0$,  (14)

which is the inverse Gamma distribution $I\Gamma(2n+d-1,\,2t)$.
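The conjugacy in (14) is easy to verify numerically. The following sketch (our own check, with a hypothetical value of $t$ chosen only for illustration) integrates likelihood times prior directly and compares the posterior mean of $\theta^{-1}$ with the inverse-Gamma value $(2n+d-1)/(2t)$:

```python
# Numerical check of the posterior (14); t = sum of observations is a
# hypothetical value used only for illustration.
import numpy as np
from scipy import integrate

n, d, t = 10, 1.0, 18.7

def unnorm_post(u):
    # likelihood (4) times quasi-prior (13), computed on the log scale
    # to avoid overflow near u = 0
    return np.exp(-(2 * n + d) * np.log(u) - 2 * t / u)

norm, _ = integrate.quad(unnorm_post, 0, np.inf)
m1, _ = integrate.quad(lambda u: unnorm_post(u) / u, 0, np.inf)
print(m1 / norm, (2 * n + d - 1) / (2 * t))   # the two values agree
```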
Theorem 1. Let $X_{1},\ldots,X_{n}$ be a sample from the Эрланга distribution with probability density function (1), and let $t=\sum_{i=1}^{n}x_{i}$ be the observation of $T=\sum_{i=1}^{n}X_{i}$. Then

(i) Under the weighted squared error loss function (7), the Bayes estimator is

$\hat{\theta}_{BW}=\frac{2T}{2n+d}$.  (15)

(ii) The Bayes estimator under the squared log error loss function (9) is

$\hat{\theta}_{BS}=2T\exp\{-\psi(2n+d-1)\}$.  (16)

(iii) The Bayes estimator under the entropy loss function (11) is

$\hat{\theta}_{BE}=\frac{2T}{2n+d-1}$.  (17)
Proof. (i) From Equation (14), the posterior distribution of the parameter $\theta$ is the inverse Gamma distribution $I\Gamma(2n+d-1,\,2T)$. That is,

$E(\theta^{-1}\mid x)=\frac{2n+d-1}{2T},\qquad E(\theta^{-2}\mid x)=\frac{(2n+d-1)(2n+d)}{(2T)^{2}}$.  (18)

Thus, the Bayes estimator under the weighted squared error loss function (7) is derived as

$\hat{\theta}_{BW}=\frac{E(\theta^{-1}\mid x)}{E(\theta^{-2}\mid x)}=\frac{2T}{2n+d}$.

(ii) By using (14),

$E(\ln\theta\mid x)=\ln(2T)-\psi(2n+d-1)$,

where $\psi(\cdot)=\Gamma'(\cdot)/\Gamma(\cdot)$ is the digamma function. Then the Bayes estimator under the squared log error loss function (9) comes out to be

$\hat{\theta}_{BS}=\exp\{E(\ln\theta\mid x)\}=2T\exp\{-\psi(2n+d-1)\}$.

(iii) By Eqs. (12) and (18), the Bayes estimator under the entropy loss function (11) is given by

$\hat{\theta}_{BE}=\left[E(\theta^{-1}\mid x)\right]^{-1}=\frac{2T}{2n+d-1}$.
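A small helper (our own sketch, not code from the paper) computes the three Bayes estimators of Theorem 1 from a sample:

```python
# Bayes estimators (15)-(17) for a given quasi-prior exponent d.
import numpy as np
from scipy.special import digamma

def bayes_estimators(x, d=1.0):
    n, T = len(x), float(np.sum(x))
    theta_bw = 2 * T / (2 * n + d)                       # Eq. (15)
    theta_bs = 2 * T * np.exp(-digamma(2 * n + d - 1))   # Eq. (16)
    theta_be = 2 * T / (2 * n + d - 1)                   # Eq. (17)
    return theta_bw, theta_bs, theta_be

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.0, size=25)   # Эрланга sample with theta = 2
print(bayes_estimators(x, d=1.0))              # three values near theta = 2
```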
4. Minimax Estimation of the Эрланга Distribution
This section derives the minimax estimators of the parameter of the Эрланга distribution by using Lehmann's theorem, which rests on a Bayes estimator obtained under a specific prior distribution and loss function. Lehmann's theorem is stated as follows:
Lemma 1. Let $F=\{F_{\theta},\ \theta\in\Theta\}$ be a family of distribution functions and $D$ a class of estimators of $\theta$. Suppose that $\delta^{*}\in D$ is a Bayes estimator derived on the basis of a prior distribution $\pi(\theta)$ on the parameter space $\Theta$. If the risk function $R(\delta^{*},\theta)$ is constant on $\Theta$, then $\delta^{*}$ is a minimax estimator of $\theta$.
Theorem 2. Let $X_{1},\ldots,X_{n}$ be a sample drawn from the Эрланга distribution with pdf (1), and suppose that $t=\sum_{i=1}^{n}x_{i}$ is the observation of the statistic $T=\sum_{i=1}^{n}X_{i}$. Then

(i) Under the weighted squared error loss function (7), $\hat{\theta}_{BW}=\frac{2T}{2n+d}$ is the minimax estimator of the parameter $\theta$;

(ii) Under the squared log error loss function (9), $\hat{\theta}_{BS}=2T\exp\{-\psi(2n+d-1)\}$ is the minimax estimator of the parameter $\theta$;

(iii) Under the entropy loss function (11), $\hat{\theta}_{BE}=\frac{2T}{2n+d-1}$ is the minimax estimator of the parameter $\theta$.
Proof. To apply Lehmann's theorem, we need to calculate the risk functions of the Bayes estimators and show that these risk functions are constant.

For case (i), the risk function of the Bayes estimator $\hat{\theta}_{BW}$ under the weighted squared error loss function (7) is

$R(\hat{\theta}_{BW},\theta)=E\left[\frac{(\hat{\theta}_{BW}-\theta)^{2}}{\theta^{2}}\right]=\frac{1}{\theta^{2}}\left[\frac{4}{(2n+d)^{2}}E(T^{2})-\frac{4\theta}{2n+d}E(T)+\theta^{2}\right]$.

From Equation (6), we have $E(T)=n\theta$ and $E(T^{2})=\frac{n(2n+1)}{2}\theta^{2}$. Consequently,

$R(\hat{\theta}_{BW},\theta)=\frac{2n(2n+1)}{(2n+d)^{2}}-\frac{4n}{2n+d}+1$.

Then, for the Bayes estimator $\hat{\theta}_{BW}$, the risk function $R(\hat{\theta}_{BW},\theta)$ is a constant on the parameter space. So, according to Lemma 1, $\hat{\theta}_{BW}$ is the minimax estimator of the parameter $\theta$ under the weighted squared error loss function (7).
For case (ii), the risk function of the Bayes estimator $\hat{\theta}_{BS}$ is

$R(\hat{\theta}_{BS},\theta)=E\left[(\ln\hat{\theta}_{BS}-\ln\theta)^{2}\right]=E\left[\left(\ln\frac{2T}{\theta}-\psi(2n+d-1)\right)^{2}\right]$.

Since $T\sim\Gamma(2n,\theta/2)$, the random variable $W=2T/\theta$ follows the Gamma distribution $\Gamma(2n,1)$, whose distribution is free of $\theta$. Differentiating $\Gamma(s)=\int_{0}^{\infty}w^{s-1}e^{-w}\,\mathrm{d}w$ with respect to $s$ under the integral sign yields

$E(\ln W)=\psi(2n),\qquad E\left[(\ln W)^{2}\right]=\psi'(2n)+\psi^{2}(2n)$.

From the above results, we have

$R(\hat{\theta}_{BS},\theta)=\psi'(2n)+\left[\psi(2n)-\psi(2n+d-1)\right]^{2}$.

Then $R(\hat{\theta}_{BS},\theta)$ is also a constant with respect to the parameter $\theta$. So, according to Lemma 1, $\hat{\theta}_{BS}$ is a minimax estimator of the parameter $\theta$ under the squared log error loss function.
For case (iii), the risk function of the Bayes estimator $\hat{\theta}_{BE}$ can be obtained as follows:

$R(\hat{\theta}_{BE},\theta)=E\left[\frac{\hat{\theta}_{BE}}{\theta}-\ln\frac{\hat{\theta}_{BE}}{\theta}-1\right]=\frac{2E(T)}{(2n+d-1)\theta}-E\left(\ln\frac{2T}{\theta}\right)+\ln(2n+d-1)-1$
$=\frac{2n}{2n+d-1}-\psi(2n)+\ln(2n+d-1)-1$.

Then $R(\hat{\theta}_{BE},\theta)$ is also a constant with respect to the parameter $\theta$. So, according to Lemma 1, $\hat{\theta}_{BE}$ is a minimax estimator of the parameter $\theta$ under the entropy loss function.
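The constancy of these risks is straightforward to confirm by simulation. The sketch below (our own check, not from the paper) estimates the entropy-loss risk of $\hat{\theta}_{BE}$ for several values of $\theta$ and shows that it stays the same:

```python
# Monte Carlo check that R(theta_be, theta) under entropy loss (11)
# does not depend on theta, as Theorem 2 (iii) asserts.
import numpy as np

rng = np.random.default_rng(2)
n, d, reps = 20, 1.0, 200_000

for theta in (0.5, 1.0, 4.0):
    T = rng.gamma(shape=2 * n, scale=theta / 2.0, size=reps)  # T ~ Gamma(2n, theta/2)
    ratio = (2 * T / (2 * n + d - 1)) / theta                 # theta_be / theta
    print(theta, np.mean(ratio - np.log(ratio) - 1))          # same value each time
```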
5. Performances of Bayes Estimators
To illustrate the performance of these Bayes estimators, the squared error loss function is used to compare them. Let $R_{W}(\theta)$, $R_{S}(\theta)$ and $R_{E}(\theta)$ denote the risk functions of the estimators $\hat{\theta}_{BW}$, $\hat{\theta}_{BS}$ and $\hat{\theta}_{BE}$ relative to the squared error loss, respectively. Using $E(T)=n\theta$ and $E(T^{2})=\frac{n(2n+1)}{2}\theta^{2}$, they can easily be derived as follows:

$R_{W}(\theta)=\theta^{2}\left[\frac{2n(2n+1)}{(2n+d)^{2}}-\frac{4n}{2n+d}+1\right]$,
$R_{S}(\theta)=\theta^{2}\left[2n(2n+1)e^{-2\psi(2n+d-1)}-4ne^{-\psi(2n+d-1)}+1\right]$,
$R_{E}(\theta)=\theta^{2}\left[\frac{2n(2n+1)}{(2n+d-1)^{2}}-\frac{4n}{2n+d-1}+1\right]$.
Let $\rho_{W}(d)$, $\rho_{S}(d)$ and $\rho_{E}(d)$ be the ratios of these risk functions to the risk $R_{MLE}(\theta)=\theta^{2}/(2n)$ of the maximum likelihood estimator (5); since every risk above is $\theta^{2}$ times a constant, the ratios depend only on $n$ and $d$. They are plotted in Figs. 1-4 for different sample sizes ($n=10,20,30,50$).
Figure 1. Performance of estimators with n=10.
Figure 2. Performance of estimators with n=20.
Figure 3. Performance of estimators with n=30.
Figure 4. Performance of estimators with n=50.
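The exact quantities plotted in Figs. 1-4 cannot be recovered from the extracted text; assuming the ratios are taken with respect to the MLE risk as above, curves of this kind can be reproduced with the following sketch (axis ranges are our own choice):

```python
# Risk ratios of the three Bayes estimators to the MLE under squared error
# loss, plotted against the quasi-prior exponent d.
import numpy as np
from scipy.special import digamma
import matplotlib.pyplot as plt

def se_risk_over_theta2(c, n):
    # E[(cT - theta)^2] / theta^2 for T ~ Gamma(2n, theta/2)
    return c ** 2 * n * (2 * n + 1) / 2 - 2 * c * n + 1

d = np.linspace(0.5, 5.0, 200)
for n in (10, 20, 30, 50):
    r_mle = 1.0 / (2 * n)   # MLE risk in units of theta^2
    for c, label in [(2 / (2 * n + d), "BW"),
                     (2 * np.exp(-digamma(2 * n + d - 1)), "BS"),
                     (2 / (2 * n + d - 1), "BE")]:
        plt.plot(d, se_risk_over_theta2(c, n) / r_mle, label=f"{label}, n={n}")
plt.xlabel("d"); plt.ylabel("risk ratio to MLE"); plt.legend(fontsize=6); plt.show()
```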
From Figure 1 to Figure 4, we see that none of these estimators is uniformly better than the others. In practice, we therefore recommend selecting the estimator according to the value of the prior parameter d when the quasi-prior (13) is assumed as the prior distribution.
6. Conclusion
This paper derived Bayes estimators of the parameter of the Эрланга distribution under the weighted squared error loss, squared log error loss and entropy loss functions. Numerical comparisons show that the risk functions of these estimators, evaluated under the squared error loss function, all decrease as the sample size n increases. The risk functions get closer and closer to one another when the sample size n is large, such as n > 50.
Acknowledgement
This study is partially supported by the Natural Science Foundation of Hunan Province (No. 2015JJ3030) and the Foundation of Hunan Educational Committee (No. 15C0228). The author also gratefully acknowledges the helpful comments and suggestions of the reviewers, which have improved the presentation.
References