Minimax Estimation of the Parameter of Maxwell Distribution Under Different Loss Functions
Department of Basic Subjects, Hunan University of Finance and Economics, Changsha, China
To cite this article:
Lanping Li. Minimax Estimation of the Parameter of Maxwell Distribution Under Different Loss Functions. American Journal of Theoretical and Applied Statistics. Vol. 5, No. 4, 2016, pp. 202-207. doi: 10.11648/j.ajtas.20160504.16
Received: May 21, 2016; Accepted: June 6, 2016; Published: June 23, 2016
Abstract: The aim of this article is to study Bayes estimation and minimax estimation of the parameter of the Maxwell distribution. Bayes estimators are obtained under a non-informative quasi-prior distribution and three loss functions, namely the weighted squared error, squared log error and entropy loss functions. The minimax estimators of the parameter are then obtained by using Lehmann's theorem. Finally, the performances of these estimators are compared in terms of their risks.
Keywords: Bayes Estimator, Minimax Estimator, Squared Log Error Loss, Entropy Loss, Maxwell Distribution
1. Introduction
The Maxwell distribution was first introduced by Maxwell in 1860, and since then the study and application of the Maxwell distribution have received great attention. In 1989, Tyagi and Bhattacharya first used the Maxwell distribution as a lifetime model and obtained the minimum variance unbiased estimator and the Bayes estimator of its parameter and reliability function. Chaturvedi and Rani studied Bayesian reliability estimation of the generalized Maxwell failure distribution. Podder and Roy studied estimation of the parameter of this distribution under the modified linear exponential (MLINEX) loss function. Bekker and Roux discussed the maximum likelihood estimator (MLE) and Bayes estimators of the truncated first moment and the hazard function of the Maxwell distribution. Dey and Maiti derived Bayes estimators for the Maxwell distribution by considering non-informative and conjugate prior distributions under three loss functions, namely the quadratic, squared-log error and MLINEX loss functions. References [6-8] studied reliability estimation for the Maxwell distribution based on Type-II censored, progressively Type-II censored and randomly censored samples, respectively. A further reference obtained Bayes estimators under the quadratic loss function using the non-informative Jeffreys prior and informative priors, namely the Gumbel Type-II and conjugate (inverted gamma and inverted Levy) priors. More details about the Maxwell distribution can be found in [10-12].
This paper is devoted to the minimax estimation of the unknown scale parameter of the Maxwell distribution with the following probability density function (pdf) (Dey and Maiti):
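A scale-parameter form of the Maxwell density commonly used in this literature (e.g., by Tyagi and Bhattacharya and by Dey and Maiti), and the form assumed in the illustrative sketches added later in this paper, is
\[
f(x;\theta)=\frac{4}{\sqrt{\pi}}\,\theta^{3/2}x^{2}e^{-\theta x^{2}},\qquad x>0,\ \theta>0 .
\]
Under this form E(X^2)=3/(2\theta), so \theta acts as an inverse scale; if Eq. (1) uses a different but equivalent parametrization, the constants in the sketches change accordingly.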
Minimax estimation is an important part of statistical estimation theory and was introduced by Abraham Wald. Minimax estimation theory has drawn great attention from many researchers. Roy et al. studied minimax estimation of the scale parameter of the Weibull distribution under quadratic and MLINEX loss functions. Podder et al. derived the minimax estimator of the parameter of the Pareto distribution under quadratic and MLINEX loss functions. Dey discussed the minimax estimator of the parameter of the Rayleigh distribution. Shadrokh and Pazira studied the minimax estimator of the parameter of the Minimax distribution under several loss functions.
The purpose of this paper is to study maximum likelihood and Bayes estimation of the parameter of the Maxwell distribution. Further, by using Lehmann's theorem, we derive the corresponding minimax estimators under three loss functions, namely the weighted squared error, squared log error and entropy loss functions.
2. Preliminary Knowledge
2.1. Maximum Likelihood Estimation
Let X1, X2, …, Xn be a sequence of independent and identically distributed (i.i.d.) random variables from the Maxwell distribution with pdf (1), and let x1, x2, …, xn be the corresponding observations. The likelihood function of θ for the given sample observations is
where x = (x1, x2, …, xn) is the observed value of X = (X1, X2, …, Xn).
By solving the log-likelihood equation, the maximum likelihood estimator of θ is easily derived as follows:
By Eq. (1), we can easily show that the associated sufficient statistic follows a Gamma distribution, which has the following probability density function (p. 283):
Here, Γ(·) is the Gamma function.
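As a worked sketch under the Maxwell parametrization assumed in the Introduction (the exact constants in Eqs. (2)-(4) may differ), the likelihood, the MLE and the sampling distribution of the sufficient statistic take the form
\[
L(\theta)\propto\theta^{3n/2}\exp\Bigl(-\theta\sum_{i=1}^{n}x_i^{2}\Bigr),\qquad
\hat\theta_{ML}=\frac{3n}{2T},\quad T=\sum_{i=1}^{n}X_i^{2},
\]
\[
T\sim\mathrm{Gamma}\Bigl(\tfrac{3n}{2},\,\theta\Bigr),\qquad
f_T(t)=\frac{\theta^{3n/2}}{\Gamma(3n/2)}\,t^{3n/2-1}e^{-\theta t},\quad t>0,
\]
where the Gamma distribution is written in its shape-rate form.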
2.2. Loss Function
Loss functions play an important role in Bayesian analysis. The most commonly used losses are symmetric, the squared error loss function being considered most often. Under squared error loss, overestimation and underestimation are treated as equally serious. In many practical problems, however, overestimation and underestimation have different consequences. To overcome this difficulty, Zellner proposed an asymmetric loss function known as the LINEX loss function, and Brown, in 1968, proposed another asymmetric loss function for scale parameter estimation, called the squared log error loss (Kiapour and Nematollahi):
This loss is balanced, and it tends to infinity as the estimator tends to zero or to infinity. The risk function attains its minimum at the following Bayes estimator:
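For reference, the squared log error loss and its general Bayes solution can be written in the form commonly used in this literature (the labels below are introduced here) as
\[
L_{1}(\hat\theta,\theta)=\bigl(\ln\hat\theta-\ln\theta\bigr)^{2},
\qquad
\hat\theta_{B}=\exp\bigl\{E(\ln\theta\mid x)\bigr\},
\]
provided the posterior expectation E(ln θ | x) is finite.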
In many practical situations, it appears more realistic to express the loss in terms of the ratio of the estimator to the true parameter value. In this case, Dey et al. pointed out that a useful asymmetric loss function is the entropy loss function:
The Bayes estimator under the entropy loss is given by
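In the form used by Dey et al. (up to a positive scale factor), the entropy loss and its Bayes solution are
\[
L_{2}(\hat\theta,\theta)=\frac{\hat\theta}{\theta}-\ln\frac{\hat\theta}{\theta}-1,
\qquad
\hat\theta_{B}=\bigl[E(\theta^{-1}\mid x)\bigr]^{-1},
\]
again provided the posterior expectation exists; the label L2 is introduced here.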
3. Bayesian Estimation
In this section, we estimate θ by considering the weighted squared error, squared log error and entropy loss functions.
We further assume that some prior knowledge about the parameter θ is available to the investigator from past experience with the Maxwell model. This prior knowledge can often be summarized in terms of a so-called prior density on the parameter space of θ. In the following discussion, we assume the Jeffreys-type non-informative quasi-prior density defined as
g(θ) ∝ 1/θ^d,  θ > 0, d ≥ 0.
Hence, d = 0 leads to a diffuse prior and d = 1 to a non-informative prior.
Combining the likelihood function (3) and the prior density (10), we obtain the posterior density of θ as
This is a Gamma distribution.
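As a sketch under the Maxwell parametrization assumed in the Introduction and the quasi-prior g(θ) ∝ 1/θ^d, the posterior is
\[
\pi(\theta\mid x)\propto\theta^{3n/2-d}e^{-\theta T},
\qquad\text{i.e.}\qquad
\theta\mid x\sim\mathrm{Gamma}\Bigl(\tfrac{3n}{2}-d+1,\;T\Bigr),\qquad T=\sum_{i=1}^{n}x_i^{2},
\]
with the Gamma distribution in shape-rate form; the shape constant depends on the exact form of Eq. (1).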
Theorem 1. Let X1, X2, …, Xn be a sample from the Maxwell distribution with pdf (1), and let x1, x2, …, xn be the corresponding observations.
(i) Under the weighted squared error loss function
the Bayes estimator is given by
(ii) The Bayes estimator under the squared log error loss function comes out to be
Here, ψ(·) denotes the digamma function.
(iii) The Bayes estimator under the entropy loss function is obtained as
Proof. (i) By formula (11), we know that
Thus, the Bayes estimator under the weighted squared error loss function is given by
We now prove the next case.
(ii) By using (11),
Then the Bayes estimator under the squared log error loss function comes out to be
(iii) By Eqs. (10) and (17), the Bayes estimator under the entropy loss function is given by
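A minimal computational sketch of these three estimators, assuming the parametrization and posterior sketched above, is given below. The function name bayes_estimators is hypothetical, and the weight in the weighted squared error loss is taken here as 1/θ² (a common choice in such minimax settings), which may differ from the weight used in the paper's loss function.

import numpy as np
from scipy.special import digamma

def bayes_estimators(x, d):
    # Sketch: assumes Maxwell pdf f(x; theta) = (4/sqrt(pi)) * theta^(3/2) * x^2 * exp(-theta*x^2),
    # quasi-prior 1/theta^d, hence posterior theta | x ~ Gamma(3n/2 - d + 1, rate = sum(x_i^2)).
    x = np.asarray(x, dtype=float)
    n = x.size
    T = np.sum(x ** 2)
    a = 1.5 * n - d + 1.0                 # posterior shape (assumed)
    theta_w = (a - 2.0) / T               # weighted squared error, assumed weight 1/theta^2 (needs a > 2)
    theta_log = np.exp(digamma(a)) / T    # squared log error: exp(E[log theta | x])
    theta_ent = (a - 1.0) / T             # entropy loss: 1 / E[1/theta | x] (needs a > 1)
    return theta_w, theta_log, theta_ent

For instance, bayes_estimators(x, d=1) evaluates the three estimates under the non-informative choice d = 1.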
4. Minimax Estimation
The most important elements in the minimax approach are the specification of a prior distribution and a loss function, through which a Bayesian method can be applied. The derivation of minimax estimators here depends primarily on a theorem due to Lehmann, which can be stated as follows:
Lemma 1 (Lehmann's Theorem). Let F = {F_θ : θ ∈ Θ} be a family of distribution functions and let D be a class of estimators of θ. Suppose that δ* ∈ D is a Bayes estimator against a prior distribution π(θ) on Θ and that its risk function R(δ*, θ) is constant on Θ; then δ* is a minimax estimator of θ.
Theorem 2. Let X1, X2, …, Xn be a sample from the Maxwell distribution with probability density function (1). Then:
(i) The Bayes estimator obtained in Theorem 1 (i) is the minimax estimator of the parameter θ under the weighted squared error loss function.
(ii) The Bayes estimator obtained in Theorem 1 (ii) is the minimax estimator of the parameter θ under the squared log error loss function.
(iii) The Bayes estimator obtained in Theorem 1 (iii) is the minimax estimator of the parameter θ under the entropy loss function.
Proof. To prove the theorem we shall use Lehmann's theorem, which has been stated above. The Bayes estimators of θ have already been obtained in Theorem 1; thus, if we can show that the risk of each of these estimators is constant, Theorem 2 will be proved.
(i) The risk function of the estimator is
From this conclusion, we have
This risk is a constant. So, according to Lehmann's theorem, it follows that the Bayes estimator in Theorem 1 (i) is the minimax estimator of the parameter of the Maxwell distribution under the weighted squared error loss function.
Now we are going to prove the next case.
(ii) The risk function of the estimator is
From this conclusion, we have
Using the fact
and hence we can show that
We then obtain
This risk is a constant. So, according to Lehmann's theorem, it follows that the Bayes estimator in Theorem 1 (ii) is the minimax estimator of the parameter of the Maxwell distribution under the squared log error loss function.
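The constancy of the risk in this case rests on standard moments of the logarithm of a Gamma variable. Under the parametrization assumed in the Introduction, Z = θT ~ Gamma(3n/2, 1) does not depend on θ, and
\[
E(\ln Z)=\psi\Bigl(\tfrac{3n}{2}\Bigr),\qquad \mathrm{Var}(\ln Z)=\psi'\Bigl(\tfrac{3n}{2}\Bigr),
\]
where ψ and ψ′ are the digamma and trigamma functions; any risk that depends on the data only through ln(θT) is therefore free of θ.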
Finally, we are going to prove the last case.
(iii) The risk function of the estimator is
This risk is a constant. So, according to Lehmann's theorem, it follows that the Bayes estimator in Theorem 1 (iii) is the minimax estimator of the parameter of the Maxwell distribution under the entropy loss function.
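As an illustrative check under the assumptions sketched earlier (posterior shape a = 3n/2 − d + 1 and T = Σ x_i², so that the entropy-loss estimator is (a − 1)/T), the risk under the entropy loss reduces to
\[
R(\theta)=E\Bigl[\frac{a-1}{\theta T}-\ln\frac{a-1}{\theta T}-1\Bigr]
=\frac{a-1}{3n/2-1}-\ln(a-1)+\psi\Bigl(\tfrac{3n}{2}\Bigr)-1,
\]
which involves n and d but not θ, since θT ~ Gamma(3n/2, 1).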
5. Risk Function
The risk functions of these estimators relative to the squared error loss can easily be shown to be as follows:
Figs. 1-4 plot the ratios of these risk functions, i.e.
From Figs. 1-4, it is clear that none of the estimators uniformly dominates the others. We therefore recommend choosing the estimator according to the value of d when the quasi-prior density is used as the prior distribution.
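A Monte Carlo sketch of such a comparison, under the same assumptions as the earlier code (the helper name simulate_risks is hypothetical, and the ratios are taken relative to the MLE's risk purely for illustration), is as follows.

import numpy as np
from scipy.special import digamma

def simulate_risks(theta=1.0, n=20, d=1.0, n_rep=20000, seed=0):
    # Approximate risks under squared error loss for the MLE and the three
    # Bayes/minimax estimators, assuming the Maxwell pdf sketched in the Introduction.
    rng = np.random.default_rng(seed)
    # Under that pdf, X^2 ~ Gamma(3/2, scale=1/theta), so X = sqrt(G) with G ~ Gamma.
    x = np.sqrt(rng.gamma(shape=1.5, scale=1.0 / theta, size=(n_rep, n)))
    T = np.sum(x ** 2, axis=1)
    a = 1.5 * n - d + 1.0                      # posterior shape (assumed)
    estimators = {
        "MLE": 1.5 * n / T,
        "weighted SE": (a - 2.0) / T,          # assumed weight 1/theta^2
        "squared log": np.exp(digamma(a)) / T,
        "entropy": (a - 1.0) / T,
    }
    return {name: np.mean((est - theta) ** 2) for name, est in estimators.items()}

risks = simulate_risks()
for name, r in risks.items():
    print(f"{name:12s} risk = {r:.5f}   ratio to MLE = {r / risks['MLE']:.3f}")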
6. Conclusion
In this paper, we have derived Bayes and minimax estimators of the parameter of the Maxwell distribution under the weighted squared error, squared log error and entropy loss functions. Simulation results show that the risks of these estimators relative to the squared error loss decrease as the sample size increases. When the sample size n is large, say n > 50, the risks are almost the same.
Acknowledgements
This study is partially supported by the Natural Science Foundation of Hunan Province (No. 2015JJ3030) and the Foundation of the Hunan Educational Committee (No. 15C0228). The author also gratefully acknowledges the helpful comments and suggestions of the reviewers, which have improved the presentation.