
Question 9.4

Let X_{1}, X_{2}, . . . , X_{n} be n i.i.d. random variables which follow an exponential distribution with parameter λ. An intelligent statistician proposes to use the following two estimators to estimate μ = 1/λ:

(i) T_{n}(X) = nX_{min} with X_{min} = min(X_{1}, . . . , X_{n}) and X_{min} ∼ Exp(nλ),
(ii) V_{n}(X) = n^{−1}\Sigma ^{n}_{i=1}X_{i}.

(a) Are both T_{n}(X) and V_{n}(X) (asymptotically) unbiased for μ?
(b) Calculate the mean squared error of both estimators. Which estimator is more efficient?
(c) Is V_{n}(X) MSE consistent, weakly consistent, both, or not consistent at all?

Step-by-Step

(a) T_{n}(X) is unbiased, and therefore also asymptotically unbiased, because

E\left(T_{n}\left(X\right) \right) =E\left(nX_{min}\right) \overset{\left(7.29\right) }{=} n\frac{1}{n\lambda } =\frac{1}{\lambda }=\mu .

Similarly, V_{n}(X) is unbiased, and therefore also asymptotically unbiased, because

E\left(V_{n}\left(X\right) \right)=E \left(\frac{1}{n} \sum\limits_{i=1}^{n}{X_{i}} \right) \overset{\left(7.29\right) }{=} \frac{1}{n} \sum\limits_{i=1}^{n}{E\left(X_{i}\right) }=\frac{1}{n}n\frac{1}{\lambda } =\mu .
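As a quick sanity check of the unbiasedness result, one can simulate both estimators. The following sketch is not part of the original solution; the choices λ = 2, n = 10, and 100,000 replications are arbitrary, and NumPy's exponential sampler is parameterised by the scale 1/λ.

import numpy as np

# Illustrative parameters (assumed values, not from the text)
lam, n, reps = 2.0, 10, 100_000
mu = 1 / lam
rng = np.random.default_rng(0)

# reps samples of size n from Exp(lambda); NumPy uses scale = 1/lambda
X = rng.exponential(scale=mu, size=(reps, n))

T = n * X.min(axis=1)   # T_n(X) = n * X_min
V = X.mean(axis=1)      # V_n(X) = sample mean

print(f"mu          = {mu:.4f}")
print(f"mean of T_n = {T.mean():.4f}")   # both should be close to mu
print(f"mean of V_n = {V.mean():.4f}")

Both empirical means should land close to μ = 0.5, mirroring the unbiasedness shown above.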

(b) To calculate the MSE, we need to determine the bias and the variance of the estimators, as Eq. (9.5) shows. It follows from (a) that both estimators are unbiased and hence the bias is 0. For the variances we get:

MSE_{\theta }\left(T\left(X\right) \right)= Var_{\theta }\left(T\left(X\right) \right)+\left[Bias_{\theta }\left(T\left(X\right) \right)\right] ^{2}.  (9.5)

Var\left(T_{n}\left(X\right)\right) = Var\left(nX_{min}\right) \overset{\left(7.33\right) }{=} n^{2}Var\left(X_{min}\right) =n^{2} \frac{1}{n^{2}\lambda^{2}} =\mu^{2}.

Var\left(V_{n}\left(X\right) \right) =Var\left(\frac{1}{n} \sum\limits_{i=1}^{n}{X_{i}} \right) \overset{\left(7.33\right) }{=} \frac{1}{n^{2}} \sum\limits_{i=1}^{n}{Var\left(X_{i}\right) } =\frac{1}{n^{2}} n\frac{1}{\lambda^{2} }=\frac{1}{n} \mu ^{2}.

Since the mean squared error is the sum of the variance and the squared bias, the MSEs of T_{n}(X) and V_{n}(X) are μ² and n^{−1}μ², respectively. One can see that the larger n is, the more V_{n}(X) outperforms T_{n}(X) in terms of the mean squared error. In other words, V_{n}(X) is more efficient than T_{n}(X) because its variance, and hence its MSE, is lower for any n > 1.
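Continuing the hypothetical simulation from part (a), the empirical mean squared errors can be compared directly; with the assumed λ = 2 and n = 10, one expects roughly μ² = 0.25 for T_{n}(X) and μ²/n = 0.025 for V_{n}(X).

# Empirical MSEs (reusing X, T, V, mu from the sketch in part (a))
mse_T = ((T - mu) ** 2).mean()   # theory: mu^2
mse_V = ((V - mu) ** 2).mean()   # theory: mu^2 / n

print(f"MSE(T_n) ~ {mse_T:.4f}  (theory: {mu**2:.4f})")
print(f"MSE(V_n) ~ {mse_V:.4f}  (theory: {mu**2 / n:.4f})")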

(c) Using the results from (b), we get

\lim_{n\rightarrow \infty} MSE\left(V_{n}\left(X\right) \right) = \lim_{n\rightarrow \infty}\frac{1}{n} \mu ^{2}=0.

This means the MSE approaches 0 as n tends to infinity. Therefore, V_{n}(X) is MSE consistent for μ. Since V_{n}(X) is MSE consistent, it is also weakly consistent.
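The MSE consistency can also be illustrated with the same hypothetical setup: repeating the simulation for increasing n (again with the assumed λ = 2 and an arbitrary 20,000 replications per sample size), the empirical MSE of V_{n}(X) shrinks towards zero at the rate μ²/n.

# Empirical MSE of V_n(X) for growing n (same assumed setup as above)
for n in (10, 100, 1000):
    X = rng.exponential(scale=mu, size=(20_000, n))
    V = X.mean(axis=1)
    print(f"n = {n:4d}: MSE(V_n) ~ {((V - mu) ** 2).mean():.6f}  (theory: {mu**2 / n:.6f})")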
