
Question 9.1

Consider an i.i.d. sample of size n from a Po(λ) distributed random variable X.

(a) Determine the maximum likelihood estimate for λ.
(b) What does the log-likelihood function look like for the following realizations: x_{1} = 4, x_{2} = 3, x_{3} = 8, x_{4} = 6, x_{5} = 6? Plot the function using R. Hint: The curve command can be used to plot functions.
(c) Use the Neyman–Fisher Factorization Theorem to argue that the maximum likelihood estimate obtained in (a) is a sufficient statistic for λ.

Step-by-Step

(a) The exercise tells us that X_{i} \overset{iid}{\sim } Po(λ), i = 1, 2, . . . , n. Consider the realizations x_{1}, x_{2}, . . . , x_{n}: under the assumption of independence, which is fulfilled because the X_{i}’s are i.i.d., we can write the likelihood function as the product of the n PMFs of the Poisson distribution:

L\left(\theta ;x\right) =\prod\limits_{i=1}^{n}{f\left(x_{i};\theta \right) }= \prod\limits_{i=1}^{n}{\frac{\lambda ^{x_{i}}}{x_{i}!}e^{-\lambda } }=\frac{\lambda ^{\Sigma x_{i}}}{\prod{x_{i}!} } e^{-n\lambda }.

It is more convenient to work on the log scale because the log-likelihood is easier to differentiate. The results are identical whether we maximize the likelihood or the log-likelihood because the log transformation is strictly monotone. The log-likelihood function is:

\ln L=\Sigma x_{i}\ln \lambda -\ln \left(x_{1}! \ldots x_{n}!\right)-n\lambda .

Differentiating with respect to λ yields

\frac{\partial \ln L}{\partial \lambda }=\frac{1}{\lambda }\Sigma x_{i}-n \overset{!}{=} 0

which gives us the ML estimate:

\hat{\lambda } =\frac{1}{n}\Sigma x_{i}=\bar{x}.

We need to confirm that the second derivative is < 0 at \hat{\lambda }= \bar{x}; otherwise, the solution would be a minimum rather than a maximum. We get

\frac{\partial^{2} \ln L}{\partial \lambda ^{2}}=-\frac{1}{\hat{\lambda }^{2} }\Sigma x_{i}=-\frac{n}{\hat{\lambda }}\lt 0 .

It follows that the arithmetic mean \bar{x} = \hat{\lambda } is the maximum likelihood estimator for the parameter λ of a Poisson distribution.
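The result λ̂ = x̄ can also be checked numerically. The following R sketch (not part of the original solution; the sample values are taken from part (b)) maximizes the Poisson log-likelihood with `optimize` and compares the maximizer with the sample mean:

```r
# Numerical check: maximize the Poisson log-likelihood for a small
# sample and compare the maximizer with the sample mean.
x <- c(4, 3, 8, 6, 6)               # example data from part (b)
loglik <- function(lambda) {
  sum(dpois(x, lambda, log = TRUE)) # log-likelihood of the i.i.d. sample
}
opt <- optimize(loglik, interval = c(0.01, 20), maximum = TRUE)
opt$maximum                          # approximately mean(x) = 5.4
```

The numerical maximizer agrees with the analytical solution x̄ up to the tolerance of `optimize`.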

(b) Using the results from (a) we can write the log-likelihood function for x_{1}= 4, x_{2} = 3, x_{3} = 8, x_{4} = 6, x_{5} = 6 as:

ln L = 27 ln λ − ln(4! 3! 8! 6! 6!) − 5λ,

since \Sigma x_{i} = 27. We can write down this function in R as follows:

MLP <- function(lambda){
  # log-likelihood of the observed Poisson sample
  27*log(lambda) - log(factorial(4)*factorial(3)*factorial(8)*factorial(6)*factorial(6)) - 5*lambda
}

The function can be plotted using the curve command:

curve(MLP, from=0, to=10)

Figure B.19 shows the log-likelihood function. It can be seen that the function reaches its maximum at \bar{x} = 5.4.
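The location of the maximum can be confirmed numerically as well. The following sketch redefines the log-likelihood from part (b) (so that it is self-contained) and locates its maximum with `optimize`:

```r
# Log-likelihood from part (b), restated for self-containment
MLP <- function(lambda) {
  27 * log(lambda) -
    log(factorial(4) * factorial(3) * factorial(8) * factorial(6) * factorial(6)) -
    5 * lambda
}
# Numerically locate the maximum; it should lie near mean(x) = 27/5 = 5.4
lambda_hat <- optimize(MLP, interval = c(0.01, 10), maximum = TRUE)$maximum
lambda_hat
```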

(c) Using (a) we can write the likelihood function as

L\left(\theta ;x\right) =\prod\limits_{i=1}^{n}{f\left(x_{i};\theta \right) }= \prod\limits_{i=1}^{n}{\frac{\lambda ^{x_{i}}}{x_{i}!}e^{-\lambda } }=\frac{\lambda ^{\Sigma x_{i}}}{\prod{x_{i}!} } e^{-n\lambda }=\underbrace{\lambda^{\Sigma x_{i}}e^{-n\lambda }}_{g\left(t,\lambda \right) } \cdot \underbrace{\frac{1}{\prod{x_{i}!}}}_{h\left(x_{1}, \ldots , x_{n}\right) }.

This means T = \sum ^{n}_{i=1}x_{i} is sufficient for λ. The arithmetic mean \bar{x} = T/n, which is the maximum likelihood estimator, is a one-to-one function of T and is therefore sufficient as well.
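The factorization can be illustrated numerically: two samples with the same sum \Sigma x_{i} have likelihoods that differ only through the constant factor h(x) = 1/\prod x_{i}!, so their ratio does not depend on λ. A sketch (the second sample y is a hypothetical example chosen to have the same sum):

```r
# Likelihood of an i.i.d. Poisson sample
lik <- function(x, lambda) prod(dpois(x, lambda))

x <- c(4, 3, 8, 6, 6)   # sum = 27 (data from part (b))
y <- c(5, 5, 5, 5, 7)   # hypothetical sample, also with sum = 27

# Ratio of the two likelihoods over a range of lambda values:
# it equals prod(factorial(y)) / prod(factorial(x)) for every lambda,
# because g(t, lambda) cancels when t = sum(x) = sum(y).
lambdas <- c(2, 5.4, 9)
ratios <- sapply(lambdas, function(l) lik(x, l) / lik(y, l))
ratios                   # constant across lambda
```

That the ratio is free of λ is exactly what sufficiency of T = \Sigma x_{i} means: once the sum is fixed, the data carry no further information about λ.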

