Theorem 6.5.1. Let X_1, ..., X_n be iid with pdf f(x; θ) for θ ∈ Ω ⊂ R^p. Assume
the regularity conditions hold. Let θ̂_n be a sequence of consistent solutions of the
likelihood equation when the parameter space is the full space Ω. Let θ̂_{0,n} be a
sequence of consistent solutions of the likelihood equation when the parameter space
is the reduced space ω, which has dimension p − q. Let Λ denote the likelihood ratio
test statistic given in (6.5.4). Under H_0, (6.5.1),

$$
-2 \log \Lambda \xrightarrow{D} \chi^{2}(q). \tag{6.5.11}
$$

A proof of this theorem can be found in Rao (1973).
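As a quick numerical aside (not in the text), the sketch below shows how (6.5.11) is
used in practice: an observed value of −2 log Λ is referred to a χ²(q) distribution to
obtain an approximate p-value. The function name lrt_p_value and the numbers are
illustrative placeholders.

```python
# Minimal sketch of using Theorem 6.5.1: under H0, -2 log(Lambda) is
# approximately chi-square with q degrees of freedom, where q is the number of
# restrictions H0 places on the parameter (dim Omega minus dim omega).
from scipy.stats import chi2

def lrt_p_value(neg2_log_lambda, q):
    """Approximate p-value for an observed value of -2 log(Lambda)."""
    return chi2.sf(neg2_log_lambda, df=q)

# Hypothetical numbers: an observed statistic of 5.3 with q = 1 restriction.
print(lrt_p_value(5.3, q=1))   # about 0.02, so H0 is rejected at the 5% level
```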
There are also analogs of the Wald-type and scores-type tests. The Wald-type
test statistic is formulated in terms of the constraints, which define H_0, evaluated
at the mle under Ω. We do not formally state it here, but as the following example
shows, it is often a straightforward formulation. The interested reader can find a
discussion of these tests in Lehmann (1999).
A careful reading of the development of this chapter shows that much of it
remains the same if X is a random vector. The next example demonstrates this.
Example 6.5.2 (Application of a Multinomial Distribution). As an example, consider
a poll for a presidential race with k candidates. Those polled are asked to select
the person for whom they would vote if the election were held tomorrow. Assuming
that those polled are selected independently of one another and that each can select
one and only one candidate, the multinomial model seems appropriate. In this
problem, suppose we are interested in comparing how the two "leaders" are doing.
In fact, say the null hypothesis of interest is that they are equally favorable. This
can be modeled with a multinomial model that has three categories: (1) and (2)
for the two leading candidates and (3) for all other candidates. Our observation is
a vector (X_1, X_2), where X_i is 1 or 0 depending on whether category i is selected
or not. If both are 0, then category (3) has been selected. Let p_i denote the
probability that category i is selected. Then the pmf of (X_1, X_2) is the trinomial
density

$$
f(x_1, x_2; p_1, p_2) = p_1^{x_1} p_2^{x_2} (1 - p_1 - p_2)^{1 - x_1 - x_2}, \tag{6.5.12}
$$


for x_i = 0, 1, i = 1, 2; x_1 + x_2 ≤ 1, where the parameter space is
Ω = {(p_1, p_2) : 0 < p_i < 1, p_1 + p_2 < 1}. Suppose (X_11, X_21), ..., (X_1n, X_2n)
is a random sample from this distribution. We shall consider the hypotheses

$$
H_0: p_1 = p_2 \quad \text{versus} \quad H_1: p_1 \neq p_2. \tag{6.5.13}
$$
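Before deriving the test, it may help to see the model concretely. The following
sketch (not from the text) simulates a random sample of size n from the trinomial
model (6.5.12) and records the category counts T_1 = ∑_i X_1i and T_2 = ∑_i X_2i used
in the derivation below; the values of n, p_1, and p_2 are arbitrary illustrative choices.

```python
# A sketch, under assumed illustrative values of n, p1, and p2, of drawing a
# random sample from the trinomial model (6.5.12): each respondent selects
# candidate 1, candidate 2, or "other" with probabilities p1, p2, 1 - p1 - p2.
import numpy as np

rng = np.random.default_rng(seed=20)
n, p1, p2 = 500, 0.35, 0.30              # illustrative values only

# One multinomial draw of size n is equivalent to summing n independent
# (X1, X2) observations; counts[i] is the number of respondents in category i+1.
counts = rng.multinomial(n, [p1, p2, 1.0 - p1 - p2])
t1, t2 = counts[0], counts[1]            # T1 = sum of X_1i, T2 = sum of X_2i
print(t1, t2, n - t1 - t2)
```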

We first derive the likelihood ratio test. Let T_j = ∑_{i=1}^{n} X_{ji} for j = 1, 2. From
Example 6.4.5, we know that the maximum likelihood estimates are p̂_j = T_j/n, for
j = 1, 2. The value of the likelihood function (6.4.21) at the mles under Ω is

$$
L(\hat{\Omega}) = \hat{p}_1^{\,n\hat{p}_1}\, \hat{p}_2^{\,n\hat{p}_2}\, (1 - \hat{p}_1 - \hat{p}_2)^{n(1 - \hat{p}_1 - \hat{p}_2)}.
$$
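To see where this expression comes from (a step filled in here for convenience), note
that by (6.5.12) the likelihood of the sample, written in terms of the counts T_1 and
T_2, is

$$
L(p_1, p_2) = \prod_{i=1}^{n} p_1^{X_{1i}} p_2^{X_{2i}} (1 - p_1 - p_2)^{1 - X_{1i} - X_{2i}}
            = p_1^{T_1}\, p_2^{T_2}\, (1 - p_1 - p_2)^{n - T_1 - T_2},
$$

and substituting p̂_j = T_j/n, that is, T_j = n p̂_j, yields the displayed value of L(Ω̂).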

Under the null hypothesis, let p be the common value of p_1 and p_2. The pmf of
(X_1, X_2) is

$$
f(x_1, x_2; p) = p^{x_1 + x_2} (1 - 2p)^{1 - x_1 - x_2}; \quad x_1, x_2 = 0, 1;\ x_1 + x_2 \leq 1, \tag{6.5.14}
$$
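To connect the example back to Theorem 6.5.1, the following sketch (not from the
text) computes −2 log Λ numerically from the counts T_1 and T_2. It assumes the
standard result that maximizing the likelihood built from (6.5.14) gives the restricted
mle p̂ = (T_1 + T_2)/(2n); the counts used below are hypothetical. Since H_0 imposes
one constraint (q = 1), the statistic is referred to a χ²(1) distribution.

```python
# A sketch of the likelihood ratio test of H0: p1 = p2 in the trinomial model.
# Assumption (not taken from the excerpt): the mle under H0 is
# p0_hat = (T1 + T2)/(2n), obtained by maximizing the likelihood based on (6.5.14).
import numpy as np
from scipy.stats import chi2

def neg2_log_lambda(t1, t2, n):
    p1_hat, p2_hat = t1 / n, t2 / n              # mles under the full space Omega
    p0_hat = (t1 + t2) / (2 * n)                 # mle under H0 (assumed form)
    loglik_omega = (t1 * np.log(p1_hat) + t2 * np.log(p2_hat)
                    + (n - t1 - t2) * np.log(1 - p1_hat - p2_hat))
    loglik_null = ((t1 + t2) * np.log(p0_hat)
                   + (n - t1 - t2) * np.log(1 - 2 * p0_hat))
    return 2 * (loglik_omega - loglik_null)

# Hypothetical poll: 180 and 150 of n = 500 respondents favor the two leaders.
stat = neg2_log_lambda(180, 150, 500)
print(stat, chi2.sf(stat, df=1))   # large statistic / small p-value rejects H0
```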