Robert V. Hogg, Joseph W. McKean, Allen T. Craig

Maximum Likelihood Methods

Returning to the polling situation discussed at the beginning of this example, we would say the race is too close to call if 0 is in this confidence interval. Equivalently, the test can be based on the test statistic $z = \sqrt{\chi^2_W}$, which has an asymptotic $N(0,1)$ distribution under $H_0$. This form of the test and the confidence interval for $p_1 - p_2$ are computed by the R function p2pair.R, which can be downloaded at the site mentioned in the Preface.
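As a sketch of the computation a function like p2pair.R performs, here is a Python analogue for the single-multinomial (paired) setting. The variance formula $\mathrm{Var}(\hat{p}_1-\hat{p}_2) = [p_1+p_2-(p_1-p_2)^2]/n$, which folds in the negative covariance of multinomial proportions, is an assumption carried over from Example 6.5.2 and is not shown in this excerpt; the function name and signature are hypothetical.

```python
from statistics import NormalDist  # standard library, Python >= 3.8

def paired_prop_test(n, p1_hat, p2_hat, alpha=0.05):
    """Wald test and CI for p1 - p2 from ONE multinomial sample of size n.
    Assumes Var(p1_hat - p2_hat) = [p1 + p2 - (p1 - p2)^2] / n, which
    accounts for Cov(p1_hat, p2_hat) = -p1*p2/n for multinomial counts."""
    d = p1_hat - p2_hat
    se = ((p1_hat + p2_hat - d**2) / n) ** 0.5
    z = d / se                                # asymptotically N(0,1) under H0
    zc = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value, (d - zc * se, d + zc * se)

# Polling illustration: 48% vs. 45% support in a sample of n = 1000.
z, pval, ci = paired_prop_test(1000, 0.48, 0.45)
print(z, pval, ci)  # 0 lies in the interval -> race too close to call
```

With these numbers the 95% interval covers 0, matching the "too close to call" reading above.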
Example 6.5.3 (Two-Sample Binomial Proportions). In Example 6.5.2, we developed tests for $p_1 = p_2$ based on a single sample from a multinomial distribution. Now consider the situation where $X_1, X_2, \ldots, X_{n_1}$ is a random sample from a $b(1, p_1)$ distribution, $Y_1, Y_2, \ldots, Y_{n_2}$ is a random sample from a $b(1, p_2)$ distribution, and the $X_i$s and $Y_j$s are mutually independent. The hypotheses of interest are

$$H_0: p_1 = p_2 \quad \text{versus} \quad H_1: p_1 \neq p_2. \tag{6.5.22}$$

This situation occurs in practice when, for instance, we are comparing the president's rating from one month to the next. The full and reduced model parameter spaces are given respectively by $\Omega = \{(p_1, p_2) : 0 < p_i < 1,\ i = 1, 2\}$ and $\omega = \{(p, p) : 0 < p < 1\}$. The likelihood function for the full model simplifies to


$$L(p_1, p_2) = p_1^{n_1\bar{x}}(1-p_1)^{n_1 - n_1\bar{x}}\, p_2^{n_2\bar{y}}(1-p_2)^{n_2 - n_2\bar{y}}. \tag{6.5.23}$$

It follows immediately that the mles of $p_1$ and $p_2$ are $\bar{x}$ and $\bar{y}$, respectively. Note, for the reduced model, that we can combine the samples into one large sample from a $b(n, p)$ distribution, where $n = n_1 + n_2$ is the combined sample size. Hence, for the reduced model, the mle of $p$ is


$$\hat{p} = \frac{\sum_{i=1}^{n_1} x_i + \sum_{i=1}^{n_2} y_i}{n_1 + n_2} = \frac{n_1\bar{x} + n_2\bar{y}}{n}, \tag{6.5.24}$$

i.e., a weighted average of the individual sample proportions. Using this, the reader is asked to derive the LRT for the hypotheses (6.5.22) in Exercise 6.5.12. We next derive the Wald-type test. Let $\hat{p}_1 = \bar{x}$ and $\hat{p}_2 = \bar{y}$. From the Central Limit Theorem, we have

$$\frac{\sqrt{n_i}\,(\hat{p}_i - p_i)}{\sqrt{p_i(1-p_i)}} \xrightarrow{D} Z_i, \quad i = 1, 2,$$

where $Z_1$ and $Z_2$ are iid $N(0,1)$ random variables. Assume for $i = 1, 2$ that, as $n \to \infty$, $n_i/n \to \lambda_i$, where $0 < \lambda_i < 1$ and $\lambda_1 + \lambda_2 = 1$. As Exercise 6.5.13 shows,



$$\sqrt{n}\,[(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)] \xrightarrow{D} N\!\left(0,\ \frac{1}{\lambda_1}\,p_1(1-p_1) + \frac{1}{\lambda_2}\,p_2(1-p_2)\right). \tag{6.5.25}$$


It follows that the random variable


$$Z = \frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\dfrac{p_1(1-p_1)}{n_1} + \dfrac{p_2(1-p_2)}{n_2}}} \tag{6.5.26}$$

has an approximate $N(0,1)$ distribution. Under $H_0$, $p_1 - p_2 = 0$. We could use $Z$ as a test statistic, provided we replace the parameters $p_1(1-p_1)$ and $p_2(1-p_2)$
