Robert V. Hogg, Joseph W. McKean, Allen T. Craig


8.2.12. Let $X$ have the pdf $f(x;\theta)=\theta^x(1-\theta)^{1-x}$, $x=0,1$, zero elsewhere. We test $H_0\colon \theta=\frac{1}{2}$ against $H_1\colon \theta<\frac{1}{2}$ by taking a random sample $X_1,X_2,\ldots,X_5$ of size $n=5$ and rejecting $H_0$ if $Y=\sum_{i=1}^{n}X_i$ is observed to be less than or equal to a constant $c$.


(a) Show that this is a uniformly most powerful test.

(b) Find the significance level when $c=1$.

(c) Find the significance level when $c=0$.

(d) By using a randomized test, as discussed in Example 4.6.4, modify the tests given in parts (b) and (c) to find a test with significance level $\alpha = \frac{2}{32}$.
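As a quick numerical check on parts (b)-(d), the following sketch (not part of the text) assumes the binomial model for $Y$ under $H_0$ and reads the target level in part (d) as $\alpha = 2/32$:

```python
from math import comb

# Exercise 8.2.12 (b)-(d): under H0, Y = sum of X_i ~ binomial(n = 5, 1/2).
n, theta0 = 5, 0.5

def pmf(y):
    """P(Y = y) under H0: theta = 1/2."""
    return comb(n, y) * theta0**y * (1 - theta0)**(n - y)

# (b) significance level when c = 1: P(Y <= 1)
alpha_b = pmf(0) + pmf(1)   # 6/32

# (c) significance level when c = 0: P(Y <= 0)
alpha_c = pmf(0)            # 1/32

# (d) randomized test: always reject when Y = 0; when Y = 1, reject with
# probability p chosen so that P(Y = 0) + p * P(Y = 1) = 2/32, giving p = 1/5.
p = (2 / 32 - alpha_c) / pmf(1)

print(alpha_b, alpha_c, p)  # 0.1875 0.03125 0.2
```

The randomization in (d) works because the exact levels attainable by non-randomized tests jump from $1/32$ to $6/32$, so an intermediate level such as $2/32$ requires rejecting with some probability strictly between 0 and 1 when $Y=1$.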

8.2.13. Let $X_1,\ldots,X_n$ denote a random sample from a gamma-type distribution with $\alpha=2$ and $\beta=\theta$. Let $H_0\colon \theta=1$ and $H_1\colon \theta>1$.


(a) Show that there exists a uniformly most powerful test for $H_0$ against $H_1$, determine the statistic $Y$ upon which the test may be based, and indicate the nature of the best critical region.

(b) Find the pdf of the statistic $Y$ in part (a). If we want a significance level of 0.05, write an equation that can be used to determine the critical region. Let $\gamma(\theta)$, $\theta\geq 1$, be the power function of the test. Express the power function as an integral.
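A numerical sketch of part (b) may help. It takes $n=5$ purely for illustration and uses the fact that $Y=\sum_{i=1}^n X_i$ then has a gamma distribution with shape $2n$ and scale $\theta$ (an Erlang distribution, since the shape is an integer); neither choice appears in the exercise itself:

```python
import math

# Exercise 8.2.13 sketch with n = 5: under the gamma(alpha = 2, beta = theta)
# model, Y = sum X_i ~ gamma(shape = 2n, scale = theta), and the UMP test
# rejects H0: theta = 1 for large Y.

n = 5
shape = 2 * n  # gamma shape parameter of Y

def gamma_cdf(y, scale=1.0):
    """Erlang CDF: P(Y <= y) for integer shape and the given scale (theta)."""
    x = y / scale
    return 1.0 - math.exp(-x) * sum(x**k / math.factorial(k) for k in range(shape))

# Solve P(Y > c; theta = 1) = 0.05 for the critical value c by bisection.
lo, hi = 0.0, 100.0
for _ in range(200):
    mid = (lo + hi) / 2
    if gamma_cdf(mid) < 0.95:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2

def power(theta):
    """Power function gamma(theta) = P(Y > c) when the true scale is theta."""
    return 1.0 - gamma_cdf(c, scale=theta)

print(round(c, 3), round(power(1.0), 3), round(power(2.0), 3))
```

By construction `power(1.0)` equals the significance level 0.05, and `power(theta)` increases in $\theta$, as part (b)'s integral expression would show analytically.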

8.2.14. Show that the mlr test defined by expression (8.2.3) is an unbiased test for the hypotheses (8.2.1).


8.3 Likelihood Ratio Tests

In the first section of this chapter, we presented the most powerful tests for simple versus simple hypotheses. In the second section, we extended this theory to uniformly most powerful tests for essentially one-sided alternative hypotheses and families of distributions that have a monotone likelihood ratio. What about the general case? That is, suppose the random variable $X$ has pdf or pmf $f(x;\theta)$, where $\theta$ is a vector of parameters in $\Omega$. Let $\omega \subset \Omega$ and consider the hypotheses


$H_0\colon \theta \in \omega$ versus $H_1\colon \theta \in \Omega \cap \omega^c$.  (8.3.1)

There are complications in extending the optimal theory to this general situation,
which are addressed in more advanced books; see, in particular, Lehmann (1986).
We illustrate some of these complications with an example. Suppose $X$ has a $N(\theta_1,\theta_2)$ distribution and that we want to test $\theta_1=\theta_1'$, where $\theta_1'$ is specified. In the notation of (8.3.1), $\theta=(\theta_1,\theta_2)$, $\Omega=\{\theta : -\infty<\theta_1<\infty,\ \theta_2>0\}$, and $\omega=\{\theta : \theta_1=\theta_1',\ \theta_2>0\}$. Notice that $H_0\colon \theta\in\omega$ is a composite null hypothesis. Let $X_1,\ldots,X_n$ be a random sample on $X$.
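Looking ahead, the likelihood ratio statistic that this section develops can be sketched numerically for this very example. The helper below is a hypothetical illustration, not from the text; it plugs the usual normal-theory MLEs into $\Lambda = L(\hat\omega)/L(\hat\Omega)$, where each likelihood is maximized over its own parameter space:

```python
import math
import random

def likelihood_ratio(xs, theta1_prime):
    """Generalized likelihood ratio for H0: theta1 = theta1_prime, theta2 unknown,
    in the N(theta1, theta2) model.  The exponential factors of the two maximized
    likelihoods both reduce to exp(-n/2) and cancel, leaving a variance ratio."""
    n = len(xs)
    xbar = sum(xs) / n
    # MLE of the variance over the full space Omega (mean unrestricted):
    v_full = sum((x - xbar) ** 2 for x in xs) / n
    # MLE of the variance over omega (mean fixed at theta1_prime):
    v_null = sum((x - theta1_prime) ** 2 for x in xs) / n
    return (v_full / v_null) ** (n / 2)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(25)]
lam = likelihood_ratio(data, 0.0)
print(lam)
```

Since $v_{\text{null}} = v_{\text{full}} + (\bar{x}-\theta_1')^2 \geq v_{\text{full}}$, the ratio $\Lambda$ always lies in $(0,1]$, and small values of $\Lambda$ constitute evidence against $H_0$.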
Assume for the moment that $\theta_2$ is known. Then $H_0$ becomes the simple hypothesis $\theta_1=\theta_1'$. This is essentially the situation discussed in Example 8.2.3. There
