Robert V. Hogg, Joseph W. McKean, Allen T. Craig

Optimal Tests of Hypotheses

Remark 8.3.2. We caution the reader on this last test for the equality of two variances. In Remark 8.3.1, we discussed that the one- and two-sample t-tests for means are asymptotically correct. The two-sample variance test of the last example is not, however; see, for example, page 143 of Hettmansperger and McKean (2011). If the underlying distributions are not normal, then the F-critical values may be far from valid critical values (unlike the t-critical values for the means tests, as discussed in Remark 8.3.1). In a large simulation study, Conover, Johnson, and Johnson (1981) showed that instead of having the nominal size of α = 0.05, the F-test for variances using the F-critical values could have significance levels as high as 0.80 in certain nonnormal situations. Thus the two-sample F-test for variances does not possess robustness of validity. It should only be used in situations where the assumption of normality can be justified. See Exercise 8.3.17 for an illustrative data set.
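The size inflation described above is easy to observe by Monte Carlo. The following is a minimal simulation sketch, not from the text: the sample sizes, replication count, and the choice of the t-distribution with 3 degrees of freedom as the nonnormal example are all illustrative assumptions. It estimates the true significance level of the two-sided F-test for variances when both samples come from the same distribution, first normal and then heavy-tailed.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
n, m, alpha, reps = 20, 20, 0.05, 20000  # illustrative choices

# Two-sided F-test critical values for H0: equal variances.
lo = f_dist.ppf(alpha / 2, n - 1, m - 1)
hi = f_dist.ppf(1 - alpha / 2, n - 1, m - 1)

def empirical_size(sampler):
    """Fraction of simulated equal-variance pairs that the F-test rejects."""
    rejects = 0
    for _ in range(reps):
        F = np.var(sampler(n), ddof=1) / np.var(sampler(m), ddof=1)
        rejects += (F < lo) or (F > hi)
    return rejects / reps

size_normal = empirical_size(rng.standard_normal)                  # H0 true, normal data
size_heavy = empirical_size(lambda k: rng.standard_t(3, size=k))   # H0 true, heavy tails

print(f"empirical size, normal: {size_normal:.3f}; t(3): {size_heavy:.3f}")
```

Under normal data the empirical size stays near the nominal 0.05, while under the heavy-tailed t(3) distribution it is well above nominal, illustrating the lack of robustness of validity.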

The corresponding likelihood ratio test for the variance of a normal distribution based on one sample is discussed in Exercise 8.3.9. The cautions raised in Remark 8.3.1 hold for this test also.


Example 8.3.6. Let the independent random variables X and Y have distributions that are N(θ₁, θ₃) and N(θ₂, θ₄). In Example 8.3.1, we derived the likelihood ratio test statistic T of the hypothesis θ₁ = θ₂ when θ₃ = θ₄, while in Example 8.3.5 we obtained the likelihood ratio test statistic F of the hypothesis θ₃ = θ₄. The hypothesis that θ₁ = θ₂ is rejected if the computed |T| ≥ c, where the constant c is selected so that α₂ = P(|T| ≥ c; θ₁ = θ₂, θ₃ = θ₄) is the assigned significance level of the test. We shall show that, if θ₃ = θ₄, the likelihood ratio test statistics for equality of variances and equality of means, respectively F and T, are independent. Among other things, this means that if these two tests based on F and T, respectively, are performed sequentially with significance levels α₁ and α₂, the probability of accepting both these hypotheses, when they are true, is (1 − α₁)(1 − α₂). Thus the significance level of this joint test is α = 1 − (1 − α₁)(1 − α₂).
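To make the joint-level formula concrete, a two-line numerical sketch follows; the choice α₁ = α₂ = 0.05 is an illustrative assumption, not a value from the text.

```python
# Joint significance level of two independent tests run sequentially.
# alpha1 = alpha2 = 0.05 are illustrative values, not from the text.
alpha1, alpha2 = 0.05, 0.05
alpha = 1 - (1 - alpha1) * (1 - alpha2)
print(round(alpha, 4))  # prints 0.0975
```

Note that the joint level 0.0975 is nearly, but not exactly, α₁ + α₂ = 0.10; the cross term α₁α₂ is what separates them.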
Independence of F and T, when θ₃ = θ₄, can be established using sufficiency and completeness. The statistics

    X̄,  Ȳ,  and  ∑ᵢ₌₁ⁿ (Xᵢ − X̄)² + ∑ᵢ₌₁ⁿ (Yᵢ − Ȳ)²

are joint complete sufficient statistics for the three parameters θ₁, θ₂, and θ₃ = θ₄.
Obviously, the distribution of F does not depend upon θ₁, θ₂, or θ₃ = θ₄, and hence F is independent of the three joint complete sufficient statistics. However, T is a function of these three joint complete sufficient statistics alone, and, accordingly, T is independent of F. It is important to note that these two statistics are independent whether θ₁ = θ₂ or θ₁ ≠ θ₂. This permits us to calculate probabilities other than the significance level of the test. For example, if θ₃ = θ₄ and θ₁ ≠ θ₂, then

    P(c₁ < F < c₂, |T| ≥ c) = P(c₁ < F < c₂) P(|T| ≥ c).

The second factor in the right-hand member is evaluated by using the probabilities of a noncentral t-distribution. Of course, if θ₃ = θ₄ and the difference θ₁ − θ₂ is large, we would want the preceding probability to be close to 1 because the event {c₁ < F < c₂, |T| ≥ c} leads to a correct decision, namely, accept θ₃ = θ₄ and reject θ₁ = θ₂.
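The factorization above can be checked numerically. The sketch below is illustrative only: the sample sizes n = m = 15, the mean difference θ₁ − θ₂ = 1, and the common variance θ₃ = θ₄ = 1 are assumed values, and the variance-ratio F and pooled two-sample T are used as the test statistics, as in the examples cited in the text. It estimates the joint probability P(c₁ < F < c₂, |T| ≥ c) and compares it with the product of the marginal probabilities.

```python
import numpy as np
from scipy.stats import f as f_dist, t as t_dist

rng = np.random.default_rng(1)
n, m, reps = 15, 15, 20000
mu_diff = 1.0  # assumed: theta1 - theta2 = 1, with theta3 = theta4 = 1

# Central 95% region for F and two-sided 5% cutoff for T.
c1 = f_dist.ppf(0.025, n - 1, m - 1)
c2 = f_dist.ppf(0.975, n - 1, m - 1)
c = t_dist.ppf(0.975, n + m - 2)

F_in, T_rej, both = 0, 0, 0
for _ in range(reps):
    x = rng.standard_normal(n) + mu_diff
    y = rng.standard_normal(m)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    F = vx / vy                                          # variance-ratio statistic
    sp2 = ((n - 1) * vx + (m - 1) * vy) / (n + m - 2)    # pooled variance
    T = (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / n + 1 / m))
    a = bool(c1 < F < c2)
    b = abs(T) >= c
    F_in += a
    T_rej += b
    both += a and b

p_joint = both / reps
p_prod = (F_in / reps) * (T_rej / reps)
print(f"joint: {p_joint:.3f}, product of marginals: {p_prod:.3f}")
```

Up to Monte Carlo error the two estimates agree, consistent with the independence of F and T when θ₃ = θ₄ even though θ₁ ≠ θ₂ here.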
