Mathematical Methods for Physics and Engineering : A Comprehensive Guide


31.7 HYPOTHESIS TESTING


We now turn to Fisher's $F$-test. Let us suppose that two independent samples of sizes $N_1$ and $N_2$ are drawn from Gaussian distributions with means and variances $\mu_1, \sigma_1^2$ and $\mu_2, \sigma_2^2$ respectively, and we wish to distinguish between the two hypotheses

$$
H_0: \sigma_1^2 = \sigma_2^2 \qquad\text{and}\qquad H_1: \sigma_1^2 \neq \sigma_2^2.
$$

In this case, the generalised likelihood ratio is found to be


$$
\lambda = \frac{(N_1+N_2)^{(N_1+N_2)/2}}{N_1^{N_1/2}\,N_2^{N_2/2}}\,
\frac{\bigl[F(N_1-1)/(N_2-1)\bigr]^{N_1/2}}{\bigl[1+F(N_1-1)/(N_2-1)\bigr]^{(N_1+N_2)/2}},
$$

where $F$ is given by the variance ratio

$$
F = \frac{N_1 s_1^2/(N_1-1)}{N_2 s_2^2/(N_2-1)} \equiv \frac{u^2}{v^2} \qquad (31.123)
$$

and $s_1$ and $s_2$ are the standard deviations of the two samples. On plotting $\lambda$ as a function of $F$, it is apparent that the rejection region $\lambda < \lambda_{\rm crit}$ corresponds to a two-tailed test on $F$. Nevertheless, as we shall see below, by defining the fraction (31.123) appropriately, it is customary to make a one-tailed test on $F$.
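As an illustration (an addition, not part of the original text), the Python sketch below computes the variance ratio (31.123) for two hypothetical Gaussian samples and a one-tailed tail probability, using the standard result that under $H_0$ the ratio of the two unbiased variance estimates follows an $F$-distribution with $(N_1-1, N_2-1)$ degrees of freedom. It also evaluates $\lambda$ from the formula above to show that $\lambda \le 1$ and falls off for both small and large $F$, which is why the region $\lambda < \lambda_{\rm crit}$ is two-tailed in $F$. The sample sizes, data and random seed are purely illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two hypothetical Gaussian samples (illustrative values, not from the text).
x1 = rng.normal(loc=10.0, scale=2.0, size=12)   # N1 = 12
x2 = rng.normal(loc=11.0, scale=2.0, size=15)   # N2 = 15
N1, N2 = len(x1), len(x2)

# Unbiased variance estimates u^2 = N1 s1^2/(N1-1) and v^2 = N2 s2^2/(N2-1).
u2 = np.var(x1, ddof=1)
v2 = np.var(x2, ddof=1)

# Variance ratio (31.123); conventionally the larger estimate is placed on top
# so that a one-tailed test on F can be made.
F = max(u2, v2) / min(u2, v2)
dfn = (N1 - 1) if u2 >= v2 else (N2 - 1)
dfd = (N2 - 1) if u2 >= v2 else (N1 - 1)

# One-tailed tail probability: under H0 the ratio follows an F-distribution.
p_one_tailed = stats.f.sf(F, dfn, dfd)
print(f"F = {F:.3f}, one-tailed P(F' > F | H0) = {p_one_tailed:.3f}")

# Generalised likelihood ratio lambda as a function of F = u^2/v^2.
def gen_lr(F, N1, N2):
    t = F * (N1 - 1) / (N2 - 1)
    return ((N1 + N2) ** ((N1 + N2) / 2) / (N1 ** (N1 / 2) * N2 ** (N2 / 2))
            * t ** (N1 / 2) / (1 + t) ** ((N1 + N2) / 2))

# lambda is at most 1 and decreases for both very small and very large F,
# so the rejection region lambda < lambda_crit is two-tailed in F.
for Fval in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"F = {Fval:5.1f}  lambda = {gen_lr(Fval, N1, N2):.4f}")
```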


The distribution of $F$ may be obtained in a reasonably straightforward manner by making use of the distribution of the sample variance $s^2$ given in (31.122). Under our null hypothesis $H_0$, the two Gaussian distributions share a common variance, which we denote by $\sigma^2$. Changing the variable in (31.122) from $s^2$ to $u^2$, we find that $u^2$ has the sampling distribution


$$
P(u^2|H_0) = \left(\frac{N-1}{2\sigma^2}\right)^{(N-1)/2}
\frac{1}{\Gamma\bigl(\tfrac{1}{2}(N-1)\bigr)}\,
(u^2)^{(N-3)/2}\exp\left[-\frac{(N-1)u^2}{2\sigma^2}\right].
$$
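As a quick numerical check (again an addition, not from the text), this sampling distribution is a gamma distribution with shape $\tfrac{1}{2}(N-1)$ and scale $2\sigma^2/(N-1)$; equivalently, $(N-1)u^2/\sigma^2$ follows a $\chi^2$ distribution with $N-1$ degrees of freedom. The Python sketch below draws many Gaussian samples of an assumed size $N$ and width $\sigma$ and compares the empirical quantiles of $u^2$ with those of this gamma density.

```python
import numpy as np
from scipy import stats

# Hypothetical parameters for the check (not from the text).
N, sigma = 8, 1.5
rng = np.random.default_rng(1)

# Draw many Gaussian samples of size N and form u^2 = N s^2/(N-1),
# i.e. the unbiased variance estimate of each sample.
samples = rng.normal(loc=0.0, scale=sigma, size=(200_000, N))
u2 = samples.var(axis=1, ddof=1)

# P(u^2|H0) above is a gamma density with shape (N-1)/2 and scale 2*sigma^2/(N-1).
gamma = stats.gamma(a=(N - 1) / 2, scale=2 * sigma**2 / (N - 1))

# Compare a few empirical quantiles of u^2 with the analytic ones.
for q in (0.1, 0.5, 0.9):
    print(f"q={q}: empirical {np.quantile(u2, q):.3f}  analytic {gamma.ppf(q):.3f}")
```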

Since $u^2$ and $v^2$ are independent, their joint distribution is simply the product of their individual distributions and is given by


$$
P(u^2|H_0)\,P(v^2|H_0) = A\,(u^2)^{(N_1-3)/2}(v^2)^{(N_2-3)/2}
\exp\left[-\frac{(N_1-1)u^2+(N_2-1)v^2}{2\sigma^2}\right],
$$

where the constant $A$ is given by

$$
A = \frac{(N_1-1)^{(N_1-1)/2}\,(N_2-1)^{(N_2-1)/2}}
{2^{(N_1+N_2-2)/2}\,\sigma^{N_1+N_2-2}\,
\Gamma\bigl(\tfrac{1}{2}(N_1-1)\bigr)\,\Gamma\bigl(\tfrac{1}{2}(N_2-1)\bigr)}. \qquad (31.124)
$$
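As a sanity check on (31.124) (an addition for illustration), $A$ should normalise the joint density so that it integrates to unity over $u^2, v^2 \in (0,\infty)$. A minimal sketch, assuming small illustrative values of $N_1$, $N_2$ and $\sigma$:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gammaln

# Hypothetical values for the check (not from the text).
N1, N2, sigma = 6, 9, 1.2

# Constant A of (31.124), evaluated in log form for numerical stability.
logA = (0.5 * (N1 - 1) * np.log(N1 - 1) + 0.5 * (N2 - 1) * np.log(N2 - 1)
        - 0.5 * (N1 + N2 - 2) * np.log(2) - (N1 + N2 - 2) * np.log(sigma)
        - gammaln(0.5 * (N1 - 1)) - gammaln(0.5 * (N2 - 1)))
A = np.exp(logA)

# Joint density of (u^2, v^2) under H0.
def joint(v2, u2):
    return (A * u2 ** ((N1 - 3) / 2) * v2 ** ((N2 - 3) / 2)
            * np.exp(-((N1 - 1) * u2 + (N2 - 1) * v2) / (2 * sigma**2)))

# Integrating over u^2 and v^2 on (0, inf) should give 1.
total, err = dblquad(joint, 0, np.inf, 0, np.inf)
print(f"integral = {total:.6f} (estimated error {err:.1e})")
```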

Now, for fixed $v$ we have $u^2 = Fv^2$ and $d(u^2) = v^2\,dF$. Thus, the joint sampling
