properties and which reduces to the Neyman–Pearson statistic (31.108) in the special case where H0 and H1 are both simple hypotheses.
Consider the quite general, and commonly occurring, case in which the data sample x is drawn from a population P(x|a) with a known (or assumed) functional form that depends on the unknown values of some parameters a_1, a_2, ..., a_M. Moreover, suppose we wish to test the null hypothesis H0 that the parameter values a lie in some subspace S of the full parameter space A. In other words, on the basis of the sample x it is desired to test the null hypothesis H0: (a_1, a_2, ..., a_M lies in S) against the alternative hypothesis H1: (a_1, a_2, ..., a_M lies in S̄), where S̄ is A − S.
Since the functional form of the population is known, we may write down the likelihood function L(x; a) for the sample. Ordinarily, the likelihood will have a maximum as the parameters a are varied over the entire parameter space A. This is the usual maximum-likelihood estimate of the parameter values, which we denote by â. If, however, the parameter values are allowed to vary only over the subspace S then the likelihood function will be maximised at the point â_S, which may or may not coincide with the global maximum â. Now, let us take as our test statistic the generalised likelihood ratio
t(x) = L(x; â_S) / L(x; â),   (31.109)
where L(x; â_S) is the maximum value of the likelihood function in the subspace S and L(x; â) is its maximum value in the entire parameter space A. It is clear that t is a function of the sample values only and must lie between 0 and 1.
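As an illustrative sketch (not from the text itself), the ratio (31.109) can be computed directly for a Gaussian sample with parameters a = (μ, σ), taking S to be the subspace in which μ is fixed at some value μ0 while σ remains free; the sample values and μ0 below are arbitrary choices for illustration.

```python
import math

def log_likelihood(x, mu, sigma):
    # Gaussian log-likelihood ln L(x; mu, sigma)
    n = len(x)
    return (-n * math.log(sigma * math.sqrt(2 * math.pi))
            - sum((xi - mu) ** 2 for xi in x) / (2 * sigma ** 2))

def generalised_lr(x, mu0):
    """t = L(x; a_S-hat) / L(x; a-hat) for S: mu = mu0, sigma free."""
    n = len(x)
    # Global ML estimates over the full parameter space A
    mu_hat = sum(x) / n
    sigma_hat = math.sqrt(sum((xi - mu_hat) ** 2 for xi in x) / n)
    # ML estimate restricted to the subspace S (mu fixed at mu0)
    sigma_S = math.sqrt(sum((xi - mu0) ** 2 for xi in x) / n)
    log_t = (log_likelihood(x, mu0, sigma_S)
             - log_likelihood(x, mu_hat, sigma_hat))
    return math.exp(log_t)

sample = [0.3, -1.2, 0.8, 1.9, 0.1, -0.4]  # illustrative data
t = generalised_lr(sample, mu0=0.0)
assert 0.0 < t <= 1.0  # t always lies between 0 and 1
```

Note that t = 1 exactly when the restricted maximum coincides with the global one, i.e. when μ0 happens to equal the sample mean; otherwise t < 1.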
We will concentrate on the special case where H0 is the simple hypothesis H0: a = a_0. The subspace S then consists of only the single point a_0. Thus (31.109) becomes
t(x) = L(x; a_0) / L(x; â),   (31.110)
and the sampling distribution P(t|H0) can be determined (in principle). As in the previous subsection, the best rejection region for a given significance α is simply t < t_crit, where the value t_crit depends on α. Moreover, as before, an equivalent procedure is to use as a test statistic u = f(t), where f(t) is any monotonically increasing function of t; the corresponding rejection region is then u < f(t_crit). Similarly, one may use a test statistic v = g(t), where g(t) is any monotonically decreasing function of t; the rejection region then becomes v > g(t_crit). Finally, we note that if H1 is also a simple hypothesis H1: a = a_1, then (31.110) reduces to the Neyman–Pearson test statistic (31.108).
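A minimal sketch of this procedure, under assumptions not taken from the text: a Gaussian population with known σ and simple H0: μ = μ0, so that t is given by (31.110). The sampling distribution P(t|H0) is approximated by simulation to find t_crit, and the rejection region is then shown to be equivalent under the monotonically decreasing transform v = g(t) = −2 ln t (the sample size, α, and seed are arbitrary choices).

```python
import math
import random

random.seed(1)

def t_statistic(x, mu0, sigma):
    # Simple-H0 likelihood ratio (31.110) for a Gaussian with known sigma:
    # t = L(x; mu0) / L(x; mu_hat), with mu_hat the sample mean.
    n = len(x)
    xbar = sum(x) / n
    return math.exp(-n * (xbar - mu0) ** 2 / (2 * sigma ** 2))

# Approximate the sampling distribution P(t|H0) by simulation
mu0, sigma, n, alpha = 0.0, 1.0, 20, 0.05
ts = sorted(t_statistic([random.gauss(mu0, sigma) for _ in range(n)],
                        mu0, sigma)
            for _ in range(20000))
t_crit = ts[int(alpha * len(ts))]     # reject H0 when t < t_crit

# Equivalent test with the decreasing function g(t) = -2 ln t:
# the rejection region becomes v > g(t_crit)
v_crit = -2.0 * math.log(t_crit)
x = [random.gauss(0.8, sigma) for _ in range(n)]  # data drawn off mu0
t = t_statistic(x, mu0, sigma)
v = -2.0 * math.log(t)
assert (t < t_crit) == (v > v_crit)   # same decision either way
```

Here v = −2 ln t follows a chi-squared distribution with one degree of freedom under H0, so the simulated v_crit should land near the familiar value 3.84 for α = 0.05.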