Applied Statistics and Probability for Engineers

9-3.4 Likelihood Ratio Approach to Development of Test Procedures (CD Only)

Hypothesis testing is one of the most important techniques of statistical inference. Throughout
this book we present many applications of hypothesis testing. While we have emphasized a
heuristic development, many of these hypothesis-testing procedures can be developed using a
general principle called the likelihood ratio principle. Tests developed by this method often
turn out to be “best” test procedures in the sense that they minimize the type II error
probability among all tests that have the same type I error probability.
The likelihood ratio principle is easy to illustrate. Suppose that the random variable X has
a probability distribution that is described by an unknown parameter θ, say, f(x, θ). We wish
to test the hypothesis H₀: θ is in Ω₀ versus H₁: θ is in Ω₁, where Ω₀ and Ω₁ are disjoint sets of
values (such as H₀: θ = θ₀ versus H₁: θ ≠ θ₀). Let X₁, X₂, …, Xₙ be the observations in a
random sample. The joint distribution of these sample observations is

$$ f(x_1, x_2, \ldots, x_n, \theta) = f(x_1, \theta)\, f(x_2, \theta) \cdots f(x_n, \theta) $$
Recall from our discussion of maximum likelihood estimation in Chapter 7 that the likelihood
function, say L(θ), is just this joint distribution considered as a function of the parameter θ.
The likelihood ratio principle for test construction consists of the following steps:


  1. Find the largest value of the likelihood for any θ in Ω₀. This is done by finding the
     maximum likelihood estimator of θ restricted to values within Ω₀ and substituting this
     value of θ back into the likelihood function. This results in a value of the likelihood
     function that we will call L(Ω₀).

  2. Find the largest value of the likelihood for any θ in Ω₁. Call this the value of the
     likelihood function L(Ω₁).

  3. Form the ratio

     $$ \lambda = \frac{L(\Omega_0)}{L(\Omega_1)} $$

This ratio λ is called the likelihood ratio test statistic.
The test procedure calls for rejecting the null hypothesis H₀ when the value of this ratio is
small, say, whenever λ ≤ k, where k is a constant. Thus, the likelihood ratio principle requires
rejecting H₀ when L(Ω₁) is much larger than L(Ω₀), which would indicate that the sample data
are more compatible with the alternative hypothesis H₁ than with the null hypothesis H₀.
Usually, the constant k would be selected to give a specified value for α, the type I error
probability.
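To make these steps concrete, the sketch below applies the recipe numerically to an assumed model that is not from the text: an exponential distribution with unknown mean θ, tested against a single hypothesized value θ₀. It is a minimal illustration assuming numpy and scipy are available; the simulated data, sample size, and θ₀ are illustrative choices.

```python
# Minimal sketch of the likelihood ratio recipe for an assumed exponential model.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.4, size=25)   # observed sample (simulated here)
theta0 = 1.0                              # hypothesized mean under H0

def log_lik(theta):
    # log-likelihood of an Exponential(mean = theta) sample
    return np.sum(-np.log(theta) - x / theta)

# Step 1: largest likelihood over Omega_0.  Here Omega_0 = {theta0}, so the
# restricted maximum is simply the likelihood evaluated at theta0.
log_L0 = log_lik(theta0)

# Step 2: largest likelihood over the alternative.  For a two-sided alternative
# this is (outside the boundary case) the unrestricted maximum; it is found
# numerically here and should land near the sample mean, the MLE for this model.
res = minimize_scalar(lambda t: -log_lik(t), bounds=(1e-6, 100.0), method="bounded")
log_L1 = -res.fun

# Step 3: form lambda = L(Omega_0) / L(Omega_1), working on the log scale.
lam = np.exp(log_L0 - log_L1)
print(f"theta-hat = {res.x:.3f}, lambda = {lam:.4f}")

# H0 is rejected when lambda <= k, with k chosen so that P(lambda <= k | H0) = alpha,
# e.g. from the exact null distribution when available or by simulating under H0.
```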
These ideas can be illustrated by a hypothesis-testing problem that we have studied
before: that of testing whether the mean of a normal population has a specified value μ₀.
This is the one-sample t-test of Section 9-3. Suppose that we have a sample of n observations
from a normal population with unknown mean μ and unknown variance σ², say, X₁, X₂, …, Xₙ.
We wish to test the hypothesis H₀: μ = μ₀ versus H₁: μ ≠ μ₀. The likelihood function of the
sample is

$$ L(\mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} e^{-\sum_{i=1}^{n}(x_i-\mu)^2 / (2\sigma^2)} $$


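To see the two maximizations carried out for this problem, here is a small numerical sketch that is not part of the original text; it assumes numpy and scipy, and the simulated data and the value μ₀ are illustrative. For the two-sided alternative, the maximum of the likelihood over Ω₁ coincides with the unrestricted maximum except in the boundary case where the sample mean equals μ₀ exactly.

```python
# Sketch of the likelihood ratio for the normal-mean problem (illustrative data).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.normal(loc=5.3, scale=2.0, size=15)   # illustrative sample
n, mu0 = x.size, 5.0                          # H0: mu = mu0

def max_log_lik(mu_fixed=None):
    # Maximize the normal log-likelihood over (mu, sigma^2); if mu_fixed is
    # given, maximize over sigma^2 only (the restricted, Omega_0 case).
    mu = np.mean(x) if mu_fixed is None else mu_fixed
    sigma2 = np.mean((x - mu) ** 2)           # MLE of sigma^2 for that mu
    return np.sum(norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)))

log_L0 = max_log_lik(mu_fixed=mu0)   # step 1: maximum over Omega_0
log_L1 = max_log_lik()               # step 2: (unrestricted) maximum
lam = np.exp(log_L0 - log_L1)        # step 3: the likelihood ratio

# For comparison, the one-sample t statistic of Section 9-3.
t = (np.mean(x) - mu0) / (np.std(x, ddof=1) / np.sqrt(n))
print(f"lambda = {lam:.4f},  t = {t:.3f}")
# lam equals (1 + t**2/(n - 1))**(-n/2) here, so it depends on the data only
# through |t|; rejecting H0 for small lambda reproduces the usual t-test.
print(np.isclose(lam, (1 + t**2 / (n - 1)) ** (-n / 2)))
```

Carrying out these same two maximizations algebraically and simplifying λ is how the t-test of Section 9-3 emerges from the likelihood ratio principle.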
