
We may be interested, though, in testing that $\mu = \mu_0$, where $\mu_0$ is a specified value. Here we are not concerned about the parameter $\sigma^2$. Under $H_0$, the parameter space is the one-dimensional space $\omega = \{(\mu_0, \sigma^2) : \sigma^2 > 0\}$. We say that $H_0$ is defined in terms of one constraint on the space $\Omega$.
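To make this concrete in the notation of the general setup that follows (this restatement is ours, added for clarity): with $\theta = (\mu, \sigma^2)$, the null hypothesis imposes the single constraint
\[
g_1(\theta) = \mu = \mu_0 ,
\]
so here $p = 2$ and $q = 1$, and $\omega$ has dimension $p - q = 1$, in agreement with the description above.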
In general, let $X_1, \ldots, X_n$ be iid with pdf $f(x; \theta)$ for $\theta \in \Omega \subset R^p$. As in the last section, we assume that the regularity conditions listed in (6.1.1), (6.2.1), (6.2.2), and (A.1.1) are satisfied. In this section, we invoke these by the phrase under regularity conditions. The hypotheses of interest are
\[
H_0: \theta \in \omega \quad \text{versus} \quad H_1: \theta \in \Omega \cap \omega^c, \tag{6.5.1}
\]

where $\omega \subset \Omega$ is defined in terms of $q$, $0 < q \leq p$, independent constraints of the form $g_1(\theta) = a_1, \ldots, g_q(\theta) = a_q$. The functions $g_1, \ldots, g_q$ must be continuously differentiable. This implies that $\omega$ is a $(p-q)$-dimensional space. Based on Theorem 6.1.1, the true parameter maximizes the likelihood function, so an intuitive test statistic is given by the likelihood ratio
\[
\Lambda = \frac{\max_{\theta \in \omega} L(\theta)}{\max_{\theta \in \Omega} L(\theta)}. \tag{6.5.2}
\]


Large values (close to 1) of $\Lambda$ suggest that $H_0$ is true, while small values indicate that $H_1$ is true; note that $0 \leq \Lambda \leq 1$ because $\omega \subset \Omega$. For a specified level $\alpha$, $0 < \alpha < 1$, this suggests the decision rule
\[
\text{Reject } H_0 \text{ in favor of } H_1 \text{ if } \Lambda \leq c, \tag{6.5.3}
\]
where $c$ is such that $\alpha = \max_{\theta \in \omega} P_{\theta}[\Lambda \leq c]$. As in the scalar case, this test often has optimal properties; see Section 6.3. To determine $c$, we need to determine the distribution of $\Lambda$, or of a function of $\Lambda$, when $H_0$ is true.
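To illustrate how $c$ might be obtained when the null distribution of $\Lambda$ is not available in closed form, here is a minimal Python sketch (our illustration, not part of the text) that approximates $c$ by simulating $\Lambda$ under $H_0$ for the normal-mean test of Example 6.5.1 below. It uses the fact that the two maximized likelihoods reduce to $(2\pi\hat{\sigma}^2)^{-n/2}e^{-n/2}$ and $(2\pi\hat{\sigma}_0^2)^{-n/2}e^{-n/2}$, where $\hat{\sigma}_0^2 = n^{-1}\sum_i (X_i - \mu_0)^2$ is the restricted maximizer obtained by fixing $\mu = \mu_0$; this shortcut and the function names are ours, worked out ahead of the text.

import numpy as np

def lrt_lambda(x, mu0):
    # Lambda = L(omega-hat) / L(Omega-hat) = (sigma_hat^2 / sigma0_hat^2)^(n/2)
    n = len(x)
    sig2_hat = np.mean((x - np.mean(x)) ** 2)   # unrestricted mle of sigma^2
    sig2_hat0 = np.mean((x - mu0) ** 2)         # restricted mle, mu fixed at mu0
    return (sig2_hat / sig2_hat0) ** (n / 2)

# Under H0 the distribution of Lambda is free of (mu0, sigma^2) for this model,
# so simulating at mu0 = 0, sigma = 1 suffices to approximate c.
rng = np.random.default_rng(0)
n, alpha, reps = 20, 0.05, 100_000
lam = [lrt_lambda(rng.normal(0.0, 1.0, n), 0.0) for _ in range(reps)]
c = np.quantile(lam, alpha)   # reject H0 when Lambda <= c
print(c)

For this particular model a closed-form treatment of the null distribution is possible, so the simulation is meant only as a generic device for calibrating the cutoff.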
Let $\hat{\theta}$ denote the maximum likelihood estimator when the parameter space is the full space $\Omega$ and let $\hat{\theta}_0$ denote the maximum likelihood estimator when the parameter space is the reduced space $\omega$. For convenience, define $L(\hat{\Omega}) = L(\hat{\theta})$ and $L(\hat{\omega}) = L(\hat{\theta}_0)$. Then we can write the likelihood ratio test (LRT) statistic as
\[
\Lambda = \frac{L(\hat{\omega})}{L(\hat{\Omega})}. \tag{6.5.4}
\]
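When the two maximizations in (6.5.4) cannot be carried out in closed form, $\Lambda$ can be computed numerically. The Python sketch below is our illustration only (the names neg_loglik and lrt_lambda are ours, and the normal model is used just so the code is runnable): it maximizes the log-likelihood over $\Omega$ and over $\omega$ with a general-purpose optimizer and forms the ratio on the log scale.

import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    # negative normal log-likelihood in (mu, log sigma); the log-sigma
    # parameterization keeps the optimization unconstrained
    mu, log_sigma = params
    sigma2 = np.exp(2.0 * log_sigma)
    return 0.5 * len(x) * np.log(2.0 * np.pi * sigma2) + np.sum((x - mu) ** 2) / (2.0 * sigma2)

def lrt_lambda(x, mu0):
    # maximize L over the full space Omega (both parameters free)
    full = minimize(neg_loglik, x0=[np.mean(x), np.log(np.std(x))], args=(x,))
    # maximize L over omega: mu is fixed at mu0, only log sigma varies
    null = minimize(lambda p, x: neg_loglik([mu0, p[0]], x),
                    x0=[np.log(np.std(x))], args=(x,))
    # Lambda = L(omega-hat) / L(Omega-hat) = exp(loglik_omega - loglik_Omega)
    return np.exp(full.fun - null.fun)

rng = np.random.default_rng(1)
x = rng.normal(1.0, 2.0, size=30)
print(lrt_lambda(x, mu0=0.0))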


Example 6.5.1 (LRT for the Mean of a Normal pdf). Let $X_1, \ldots, X_n$ be a random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$. Suppose we are interested in testing
\[
H_0: \mu = \mu_0 \quad \text{versus} \quad H_1: \mu \neq \mu_0, \tag{6.5.5}
\]
where $\mu_0$ is specified. Let $\Omega = \{(\mu, \sigma^2) : -\infty < \mu < \infty,\ \sigma^2 > 0\}$ denote the full model parameter space. The reduced model parameter space is the one-dimensional subspace $\omega = \{(\mu_0, \sigma^2) : \sigma^2 > 0\}$. By Example 6.4.1, the mles of $\mu$ and $\sigma^2$ under $\Omega$ are $\hat{\mu} = \overline{X}$ and $\hat{\sigma}^2 = (1/n)\sum_{i=1}^{n}(X_i - \overline{X})^2$, respectively. Under $\Omega$, the maximum value of the likelihood function is
\[
L(\hat{\Omega}) = \frac{1}{(2\pi)^{n/2}} \frac{1}{(\hat{\sigma}^2)^{n/2}} \exp\{-(n/2)\}. \tag{6.5.6}
\]
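As a quick check of (6.5.6) (this intermediate step is ours, spelled out for clarity): substituting $\hat{\mu} = \overline{X}$ and $\hat{\sigma}^2$ into the normal likelihood gives
\[
L(\hat{\mu}, \hat{\sigma}^2) = (2\pi\hat{\sigma}^2)^{-n/2} \exp\left\{ -\frac{1}{2\hat{\sigma}^2}\sum_{i=1}^{n} (X_i - \overline{X})^2 \right\}
= (2\pi\hat{\sigma}^2)^{-n/2} \exp\{-n/2\},
\]
since $\sum_{i=1}^{n} (X_i - \overline{X})^2 = n\hat{\sigma}^2$, which is exactly (6.5.6).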
