
Setting $\partial H(\alpha, \beta)/\partial\alpha = 0$, we obtain
$$\sum_{i=1}^{n} y_i - n\alpha - \beta \sum_{i=1}^{n} (x_i - \bar{x}) = 0. \tag{9.6.2}$$

Since $\sum_{i=1}^{n} (x_i - \bar{x}) = 0$, the equation becomes $\sum_{i=1}^{n} y_i - n\alpha = 0$; hence, the mle of $\alpha$ is
$$\hat{\alpha} = \overline{Y}. \tag{9.6.3}$$
The equation $\partial H(\alpha, \beta)/\partial\beta = 0$ yields, with $\alpha$ replaced by $\bar{y}$,
$$\sum_{i=1}^{n} (y_i - \bar{y})(x_i - \bar{x}) - \beta \sum_{i=1}^{n} (x_i - \bar{x})^2 = 0 \tag{9.6.4}$$
and, hence, the mle of $\beta$ is
$$\hat{\beta} = \frac{\sum_{i=1}^{n} (Y_i - \overline{Y})(x_i - \bar{x})}{\sum_{i=1}^{n} (x_i - \bar{x})^2} = \frac{\sum_{i=1}^{n} Y_i (x_i - \bar{x})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}. \tag{9.6.5}$$

Equations (9.6.2) and (9.6.4) are the estimating equations for the LS solutions for
this simple linear model.
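To make the closed forms concrete, here is a minimal numerical sketch of (9.6.3) and (9.6.5) in Python; the arrays `x` and `y` are hypothetical illustrative data, not from the text, and the use of NumPy is an assumption of this sketch.

```python
# A minimal sketch of the closed-form MLEs (9.6.3) and (9.6.5)
# for the centered model y_i = alpha + beta*(x_i - xbar) + e_i.
# The data below are hypothetical, chosen only for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # fixed regressor values
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # observed responses

xbar = x.mean()
alpha_hat = y.mean()                                          # (9.6.3): alpha-hat = Y-bar
beta_hat = np.sum(y * (x - xbar)) / np.sum((x - xbar) ** 2)   # (9.6.5)

print(alpha_hat, beta_hat)  # 6.02, 1.99 for these data
```

Note that $\hat{\alpha} = \bar{y}$ only because the model is parameterized in terms of the centered regressor $x_i - \bar{x}$; this is precisely what makes the two estimating equations separate so cleanly.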
The fitted value at the point $(x_i, y_i)$ is given by
$$\hat{y}_i = \hat{\alpha} + \hat{\beta}(x_i - \bar{x}), \tag{9.6.6}$$
which is shown in Figure 9.6.1. The fitted value $\hat{y}_i$ is also called the predicted value of $y_i$ at $x_i$. The residual at the point $(x_i, y_i)$ is given by
$$\hat{e}_i = y_i - \hat{y}_i, \tag{9.6.7}$$
which is also shown in Figure 9.6.1. Residual means "what is left," and the residual in regression is exactly that, i.e., what is left over after the fit. The relationship between the fitted values and the residuals is explored in Remark 9.6.3 and in Exercise 9.6.13.
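Continuing the sketch above, the fitted values (9.6.6) and residuals (9.6.7) are one line each; again the data are hypothetical, and the snippet is self-contained so it can be run on its own.

```python
# Fitted values (9.6.6) and residuals (9.6.7) on hypothetical data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
xbar = x.mean()
alpha_hat = y.mean()
beta_hat = np.sum(y * (x - xbar)) / np.sum((x - xbar) ** 2)

y_fit = alpha_hat + beta_hat * (x - xbar)  # fitted (predicted) values (9.6.6)
resid = y - y_fit                          # residuals (9.6.7): what is left after the fit

# The residuals average to 0 (up to floating-point error); see Exercise 9.6.13.
print(resid, resid.mean())
```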
To find the maximum likelihood estimator of $\sigma^2$, consider the partial derivative
$$\frac{\partial[-\log L(\alpha, \beta, \sigma^2)]}{\partial(\sigma^2)} = \frac{n}{2\sigma^2} - \frac{\sum_{i=1}^{n} [y_i - \alpha - \beta(x_i - \bar{x})]^2}{2(\sigma^2)^2}.$$

Setting this equal to zero and replacing $\alpha$ and $\beta$ by their solutions $\hat{\alpha}$ and $\hat{\beta}$, we obtain
$$\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} [Y_i - \hat{\alpha} - \hat{\beta}(x_i - \bar{x})]^2. \tag{9.6.8}$$

Of course, due to the invariance of mles, $\hat{\sigma} = \sqrt{\hat{\sigma}^2}$. Note that in terms of the residuals, $\hat{\sigma}^2 = n^{-1} \sum_{i=1}^{n} \hat{e}_i^2$. As shown in Exercise 9.6.13, the average of the residuals is 0.
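The residual form of (9.6.8) makes $\hat{\sigma}^2$ a one-liner: it is simply the mean squared residual. A final self-contained sketch, on the same hypothetical data as above:

```python
# The mle of sigma^2 as the mean squared residual, per (9.6.8),
# and sigma-hat by the invariance of mles. Hypothetical data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
xbar = x.mean()
alpha_hat = y.mean()
beta_hat = np.sum(y * (x - xbar)) / np.sum((x - xbar) ** 2)
resid = y - (alpha_hat + beta_hat * (x - xbar))

sigma2_hat = np.mean(resid ** 2)  # (9.6.8): divides by n, not n - 2
sigma_hat = np.sqrt(sigma2_hat)   # mle of sigma by invariance
print(sigma2_hat, sigma_hat)
```

Note the divisor is $n$, as the mle requires, rather than the $n-2$ used by the usual unbiased estimator of $\sigma^2$.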