of log L with respect to the mean μ and variance σ^2 or using commercial
software. We obtain the following estimates:
\mu = 1.5600,\qquad \sigma = 0.4565,\qquad \sigma^2 = 0.2084
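As a quick check, these closed-form estimates can be reproduced in a few lines. The sketch below uses Python with NumPy and a hypothetical data vector y (the book's original sample is not reproduced here); for a normal sample, the MLE of the mean is the sample mean and the MLE of the variance is the divide-by-n sample variance. Note also that the reported values are internally consistent: 0.4565^2 ≈ 0.2084.

import numpy as np

# Hypothetical sample; the data behind mu = 1.5600, sigma = 0.4565 is not reproduced here.
y = np.array([1.2, 1.9, 1.4, 1.7, 1.6])

# Closed-form MLEs for a normal sample: sample mean and divide-by-n variance.
mu_hat = y.mean()
sigma2_hat = ((y - mu_hat) ** 2).mean()   # same as np.var(y) with ddof=0
sigma_hat = np.sqrt(sigma2_hat)

print(mu_hat, sigma_hat, sigma2_hat)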
Application of MLE to Regression Models
We can now discuss how to apply the MLE principle to the estimation of
regression parameters. Consider first the regression equation (13.4). Assuming
samples are independent, the likelihood of the regression is the product
of the joint probabilities computed on each observation:
L = \prod_{i=1}^{n} P(y_i, x_{1i}, \ldots, x_{ki}) \qquad (13.15)
Let’s assume that the regressors are deterministic. In this case, regressors
are known (probability equal to 1) and we can write
L = \prod_{i=1}^{n} P(y_i, x_{1i}, \ldots, x_{ki}) = \prod_{i=1}^{n} P(y_i \mid x_{1i}, \ldots, x_{ki}) \qquad (13.16)

If we assume that all variables are normally distributed, we can write

P(y_i \mid x_{1i}, \ldots, x_{ki}) \sim N(\beta_0 + \beta_1 x_{1i} + \cdots + \beta_k x_{ki}, \sigma^2)

We can write this expression explicitly as
P(y_i \mid x_{1i}, \ldots, x_{ki}) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(y_i - \beta_0 - \beta_1 x_{1i} - \cdots - \beta_k x_{ki})^2}{2\sigma^2} \right)

and therefore:
L = \prod_{i=1}^{n} P(y_i \mid x_{1i}, \ldots, x_{ki}) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(y_i - \beta_0 - \beta_1 x_{1i} - \cdots - \beta_k x_{ki})^2}{2\sigma^2} \right) \qquad (13.17)
and
\log L = -n \log\left(\sigma\sqrt{2\pi}\right) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} \left(y_i - \beta_0 - \beta_1 x_{1i} - \cdots - \beta_k x_{ki}\right)^2
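As a sketch of how this log-likelihood can be maximized in practice, the Python example below (an illustration, not part of the text) simulates data with hypothetical coefficients, minimizes the negative of the expression above with scipy.optimize.minimize, and checks that, for normally distributed errors, the MLE of the β coefficients coincides with the ordinary least squares solution.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k = 200, 2
X = rng.normal(size=(n, k))                 # hypothetical regressors x_1i, x_2i
beta_true = np.array([0.5, 1.0, -2.0])      # illustrative beta_0, beta_1, beta_2
y = beta_true[0] + X @ beta_true[1:] + rng.normal(scale=0.3, size=n)

def neg_log_likelihood(params):
    # params = (beta_0, ..., beta_k, log sigma); the log keeps sigma positive
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    resid = y - beta[0] - X @ beta[1:]
    # negative of log L = -n log(sigma sqrt(2 pi)) - sum(resid^2) / (2 sigma^2)
    return n * np.log(sigma * np.sqrt(2.0 * np.pi)) + (resid ** 2).sum() / (2.0 * sigma ** 2)

result = minimize(neg_log_likelihood, x0=np.zeros(k + 2), method="BFGS")
beta_mle, sigma_mle = result.x[:-1], np.exp(result.x[-1])

# For normal errors the beta MLE coincides with ordinary least squares
beta_ols, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
print(beta_mle, beta_ols, sigma_mle)

Note that the MLE of σ obtained this way is the divide-by-n residual standard deviation, which is slightly smaller than the unbiased (divide-by-n-k-1) estimate reported by most regression software.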