Model Estimation 281
of log L with respect to the mean μ and variance σ^2 or using commercial
software. We obtain the following estimates:

μ = 1.5600
σ = 0.4565
σ^2 = 0.2084
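As a check on such numbers, the normal MLEs have closed forms: the sample mean and the 1/n (not 1/(n − 1)) variance. A minimal sketch on simulated data, assuming a hypothetical sample drawn with the estimated parameters:

```python
import numpy as np

# Hypothetical sample drawn with the estimated parameters above.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.5600, scale=0.4565, size=10_000)

# Closed-form normal MLEs: sample mean and the 1/n (biased) variance.
mu_hat = x.mean()
sigma2_hat = np.mean((x - mu_hat) ** 2)  # 1/n, not 1/(n - 1)
sigma_hat = np.sqrt(sigma2_hat)
print(mu_hat, sigma_hat, sigma2_hat)
```

With a large sample the estimates land close to the generating values; the 1/n variance is the maximizer of log L even though it is biased in finite samples.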
Application of MLE to Regression Models
We can now discuss how to apply the MLE principle to the estimation of
regression parameters. Consider first the regression equation (13.4). Assuming
samples are independent, the likelihood of the regression is the product
of the joint probabilities computed on each observation:
L = \prod_{i=1}^{n} P(y_i, x_{i1}, \ldots, x_{ik})    (13.15)
Let’s assume that the regressors are deterministic. In this case, regressors
are known (probability equal to 1) and we can write
L = \prod_{i=1}^{n} P(y_i, x_{i1}, \ldots, x_{ik}) = \prod_{i=1}^{n} P(y_i \mid x_{i1}, \ldots, x_{ik})    (13.16)
If we assume that all variables are normally distributed we can write
P(y_i \mid x_{i1}, \ldots, x_{ik}) \sim N(\beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik}, \sigma^2)
We can write this expression explicitly as
P(y_i \mid x_{i1}, \ldots, x_{ik}) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(y_i - \beta_0 - \beta_1 x_{i1} - \cdots - \beta_k x_{ik})^2}{2\sigma^2} \right)
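As a numerical sketch, this density can be evaluated directly; the parameter values and the single observation below are illustrative assumptions, not taken from the text:

```python
import math

# Illustrative (assumed) parameters and one hypothetical observation.
beta0, beta1, sigma = 0.5, 2.0, 0.3
y_i, x_i1 = 1.6, 0.5

# Conditional mean, then the Gaussian density from the formula above.
mean = beta0 + beta1 * x_i1
p = (1.0 / (sigma * math.sqrt(2.0 * math.pi))) * math.exp(
    -((y_i - mean) ** 2) / (2.0 * sigma ** 2)
)
print(p)
```

Each observation contributes one such factor to the likelihood; observations close to the fitted line contribute larger factors than distant ones.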
and therefore:
L = \prod_{i=1}^{n} P(y_i \mid x_{i1}, \ldots, x_{ik}) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(y_i - \beta_0 - \beta_1 x_{i1} - \cdots - \beta_k x_{ik})^2}{2\sigma^2} \right)    (13.17)
and
\log L = -n \log \sigma - \frac{n}{2} \log 2\pi - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_{i1} - \cdots - \beta_k x_{ik})^2
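Maximizing this log-likelihood over the β's is equivalent to minimizing the sum of squared residuals, so the coefficient MLEs coincide with the least-squares solution, while the MLE of σ^2 is the 1/n residual variance. A minimal sketch on simulated data (all parameter values and data below are hypothetical):

```python
import numpy as np

# Hypothetical data: n observations, k regressors, known true parameters.
rng = np.random.default_rng(1)
n, k = 200, 2
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -0.5])
y = 0.3 + X @ beta_true + rng.normal(scale=0.4, size=n)

# Maximizing log L over the betas = minimizing squared residuals,
# so the coefficient MLEs are the least-squares solution.
A = np.column_stack([np.ones(n), X])  # prepend a column of ones for beta_0
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# MLE of sigma^2: the 1/n residual variance (no degrees-of-freedom correction).
resid = y - A @ beta_hat
sigma2_hat = np.mean(resid ** 2)
print(beta_hat, sigma2_hat)
```

The recovered coefficients approach (0.3, 1.0, −0.5) and sigma2_hat approaches 0.4^2 = 0.16 as n grows, illustrating the consistency of the MLE.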