Robert V. Hogg, Joseph W. McKean, Allen T. Craig

4.1. Sampling and Statistics

This is called the likelihood function of the random sample. As an estimate of
θ, a measure of the center of L(θ) seems appropriate. An often-used estimate is
the value of θ that provides a maximum of L(θ). If it is unique, this is called the
maximum likelihood estimator (mle), and we denote it as θ̂; i.e.,


θ̂ = Argmax L(θ). (4.1.2)

In practice, it is often much easier to work with the log of the likelihood, that
is, the function l(θ) = log L(θ). Because the log is a strictly increasing function,
the value that maximizes l(θ) is the same as the value that maximizes L(θ).
Furthermore, for most of the models discussed in this book, the pdf (or pmf) is a
differentiable function of θ, and frequently θ̂ solves the equation


∂l(θ)/∂θ = 0. (4.1.3)

If θ is a vector of parameters, this results in a system of equations to be solved
simultaneously; see Example 4.1.3. These equations are often referred to as the
mle estimating equations (EE).
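The claim that maximizing l(θ) and L(θ) yields the same θ̂ can be checked numerically. The following Python sketch uses a hypothetical Bernoulli sample (the data are illustrative, not from the text) and a grid of candidate θ values:

```python
import numpy as np

# Hypothetical Bernoulli(theta) sample; the data are illustrative only.
x = np.array([1, 0, 1, 1, 0, 1, 0, 1])
n, s = len(x), x.sum()                    # n = 8, s = 5 successes

# Likelihood L(theta) = theta^s (1 - theta)^(n-s) and its log l(theta),
# evaluated over a fine grid of candidate theta values.
thetas = np.linspace(0.005, 0.995, 199)
lik = thetas**s * (1 - thetas)**(n - s)
log_lik = s * np.log(thetas) + (n - s) * np.log(1 - thetas)

# Because log is strictly increasing, both maximizers coincide; here
# they equal the sample proportion s/n = 0.625.
theta_hat = thetas[np.argmax(log_lik)]
print(theta_hat)
```

In practice one always works with log_lik: the product form of L(θ) underflows quickly as n grows, while the log turns the product into a numerically stable sum.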
As we show in Chapter 6, under general conditions, mles have some good prop-
erties. One property that we need at the moment concerns the situation where,
besides the parameter θ, we are also interested in the parameter η = g(θ) for a
specified function g. Then, as Theorem 6.1.2 of Chapter 6 shows, the mle of η is
η̂ = g(θ̂), where θ̂ is the mle of θ. We now proceed with some examples, including
data realizations.
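The invariance property η̂ = g(θ̂) can also be illustrated numerically. Below is a minimal Python sketch, assuming a Bernoulli sample and the illustrative choice g(θ) = θ/(1 − θ) (the odds); neither the data nor this g comes from the text:

```python
import numpy as np

# Hypothetical Bernoulli sample; the data and the function g are
# illustrative choices, not from the text.
x = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
n, s = len(x), x.sum()
theta_hat = s / n                       # mle of theta is the proportion

# Reparametrize by eta = g(theta) = theta / (1 - theta) and maximize
# the likelihood directly in eta over a grid.
etas = np.linspace(0.05, 10.0, 20000)
thetas = etas / (1 + etas)              # g is one-to-one: theta = eta/(1+eta)
log_lik = s * np.log(thetas) + (n - s) * np.log(1 - thetas)
eta_hat = etas[np.argmax(log_lik)]

# Up to grid resolution, eta-hat agrees with plugging theta-hat into g.
print(eta_hat, theta_hat / (1 - theta_hat))
```

The point is that no second optimization is actually needed: because g is one-to-one here, maximizing in the η parametrization must give g evaluated at the θ-maximizer.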


Example 4.1.1 (Exponential Distribution). Suppose the common pdf of the ran-
dom sample X_1, X_2, ..., X_n is the Γ(1, θ) density f(x) = θ^{−1} exp{−x/θ} with
support 0 < x < ∞; see expression (3.3.2). This gamma distribution is often called
the exponential distribution. The log of the likelihood function is given by

l(θ) = log ∏_{i=1}^{n} (1/θ) e^{−x_i/θ} = −n log θ − θ^{−1} ∑_{i=1}^{n} x_i.

The first partial of the log-likelihood with respect to θ is

∂l(θ)/∂θ = −n θ^{−1} + θ^{−2} ∑_{i=1}^{n} x_i.

Setting this partial to 0 and solving for θ, we obtain the solution x̄. There is only
one critical value and, furthermore, the second partial of the log-likelihood evaluated
at x̄ is strictly negative, verifying that it provides a maximum. Hence, for this
example, the statistic θ̂ = X̄ is the mle of θ. Because E(X) = θ, we have that
E(X̄) = θ and, hence, θ̂ is an unbiased estimator of θ.
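As a quick numerical check of this derivation, a Python sketch (with illustrative data, not the Rasmussen set discussed next) confirms that the grid maximizer of l(θ) lands at x̄:

```python
import numpy as np

# Illustrative exponential-looking data; not the Rasmussen data set.
x = np.array([2.0, 5.5, 1.2, 7.3, 3.0, 4.0])
n, xbar = len(x), x.mean()

# l(theta) = -n log(theta) - sum(x)/theta, from the derivation above.
thetas = np.linspace(0.5, 20.0, 4000)
log_lik = -n * np.log(thetas) - x.sum() / thetas
theta_hat = thetas[np.argmax(log_lik)]

# The grid maximizer agrees with the sample mean up to grid resolution.
print(theta_hat, xbar)
```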
Rasmussen (1992), page 92, presents a data set where the variable of interest
X is the number of operating hours until the first failure of air-conditioning units
for Boeing 720 airplanes. A random sample of size n = 13 was obtained and its
