
7.7 Evaluating a Point Estimator


Definition

Let d = d(X) be an estimator of the parameter θ. Then

    b_θ(d) = E[d(X)] − θ

is called the bias of d as an estimator of θ. If b_θ(d) = 0 for all θ, then we say that d is an unbiased estimator of θ. In other words, an estimator is unbiased if its expected value always equals the value of the parameter it is attempting to estimate.
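The bias can be approximated by simulation: generate many samples, apply the estimator to each, and compare the average of the resulting estimates with θ. The following is a minimal Monte Carlo sketch, not from the text; the normal population, the value θ = 5, the sample size, and the trial count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
theta = 5.0          # true mean of the sampled distribution (assumed)
n = 10               # sample size (assumed)
trials = 100_000     # number of simulated samples

def d(sample):
    """Estimator under study: here, the sample mean."""
    return sample.mean()

samples = rng.normal(theta, 1.0, size=(trials, n))
estimates = np.array([d(s) for s in samples])

# Approximates b_theta(d) = E[d(X)] - theta; near 0 for an unbiased estimator.
bias_estimate = estimates.mean() - theta
print(f"approximate bias: {bias_estimate:.4f}")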


EXAMPLE 7.7a Let X_1, X_2, ..., X_n be a random sample from a distribution having unknown mean θ. Then

    d_1(X_1, X_2, ..., X_n) = X_1

and

    d_2(X_1, X_2, ..., X_n) = (X_1 + X_2 + ··· + X_n)/n

are both unbiased estimators of θ since

    E[X_1] = E[(X_1 + X_2 + ··· + X_n)/n] = θ

More generally, d_3(X_1, X_2, ..., X_n) = ∑_{i=1}^n λ_i X_i is an unbiased estimator of θ whenever ∑_{i=1}^n λ_i = 1. This follows since


    E[ ∑_{i=1}^n λ_i X_i ] = ∑_{i=1}^n E[λ_i X_i]
                           = ∑_{i=1}^n λ_i E[X_i]
                           = θ ∑_{i=1}^n λ_i
                           = θ ■
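The claim in Example 7.7a can also be checked empirically. Below is a small illustrative sketch, not from the text: the weights λ_i (summing to 1), the exponential population, and the value of θ are arbitrary choices made for the demonstration.

import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 5.0, 4, 200_000   # assumed values for the demo

lam = np.array([0.4, 0.3, 0.2, 0.1])  # weights with sum(lam) == 1
assert np.isclose(lam.sum(), 1.0)

# Exponential with scale theta has mean theta.
samples = rng.exponential(theta, size=(trials, n))

d3 = samples @ lam                    # d_3 = sum_i lambda_i X_i, per trial
print(f"average of d_3 estimates: {d3.mean():.4f}  (theta = {theta})")

Changing the weights so they no longer sum to 1 makes the average drift away from θ, which is exactly the condition ∑_{i=1}^n λ_i = 1 at work.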

If d(X_1, ..., X_n) is an unbiased estimator, then its mean square error is given by

    r(d, θ) = E[(d(X) − θ)^2]
            = E[(d(X) − E[d(X)])^2]    since d is unbiased
            = Var(d(X))

Thus the mean square error of an unbiased estimator is equal to its variance.
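As a numerical check of this identity, the following sketch compares the simulated mean square error of the sample mean with its simulated variance; the normal population and the parameter values are assumptions for illustration, not part of the text.

import numpy as np

rng = np.random.default_rng(2)
theta, n, trials = 5.0, 10, 200_000   # assumed values for the demo

# Sample mean of n normal(theta, 1) observations, repeated many times.
means = rng.normal(theta, 1.0, size=(trials, n)).mean(axis=1)

mse = np.mean((means - theta) ** 2)   # r(d, theta) = E[(d(X) - theta)^2]
var = means.var()                     # Var(d(X))
print(f"MSE ≈ {mse:.5f},  Var ≈ {var:.5f}")  # both ≈ 1/n = 0.1 here

The two printed values agree up to simulation noise, as the derivation above requires.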
