Introduction to Probability and Statistics for Engineers and Scientists


268 Chapter 7: Parameter Estimation


EXAMPLE 7.7b Combining Independent Unbiased Estimators. Let d1 and d2 denote independent unbiased estimators of θ, having known variances σ1^2 and σ2^2. That is, for i = 1, 2,


    E[di] = θ,    Var(di) = σi^2

Any estimator of the form


    d = λ d1 + (1 − λ) d2

will also be unbiased. To determine the value of λ that results in d having the smallest possible mean square error, note that


    r(d, θ) = Var(d)
            = λ^2 Var(d1) + (1 − λ)^2 Var(d2)    (by the independence of d1 and d2)
            = λ^2 σ1^2 + (1 − λ)^2 σ2^2

Differentiation yields that


    (d/dλ) r(d, θ) = 2λ σ1^2 − 2(1 − λ) σ2^2

To determine the value of λ that minimizes r(d, θ), call it λ̂, set this equal to 0
and solve for λ to obtain


    2λ̂ σ1^2 = 2(1 − λ̂) σ2^2

or


    λ̂ = σ2^2 / (σ1^2 + σ2^2)
       = (1/σ1^2) / (1/σ1^2 + 1/σ2^2)
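As a quick numerical sanity check of this minimizer, the sketch below uses illustrative (assumed) variances σ1^2 = 4 and σ2^2 = 1 and confirms by grid search that the closed-form λ̂ minimizes r(d, θ) = λ^2 σ1^2 + (1 − λ)^2 σ2^2:

```python
import numpy as np

# Illustrative variances for the two estimators (assumed values, not from the text).
s1_sq, s2_sq = 4.0, 1.0

def mse(lam):
    # Mean square error of d = lam*d1 + (1 - lam)*d2 for unbiased, independent d1, d2.
    return lam**2 * s1_sq + (1 - lam)**2 * s2_sq

# Closed-form minimizer derived above: sigma2^2 / (sigma1^2 + sigma2^2).
lam_hat = s2_sq / (s1_sq + s2_sq)

# Fine grid search over [0, 1] to confirm no weight does better.
grid = np.linspace(0.0, 1.0, 100001)
lam_grid = grid[np.argmin(mse(grid))]

print(lam_hat)     # 0.2
print(lam_grid)    # agrees with lam_hat
```

Note that λ̂ = 1/5 here because the second estimator is four times as precise as the first.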

In words, the optimal weight to give an estimator is inversely proportional to its variance
(when all the estimators are unbiased and independent).
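A short Monte Carlo sketch can illustrate this rule (all numeric values below are assumed for illustration): combining two independent unbiased estimators with inverse-variance weights remains unbiased, and its variance σ1^2 σ2^2 / (σ1^2 + σ2^2) is smaller than that of either estimator alone.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 10.0        # true parameter (illustrative value)
s1, s2 = 2.0, 1.0   # standard deviations of the two estimators (assumed)
n_trials = 200_000

# Simulate many independent realizations of the two unbiased estimators.
d1 = rng.normal(theta, s1, n_trials)
d2 = rng.normal(theta, s2, n_trials)

# Optimal weight on d1 is proportional to its inverse variance.
w1 = (1 / s1**2) / (1 / s1**2 + 1 / s2**2)
d = w1 * d1 + (1 - w1) * d2

print(d.mean())   # close to theta = 10
print(d.var())    # close to s1^2 * s2^2 / (s1^2 + s2^2) = 0.8, below min(4, 1)
```

With σ1^2 = 4 and σ2^2 = 1, the combined variance 0.8 beats even the better of the two estimators, which is the point of pooling.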
For an application of the foregoing, suppose that a conservation organization wants to
determine the acidity content of a certain lake. To determine this quantity, it draws some
water from the lake and then sends samples of this water to n different laboratories. These
laboratories will then, independently, test for acidity content by using their respective
titration equipment, which is of differing precision. Specifically, suppose that di, the result
of a titration test at laboratory i, is a random variable having mean θ, the true acidity of the
sample water, and variance σi^2, i = 1, ..., n. If the quantities σi^2, i = 1, ..., n, are known
