

To determine the constant c resulting in minimal mean square error, we differentiate to obtain

$$\frac{d}{dc}\, r(d_c(\mathbf{X}), \theta) = \frac{2cn\theta^2}{(n+2)(n+1)^2} + \frac{2\theta^2 n}{n+1}\left(\frac{cn}{n+1} - 1\right)$$

Equating this to 0 shows that the best constant c, call it $c^*$, is such that

$$\frac{c^*}{n+2} + c^* n - (n+1) = 0$$

or, since $n^2 + 2n + 1 = (n+1)^2$,

$$c^* = \frac{(n+1)(n+2)}{n^2 + 2n + 1} = \frac{n+2}{n+1}$$
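The algebra can also be checked symbolically. The following is a minimal sketch (an illustrative check, not part of the text) that writes the risk in the variance-plus-squared-bias form of Equation 7.7.4 and uses sympy to recover both $c^*$ and the minimal mean square error:

```python
# Symbolic check of the first-order condition for the risk of d_c(X) = c * max_i X_i,
# written as variance plus squared bias (the form of Equation 7.7.4).
import sympy as sp

c, n, theta = sp.symbols("c n theta", positive=True)

# r(d_c(X), theta) = c^2 * n * theta^2 / ((n+2)(n+1)^2) + theta^2 * (c*n/(n+1) - 1)^2
risk = c**2 * n * theta**2 / ((n + 2) * (n + 1) ** 2) \
     + theta**2 * (c * n / (n + 1) - 1) ** 2

# Solve d(risk)/dc = 0 for c; should print (n + 2)/(n + 1).
c_star = sp.solve(sp.diff(risk, c), c)[0]
print(sp.simplify(c_star))

# The risk at c*; should simplify to theta**2/(n + 1)**2.
print(sp.simplify(risk.subs(c, c_star)))
```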

Substituting this value of c into Equation 7.7.4 yields

$$r\left(\frac{n+2}{n+1}\max_i X_i,\; \theta\right) = \frac{(n+2)n\theta^2}{(n+1)^4} + \theta^2\left(\frac{n(n+2)}{(n+1)^2} - 1\right)^2 = \frac{(n+2)n\theta^2}{(n+1)^4} + \frac{\theta^2}{(n+1)^4} = \frac{\theta^2}{(n+1)^2}$$

A comparison with Equation 7.7.3, which gives $r(\max_i X_i, \theta) = 2\theta^2/[(n+1)(n+2)]$, shows that the (biased) estimator $\frac{n+2}{n+1}\max_i X_i$ has about half the mean square error of the maximum likelihood estimator $\max_i X_i$: the ratio of the two mean square errors is $(n+2)/[2(n+1)]$, which approaches 1/2 as n grows. ■
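The factor-of-two gap is easy to see numerically. Below is a small Monte Carlo sketch (purely illustrative; it assumes the uniform $(0, \theta)$ model underlying this example, and the values of θ, n, and the trial count are arbitrary choices):

```python
# Monte Carlo comparison of the two estimators' mean square errors.
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 1.0, 10, 200_000

# Each row is one sample X_1, ..., X_n from the uniform (0, theta) distribution.
samples = rng.uniform(0.0, theta, size=(trials, n))
mle = samples.max(axis=1)               # maximum likelihood estimator max_i X_i
rescaled = (n + 2) / (n + 1) * mle      # the (biased) rescaled estimator

mse_mle = np.mean((mle - theta) ** 2)
mse_rescaled = np.mean((rescaled - theta) ** 2)

# Compare with the theoretical values: 2*theta^2/((n+1)(n+2)) for the MLE
# (variance plus squared bias) and theta^2/(n+1)^2 for the rescaled estimator.
print(mse_mle, 2 * theta**2 / ((n + 1) * (n + 2)))   # both near 0.01515
print(mse_rescaled, theta**2 / (n + 1) ** 2)          # both near 0.00826
print(mse_mle / mse_rescaled)                         # near 2(n+1)/(n+2) = 1.833
```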


*7.8 The Bayes Estimator


In certain situations it seems reasonable to regard an unknown parameter θ as being the value of a random variable from a given probability distribution. This usually arises when, prior to observing the outcomes of the data $X_1, \ldots, X_n$, we have some information about the value of θ, and this information is expressible in terms of a probability distribution (called, appropriately, the prior distribution of θ). For instance, suppose that from past experience we know that θ is equally likely to be near any value in the interval (0, 1). Hence, we could reasonably assume that θ is chosen from a uniform distribution on (0, 1).

Suppose now that our prior feelings about θ are that it can be regarded as being the value of a continuous random variable having probability density function p(θ), and suppose that we are about to observe the value of a sample whose distribution depends on θ. Specifically, suppose that f(x|θ) represents the likelihood; that is, it is the probability mass function in the discrete case or the probability density function in the continuous case.

* Optional section.