
Before proceeding, a remark is in order regarding the notation to be used. As seen in Equation (9.2), our objective in parameter estimation is to determine a statistic

\hat{\Theta} = h(X_1, X_2, \ldots, X_n),    (9.20)

which gives a good estimate of parameter $\theta$. This statistic will be called an estimator for $\theta$, whose properties, such as its mean, variance, or distribution, provide a measure of its quality. Once we have observed the sample values $x_1, x_2, \ldots, x_n$, the observed estimator,

\hat{\theta} = h(x_1, x_2, \ldots, x_n),    (9.21)

has a numerical value and will be called an estimate of parameter $\theta$.
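As a minimal illustration of this distinction (not part of the text; the sample values and the use of NumPy are assumptions made only for the sketch), the function h below plays the role of the statistic in Equation (9.20), here chosen as the sample mean. Applying it to observed values $x_1, \ldots, x_n$ yields a single number, the estimate of Equation (9.21); a fresh sample would yield a different number, reflecting that the estimator itself is a random variable.

import numpy as np

# h is the statistic of Equation (9.20); here it is taken to be the sample mean.
def h(sample):
    return np.mean(sample)

# Hypothetical observed sample values x1, ..., xn.
x_observed = np.array([2.1, 1.8, 2.4, 2.0, 1.9])

# The estimate of Equation (9.21): one numerical value for this particular sample.
theta_hat = h(x_observed)
print(theta_hat)  # 2.04 for the values above

# A different sample gives a different estimate; the estimator is a random variable.
rng = np.random.default_rng(0)
print(h(rng.normal(2.0, 0.3, size=5)))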


9.2.1 Unbiasedness


An estimator $\hat{\Theta}$ is said to be an unbiased estimator for $\theta$ if

E\{\hat{\Theta}\} = \theta,    (9.22)

for all $\theta$. This is clearly a desirable property for $\hat{\Theta}$: it states that, on average, we expect $\hat{\Theta}$ to be close to the true parameter value $\theta$. Let us note here that the requirement of unbiasedness may lead to other undesirable consequences. Hence, the overall quality of an estimator rests not on any single criterion but on a set of criteria.
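To see the condition of Equation (9.22) at work, $E\{\hat{\Theta}\}$ can be approximated by averaging the estimator over many repeated samples. The sketch below is only an illustration under assumed conditions (a normal population with arbitrarily chosen $m$ and $\sigma$, generated with NumPy): the averaged sample mean settles near the true $m$, consistent with Equation (9.22).

import numpy as np

rng = np.random.default_rng(0)
m, sigma, n, trials = 5.0, 2.0, 10, 100_000  # assumed population and sample sizes

# Draw `trials` independent samples of size n and form the sample mean of each;
# averaging over the trials approximates E{X_bar}.
samples = rng.normal(m, sigma, size=(trials, n))
sample_means = samples.mean(axis=1)

print(sample_means.mean())  # close to m = 5.0, as unbiasedness requires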
We have studied two statistics, $\bar{X}$ and $S^2$, in Sections 9.1.1 and 9.1.2. It is seen from Equations (9.5) and (9.8) that, if $\bar{X}$ and $S^2$ are used as estimators for the population mean $m$ and population variance $\sigma^2$, respectively, they are unbiased estimators. This nice property for $S^2$ suggests that the sample variance defined by Equation (9.7) is preferred over the more natural choice obtained by replacing $1/(n-1)$ by $1/n$ in Equation (9.7). Indeed, if we let

\hat{S}^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2,    (9.23)

then, since $\hat{S}^2 = [(n-1)/n]\,S^2$, Equation (9.8) shows that its mean is

E\{\hat{S}^2\} = \frac{n-1}{n}\,\sigma^2,

and estimator $\hat{S}^2$ has a bias indicated by the coefficient $(n-1)/n$.
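The bias factor $(n-1)/n$ can also be checked numerically. The following sketch is again only an illustration under assumed conditions (a normal population with arbitrarily chosen $m$ and $\sigma$, with NumPy's ddof argument selecting the divisor): averaged over many samples, the $1/(n-1)$ estimator of Equation (9.7) settles near $\sigma^2$, while the $1/n$ estimator of Equation (9.23) settles near $[(n-1)/n]\,\sigma^2$.

import numpy as np

rng = np.random.default_rng(1)
m, sigma, n, trials = 0.0, 3.0, 5, 200_000  # assumed population and sample sizes

samples = rng.normal(m, sigma, size=(trials, n))

# Divisor n - 1 (ddof=1): the sample variance S^2 of Equation (9.7).
s2 = samples.var(axis=1, ddof=1)
# Divisor n (ddof=0): the estimator of Equation (9.23).
s2_hat = samples.var(axis=1, ddof=0)

print(s2.mean())      # close to sigma**2 = 9.0
print(s2_hat.mean())  # close to ((n - 1) / n) * sigma**2 = 7.2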



