tendency and dispersion. These sample analogs are often used as point estimates for the parameters. Sometimes, for a given population parameter from an assumed parametric family of distributions (e.g., the normal distribution), there are two or more possible choices for a point estimate.
For example, with continuous parametric families like the Gamma and Beta distributions, we can find maximum likelihood estimates or method of moment estimates for the parameters (a short numerical sketch of both appears after this paragraph). How then can we choose an optimal estimate? Statistical theory has been developed to define properties that estimators should have. Among the nice properties, we have consistency, unbiasedness, minimum variance, minimum mean square error, and efficiency. Consistency is an important property. It tells us that even though the sample is random and subject to variability, as the sample size gets larger, the estimate gets close to the true parameter and will become arbitrarily close as n goes to infinity.
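Returning to the Gamma example above, the following sketch simulates a Gamma-distributed sample and computes both the method of moments estimates and the maximum likelihood estimates of the shape and scale parameters. The true parameter values, the sample size, and the use of scipy's numerical fitting routine are assumptions made only for this illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# "True" Gamma parameters (shape a, scale b), chosen only for this illustration.
a_true, b_true = 3.0, 2.0
x = rng.gamma(shape=a_true, scale=b_true, size=500)

# Method of moments: match the sample mean and variance to
# E[X] = a*b and Var(X) = a*b**2, then solve for a and b.
xbar, s2 = x.mean(), x.var(ddof=1)
a_mom, b_mom = xbar**2 / s2, s2 / xbar

# Maximum likelihood: scipy maximizes the Gamma likelihood numerically
# (the location parameter is fixed at 0 so only shape and scale are fitted).
a_mle, _, b_mle = stats.gamma.fit(x, floc=0)

print(f"method of moments : shape = {a_mom:.3f}, scale = {b_mom:.3f}")
print(f"maximum likelihood: shape = {a_mle:.3f}, scale = {b_mle:.3f}")
```

The two methods generally give different numbers from the same data, which is exactly why we need criteria, such as the properties just listed, for deciding which estimate to prefer.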
The sample mean is consistent because if the population distribution has mean μ and standard deviation σ, then the sample mean has for its sampling distribution mean μ and standard deviation σ/√n. So as n gets larger, the standard deviation goes to zero. This is enough to show consistency (convergence in probability).
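A minimal simulation sketch can make this concrete; the normal population with μ = 10 and σ = 2, the sample sizes, and the number of replications below are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.0          # population mean and sd, assumed for this demo
reps = 2000                    # number of simulated samples per sample size

# For each n, draw many samples and measure how spread out the sample means are.
# The argument above says that spread should be close to sigma / sqrt(n).
for n in (10, 100, 1000):
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    print(f"n = {n:>4}: sd of sample means = {means.std(ddof=1):.4f}, "
          f"sigma/sqrt(n) = {sigma / np.sqrt(n):.4f}")
```

The spread of the simulated sample means shrinks toward zero in line with σ/√n, which is the consistency property in action.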
The sample mean is also unbiased. To be unbiased, we must have
for every n that the sampling distribution for the estimator has its mean
equal to the true value of the parameter. We know this is the case for the sample mean, since E(X̄) = (1/n)[E(X₁) + ⋯ + E(Xₙ)] = (1/n)(nμ) = μ for every n. If we consider the class of all unbiased estimators for a parameter, a natural choice for the best estimate from this class is the one with the lowest variance.
We call these minimum variance unbiased estimates. However, even
a minimum variance unbiased estimator may not always be the best.
Accuracy is a measure of how close the estimate tends to be to the
parameter. An estimate with a small bias and small variance can be better
or more accurate than an estimate with no bias but a large variance.
To see this, let us consider mean square error. The mean square
error is the average of the squared distance between the estimator and
the parameter. It is natural to want the mean square error to be small.
Denote the mean square error by MSE, the bias by B, and the variance of the estimator by σ². It then happens that MSE = B² + σ². So mathematically, what we have just said in words simply means that if one estimator has MSE₁ = B₁² + σ₁², and another estimator is unbiased with mean square error MSE₂ = σ₂², then MSE₂ > MSE₁ if σ₂² > B₁² + σ₁².
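A classic concrete case is estimating a population variance from normal data: dividing by n − 1 gives an unbiased estimator, while dividing by n gives a biased one with smaller variance, and the biased version turns out to have the smaller mean square error. The simulation sketch below checks this numerically; the normal population, its variance, the sample size, and the number of replications are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0                   # true population variance (normal data), assumed
n, reps = 20, 100_000          # sample size and Monte Carlo replications, assumed

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

# Two competing estimators of sigma2 computed from the same samples:
var_unbiased = samples.var(axis=1, ddof=1)   # divide by n - 1: unbiased
var_biased = samples.var(axis=1, ddof=0)     # divide by n: biased but less variable

for name, est in (("unbiased (n - 1)", var_unbiased), ("biased (n)", var_biased)):
    bias = est.mean() - sigma2
    mse = np.mean((est - sigma2) ** 2)
    # The decomposition MSE = B**2 + variance of the estimator holds for these
    # Monte Carlo estimates, so the last two printed numbers should agree.
    print(f"{name:>16}: bias = {bias:+.4f}, MSE = {mse:.4f}, "
          f"B^2 + var = {bias**2 + est.var():.4f}")
```

Despite its bias, the divide-by-n estimator comes out with the smaller mean square error, which is precisely the trade-off described above, and the printed values also illustrate the decomposition MSE = B² + σ².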
