Introduction to Probability and Statistics for Engineers and Scientists


Chapter 6: Distributions of Sampling Statistics


problems, whereas those in which nothing is assumed about the form of F are called
nonparametric inference problems.


EXAMPLE 6.1a Suppose that a new process has just been installed to produce computer
chips, and suppose that the successive chips produced by this new process will have useful
lifetimes that are independent with a common unknown distribution F. Physical reasons
sometimes suggest the parametric form of the distribution F; for instance, they may lead us
to believe that F is a normal distribution, or that F is an exponential distribution. In such
cases, we are confronted with a parametric statistical problem in which we would want
to use the observed data to estimate the parameters of F. For instance, if F were assumed
to be a normal distribution, then we would want to estimate its mean and variance; if F
were assumed to be exponential, we would want to estimate its mean. In other situations,
there might not be any physical justification for supposing that F has any particular form;
in this case the problem of making inferences about F would constitute a nonparametric
inference problem. ■
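To make the parametric case concrete, here is a minimal sketch in Python of the estimation step the example describes. The data are simulated (the "true" mean of 1000 hours is a hypothetical value used only to generate the illustration); an analyst who assumed F exponential would estimate its one parameter, the mean, by the sample mean, while an analyst who assumed F normal would estimate both the mean and the variance.

```python
import random
import statistics

# Hypothetical illustration: simulate 500 chip lifetimes from an
# exponential distribution whose mean (1000 hours) is treated as
# unknown by the analyst.
random.seed(0)
true_mean = 1000.0
lifetimes = [random.expovariate(1 / true_mean) for _ in range(500)]

# If F is assumed exponential, its single parameter (the mean) is
# estimated by the sample mean.
estimated_mean = statistics.mean(lifetimes)

# If F were instead assumed normal, we would estimate two parameters:
mu_hat = statistics.mean(lifetimes)          # estimate of the mean
sigma2_hat = statistics.variance(lifetimes)  # sample variance
```

With 500 observations, the estimated mean will typically fall within a few percent of the true value, reflecting the variability of the sample mean discussed later in this chapter.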


In this chapter, we will be concerned with the probability distributions of certain
statistics that arise from a sample, where a statistic is a random variable whose value is
determined by the sample data. Two important statistics that we will discuss are the sample
mean and the sample variance. In Section 6.2, we consider the sample mean and derive
its expectation and variance. We note that when the sample size is at least moderately
large, the distribution of the sample mean is approximately normal. This follows from
the central limit theorem, one of the most important theoretical results in probability,
which is discussed in Section 6.3. In Section 6.4, we introduce the sample variance and
determine its expected value. In Section 6.5, we suppose that the population distribution
is normal and present the joint distribution of the sample mean and the sample variance.
In Section 6.6, we suppose that we are sampling from a finite population of elements and
explain what it means for the sample to be a “random sample.” When the population size
is large in relation to the sample size, we often treat it as if it were of infinite size; this is
illustrated and its consequences are discussed.


6.2 The Sample Mean


Consider a population of elements, each of which has a numerical value attached to it.
For instance, the population might consist of the adults of a specified community and the
value attached to each adult might be his or her annual income, or height, or age, and so
on. We often suppose that the value associated with any member of the population can
be regarded as being the value of a random variable having expectation μ and variance
σ^2. The quantities μ and σ^2 are called the population mean and the population variance,
respectively. Let X_1, X_2, ..., X_n be a sample of values from this population. The sample
mean is defined by


X̄ = (X_1 + ··· + X_n)/n
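The definition above can be sketched directly in Python; the small sample below is a hypothetical example used only to illustrate the computation.

```python
import statistics

# A minimal sketch of the sample mean, X̄ = (X_1 + ... + X_n)/n,
# computed on a small illustrative sample.
sample = [2.0, 4.0, 6.0, 8.0]
n = len(sample)
x_bar = sum(sample) / n  # direct computation of the definition

# The standard library agrees with the hand computation.
assert x_bar == statistics.mean(sample)
# For this sample, x_bar is 5.0
```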