AP Statistics 2017


We previously described a statistic as a value that describes a sample and a parameter as a value that describes a population. Now we want to use a statistic as an estimate of a parameter. We know that if we draw multiple samples and compute some statistic of interest, say x̄, we will likely get different values each time even though the samples are all drawn from a population with a single mean, μ. What we now do is develop a process by which we use our estimate to generate a range of likely values for the parameter. The statistic itself is called a point estimate, and the range of likely population values from which we might have obtained our estimate is called a confidence interval.


example: We do a survey of a sample of students from a school, and find that 42% of the sample plan to vote for Normajean for student body treasurer. That is, p̂ = 0.42. Based on this, we generate an interval of likely values (the confidence interval) for the proportion of all students at the school who will vote for Normajean and find that between 38% and 46% of the students are likely to vote for Normajean. The interval (0.38, 0.46) is a confidence interval for the proportion of all students at this school who will vote for Normajean.
Note that saying a confidence interval is likely to contain the true population value is not to say that it
necessarily does. It may or may not—we will see ways to quantify just how “confident” we are in our
interval.
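The arithmetic behind an interval like (0.38, 0.46) can be sketched in a few lines. The example in the text does not state the sample size, so the n = 585 below is a hypothetical value chosen only because it reproduces an interval of roughly that width for p̂ = 0.42 at 95% confidence:

```python
import math

# One-proportion confidence interval: p_hat ± z* · sqrt(p_hat(1 − p_hat)/n).
# n = 585 is a hypothetical sample size; the text does not give one.
def proportion_ci(p_hat, n, z_star=1.96):  # z* = 1.96 for 95% confidence
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of p_hat
    margin = z_star * se
    return (p_hat - margin, p_hat + margin)

low, high = proportion_ci(0.42, 585)
print(round(low, 2), round(high, 2))  # → 0.38 0.46
```

Notice that a larger n shrinks the standard error, and with it the width of the interval.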
In this chapter, we will construct confidence intervals for a single mean, the difference between two
means, a single proportion, and the difference between two proportions. Our ability to construct
confidence intervals depends on our understanding of the sampling distributions for each of the
parameters. In Chapter 10 , we discussed the concept of sampling distribution for sample means and
sample proportions. Similar arguments exist for the sampling distributions of the difference between two
means or the difference between two proportions.


t Procedures


When we discussed the sampling distribution of x̄ in Chapter 10, we knew the population mean and standard deviation. In this chapter, we are estimating the population mean from a sample. (If we knew the population standard deviation, we would probably also know the population mean and would not need to estimate μ.) Because we do not know the population standard deviation σ, we use the sample standard deviation s as an estimate of σ.
When we estimate a standard deviation of a sampling distribution from data, we call the estimator the
standard error (some texts define the standard error as the standard deviation of the sampling
distribution rather than an estimate of the standard deviation based on our data).
In this case, then, s/√n is used to estimate σ/√n.
We will use the term standard error from now on as we study inference whenever we are referring to
the standard deviation of the sampling distribution.
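Computing a standard error from data takes only the sample standard deviation and the sample size. A minimal sketch, using made-up data (the values below are purely illustrative):

```python
import math

# Standard error of the sample mean: s/sqrt(n), where s is the sample
# standard deviation. The data values are made up for illustration.
data = [23.1, 19.8, 25.4, 22.0, 20.7, 24.3, 21.5, 23.9]
n = len(data)
mean = sum(data) / n
# The sample standard deviation divides by n − 1, not n.
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
se = s / math.sqrt(n)  # estimates sigma/sqrt(n)
print(mean, s, se)
```

The division by n − 1 in s is what makes it an estimate based on the data rather than a known population value.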
When the population is approximately normal, the sampling distribution of x̄ is approximately normal with mean μ and standard deviation σ/√n. The sample mean, on average, is equal to the population mean, μ.