Chapter 5

Consistency and Limiting Distributions


In Chapter 4, we introduced some of the main concepts in statistical inference,
namely, point estimation, confidence intervals, and hypothesis tests. For readers
who on first reading have skipped Chapter 4, we review these ideas in Section 5.1.1.
The theory behind these inference procedures often depends on the distribution
of a pivot random variable. For example, suppose $X_1, X_2, \ldots, X_n$ is a random
sample on a random variable $X$ which has a $N(\mu, \sigma^2)$ distribution. Denote the
sample mean by $\overline{X}_n = n^{-1}\sum_{i=1}^{n} X_i$. Then the pivot random variable of interest is
$$
Z_n = \frac{\overline{X}_n - \mu}{\sigma/\sqrt{n}}.
$$

This random variable plays a key role in obtaining exact procedures for the
confidence interval for $\mu$ and for tests of hypotheses concerning $\mu$. What if $X$ does
not have a normal distribution? In this case, in Chapter 4, we discussed inference
procedures that were quite similar to the exact procedures, but they were based
on the “approximate” (as the sample size $n$ gets large) distribution of $Z_n$.
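As an informal check of this approximation, one can simulate $Z_n$ for a non-normal parent distribution and compare it with the standard normal. The following is a minimal sketch in Python with NumPy (not part of the text); the Exponential(1) parent, the sample size $n = 50$, and the number of replications are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 10_000              # assumed sample size and number of replications
mu, sigma = 1.0, 1.0              # mean and standard deviation of the Exponential(1) parent
samples = rng.exponential(scale=1.0, size=(reps, n))
zn = (samples.mean(axis=1) - mu) / (sigma / np.sqrt(n))
# For large n, the simulated Z_n values should be approximately N(0, 1):
print(zn.mean(), zn.std())        # close to 0 and 1, respectively

A histogram of the simulated zn values would likewise resemble the standard normal density, which is the sense in which the Chapter 4 procedures are "approximate."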
There are several types of convergence used in statistics, and in this chapter we
discuss two of the most important: convergence in probability and convergence in
distribution. These concepts provide structure to the “approximations” discussed
in Chapter 4. Beyond this, though, these concepts play a crucial role in much of
statistics and probability. We begin with convergence in probability.


5.1 Convergence in Probability


In this section, we formalize a way of saying that a sequence of random variables
$\{X_n\}$ is getting “close” to another random variable $X$, as $n \to \infty$. We will use this
concept throughout the book.
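To preview the idea numerically, one can estimate $P(|\overline{X}_n - \mu| \geq \epsilon)$ by simulation for increasing $n$ and watch it shrink toward zero. Below is a minimal sketch in Python with NumPy (not from the text); the Uniform(0, 1) parent, the tolerance $\epsilon$, and the replication count are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(1)
mu, eps, reps = 0.5, 0.1, 5_000    # true mean of Uniform(0,1), tolerance, replications
for n in (10, 100, 1000):
    xbar = rng.uniform(size=(reps, n)).mean(axis=1)
    # Estimated probability that the sample mean misses mu by at least eps:
    print(n, np.mean(np.abs(xbar - mu) >= eps))

The estimated probabilities decrease rapidly as $n$ grows, which is precisely the behavior that the definition of convergence in probability captures.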

