Robert V. Hogg, Joseph W. McKean, Allen T. Craig

326 Consistency and Limiting Distributions

Hence the pdf of $Y_n$ is
$$
f_{Y_n}(t) = \begin{cases} \dfrac{n}{\theta^n}\, t^{n-1}, & 0 < t \le \theta \\ 0, & \text{elsewhere.} \end{cases} \qquad (5.1.2)
$$
Based on its pdf, it is easy to show that $E(Y_n) = (n/(n+1))\theta$. Thus, $Y_n$ is a biased
estimator of $\theta$. Note, however, that $((n+1)/n)Y_n$ is an unbiased estimator of $\theta$.
Further, based on the cdf of $Y_n$, it is easily seen that $Y_n \xrightarrow{P} \theta$ and, hence, that the
sample maximum is a consistent estimator of $\theta$. Note that the unbiased estimator,
$((n+1)/n)Y_n$, is also consistent.
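These claims are easy to check numerically. The following sketch (not from the text; the choices $\theta = 2$ and the sample sizes are arbitrary) simulates the sample maximum $Y_n$ of uniform$(0, \theta)$ draws and compares the average of $Y_n$ with the average of the bias-corrected estimator $((n+1)/n)Y_n$:

```python
import random

# Hypothetical check: Y_n = max of n iid uniform(0, theta) draws should
# approach theta, and ((n+1)/n) * Y_n should be approximately unbiased.
random.seed(7)
theta = 2.0

def y_n(n):
    """Sample maximum of n iid uniform(0, theta) observations."""
    return max(random.uniform(0.0, theta) for _ in range(n))

for n in (10, 100, 10000):
    reps = [y_n(n) for _ in range(2000)]
    mean_yn = sum(reps) / len(reps)
    # E(Y_n) = (n/(n+1)) * theta, so multiplying by (n+1)/n recenters it.
    print(n, round(mean_yn, 3), round((n + 1) / n * mean_yn, 3))
```

As $n$ grows, the first column of estimates climbs toward $\theta$ from below (reflecting the bias $-\theta/(n+1)$), while the corrected column hovers near $\theta$ at every $n$.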
To expand on Example 5.1.2, by the Weak Law of Large Numbers, Theorem
5.1.1, it follows that $\overline{X}_n$ is a consistent estimator of $\theta/2$, so $2\overline{X}_n$ is a consistent
estimator of $\theta$. Note the difference in how we showed that $Y_n$ and $2\overline{X}_n$ converge to
$\theta$ in probability. For $Y_n$ we used the cdf of $Y_n$, but for $2\overline{X}_n$ we appealed to the Weak
Law of Large Numbers. In fact, the cdf of $2\overline{X}_n$ is quite complicated for the uniform
model. In many situations, the cdf of the statistic cannot be obtained, but we can
appeal to asymptotic theory to establish the result. There are other estimators of
$\theta$. Which is the "best" estimator? In future chapters we will be concerned with such
questions.
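The two consistent estimators can also be compared side by side. This sketch (an illustration, with $\theta = 3$ and the sample sizes chosen arbitrarily) computes both the maximum-based estimator $Y_n$ and the mean-based estimator $2\overline{X}_n$ from the same uniform sample:

```python
import random

# Compare two consistent estimators of theta for uniform(0, theta):
# the sample maximum Y_n, and 2 * (sample mean), justified by the WLLN.
random.seed(11)
theta = 3.0

def estimators(n):
    """Return (Y_n, 2 * Xbar_n) computed from one sample of size n."""
    xs = [random.uniform(0.0, theta) for _ in range(n)]
    return max(xs), 2.0 * sum(xs) / n

for n in (100, 10000):
    yn, two_xbar = estimators(n)
    print(n, round(yn, 3), round(two_xbar, 3))
```

Both estimates tighten around $\theta$ as $n$ increases, even though one convergence was established through the cdf and the other through the Weak Law of Large Numbers.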
Consistency is a very important property for an estimator to have. It is a poor
estimator that does not approach its target as the sample size gets large. Note that
the same cannot be said for the property of unbiasedness. For example, instead of
using the sample variance to estimate $\sigma^2$, suppose we use
$$
V = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X})^2.
$$
Then $V$ is consistent for $\sigma^2$, but it is biased, because $E(V) = (n-1)\sigma^2/n$. Thus
the bias of $V$ is $-\sigma^2/n$, which vanishes as $n \to \infty$.
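The bias formula can be verified empirically. In this sketch (the normal population and $\sigma = 2$ are assumptions for the demonstration, not from the text), the average of $V$ over many repeated samples is compared with the theoretical value $(n-1)\sigma^2/n$:

```python
import random

# Empirical check that E(V) is about (n-1) * sigma^2 / n for
# V = (1/n) * sum (X_i - Xbar)^2, using normal(0, sigma) samples.
random.seed(3)
sigma = 2.0

def v_stat(n):
    """Biased variance estimator V from one sample of size n."""
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / n

n = 10
reps = [v_stat(n) for _ in range(20000)]
avg_v = sum(reps) / len(reps)
print(round(avg_v, 3), round((n - 1) * sigma ** 2 / n, 3))  # both near 3.6
```

At $n = 10$ with $\sigma^2 = 4$ the theoretical mean of $V$ is $3.6$ rather than $4$; repeating the experiment with larger $n$ shows the bias $-\sigma^2/n$ shrinking away, consistent with $V$ being a consistent (though biased) estimator.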
EXERCISES
5.1.1. Let $\{a_n\}$ be a sequence of real numbers. Hence, we can also say that $\{a_n\}$
is a sequence of constant (degenerate) random variables. Let $a$ be a real number.
Show that $a_n \to a$ is equivalent to $a_n \xrightarrow{P} a$.
5.1.2. Let the random variable $Y_n$ have a distribution that is $b(n, p)$.

(a) Prove that $Y_n/n$ converges in probability to $p$. This result is one form of the
weak law of large numbers.

(b) Prove that $1 - Y_n/n$ converges in probability to $1 - p$.

(c) Prove that $(Y_n/n)(1 - Y_n/n)$ converges in probability to $p(1-p)$.
5.1.3. Let $W_n$ denote a random variable with mean $\mu$ and variance $b/n^p$, where
$p > 0$, $\mu$, and $b$ are constants (not functions of $n$). Prove that $W_n$ converges in
probability to $\mu$.
Hint: Use Chebyshev's inequality.
5.1.4. Derive the cdf given in expression (5.1.1).
