Advanced High-School Mathematics


SECTION 6.1 Discrete Random Variables 327


We can give an intuitive idea of how one can analyze this question, as follows. We start by setting
$$X_k = \frac{\pm_k}{k^{2/3}}, \qquad k = 1, 2, \ldots$$
(here each random sign $\pm_k$ is $+1$ or $-1$, each with probability $1/2$, independently), and note that
$$E(X_k) = 0 \quad\text{and}\quad \mathrm{Var}(X_k) = \frac{1}{k^{4/3}}.$$
Now set


$$S_n \;=\; \sum_{k=1}^{n} X_k \;=\; \sum_{k=1}^{n} \frac{\pm_k}{k^{2/3}}.$$
It follows immediately that $S_n$ has mean 0 and (finite) variance
$$\sum_{k=1}^{n} \frac{1}{k^{4/3}} \;<\; 4.$$
(See footnote 8.)
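The bound of 4 on the variance of $S_n$ is easy to check numerically. The following is a minimal sketch, not a proof; the function name and the cutoffs for $n$ are my own choices.

```python
# Numerical check (not a proof) that the partial sums of 1/k^(4/3),
# i.e. the variances of S_n, stay below 4 for every n.

def partial_variance(n):
    """Variance of S_n: the sum of 1/k^(4/3) for k = 1, ..., n."""
    return sum(k ** (-4.0 / 3.0) for k in range(1, n + 1))

for n in (10, 1_000, 100_000):
    print(n, partial_variance(n))  # increases toward roughly 3.6, below 4
```

The partial sums climb toward $\zeta(4/3) \approx 3.6$, comfortably under the bound of 4 given by the integral comparison in footnote 8.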

Under these circumstances it follows that the above infinite sum
$$\sum_{k=1}^{\infty} X_k$$

actually converges with probability 1.^9 Furthermore, the same arguments can be applied to show that as long as $p > 1/2$, the random series
$$\sum_{n=1}^{\infty} \frac{\pm_n}{n^p}$$
also converges with probability 1.
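A Monte Carlo experiment can make this convergence plausible. The sketch below is illustration only, not the argument of the text; the exponent $2/3$, the seed, and the sample sizes are my own choices.

```python
import random

# Sample the partial sums of the random series sum_{n} (±1)/n^p for
# p = 2/3 > 1/2, with independent fair signs. With probability 1 the
# series converges, so the partial sums should stay in a bounded band.

def random_series_partial_sum(p, n_terms, rng):
    """One sample of sum_{n=1}^{n_terms} (±1)/n^p with independent fair signs."""
    return sum(rng.choice((-1.0, 1.0)) / n ** p for n in range(1, n_terms + 1))

rng = random.Random(0)
samples = [random_series_partial_sum(2 / 3, 5_000, rng) for _ in range(100)]
print(min(samples), max(samples))  # bounded, centered near 0
```

Since $\mathrm{Var}(S_n) < 4$ for every $n$, the sampled partial sums cluster within a few units of 0 rather than drifting off to infinity.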

We turn now to some relatively commonly encountered discrete random variables: the geometric, the binomial, the negative binomial, the hypergeometric, and the Poisson random variables.


6.1.4 The geometric distribution


Consider the following game (experiment). We start with a coin whose
probability of heads isp; therefore the probability of tails is 1−p. The
game we play is to keep tossing the coin until a head is obtained. The
random variable X is the number of trials until the game ends. The
distribution for X is as follows:


$$\begin{array}{c|ccccc}
x & 1 & 2 & 3 & \cdots & n \\ \hline
P(X = x) & p & p(1-p) & p(1-p)^2 & \cdots & p(1-p)^{n-1}
\end{array}$$
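The coin game above is easy to simulate. The sketch below compares observed frequencies against $P(X = x) = p(1-p)^{x-1}$; the function name, the choice $p = 0.3$, the seed, and the trial count are my own.

```python
import random

def toss_until_head(p, rng):
    """Number of tosses up to and including the first head."""
    tosses = 1
    while rng.random() >= p:  # a toss lands tails with probability 1 - p
        tosses += 1
    return tosses

p = 0.3
rng = random.Random(1)
trials = [toss_until_head(p, rng) for _ in range(100_000)]
for x in (1, 2, 3):
    observed = trials.count(x) / len(trials)
    expected = p * (1 - p) ** (x - 1)  # geometric distribution from the table
    print(x, round(observed, 4), round(expected, 4))
```

With 100,000 plays of the game, the observed frequencies of $X = 1, 2, 3$ match $p$, $p(1-p)$, $p(1-p)^2$ to within sampling error.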

Therefore, the expectation ofXis given by the infinite sum:


(^8) Note that
$$\sum_{k=1}^{n} \frac{1}{k^{4/3}} \;<\; \sum_{k=1}^{\infty} \frac{1}{k^{4/3}} \;<\; 1 + \int_{1}^{\infty} x^{-4/3}\,dx \;=\; 4.$$
(^9) This can be inferred from the Kolmogorov Three-Series Theorem; see, e.g., Theorem 22.8 of P. Billingsley, Probability and Measure, 2nd ed., John Wiley & Sons, New York, 1986.
