
All we know is that the probability that each man gets his own hat back is $1/n$. There are many different probability distributions of hat permutations with this property, so we don't know enough about the distribution of $G$ to calculate its expectation directly using equation (18.2) or (18.3). But linearity of expectation lets us sidestep this issue.

We'll use a standard, useful trick to apply linearity, namely, we'll express $G$ as a sum of indicator variables. In particular, let $G_i$ be an indicator for the event that the $i$th man gets his own hat. That is, $G_i = 1$ if the $i$th man gets his own hat, and $G_i = 0$ otherwise. The number of men that get their own hat is then the sum of these indicator random variables:


$$G = G_1 + G_2 + \cdots + G_n. \tag{18.9}$$
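For concreteness, suppose $n = 3$ and the hats come back in the order $2, 1, 3$, so man 1 gets hat 2, man 2 gets hat 1, and man 3 gets his own hat. Then $G_1 = 0$, $G_2 = 0$, $G_3 = 1$, and equation (18.9) gives $G = 0 + 0 + 1 = 1$.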

These indicator variables are not mutually independent. For example, if $n - 1$ men all get their own hats, then the last man is certain to receive his own hat. But again, we don't need to worry about this dependence, since linearity holds regardless.

Since $G_i$ is an indicator random variable, we know from Lemma 18.4.2 that

$$\text{Ex}[G_i] = \Pr[G_i = 1] = 1/n. \tag{18.10}$$


By Linearity of Expectation and equation (18.9), this means that


$$\begin{aligned}
\text{Ex}[G] &= \text{Ex}[G_1 + G_2 + \cdots + G_n] \\
&= \text{Ex}[G_1] + \text{Ex}[G_2] + \cdots + \text{Ex}[G_n] \\
&= \overbrace{\frac{1}{n} + \frac{1}{n} + \cdots + \frac{1}{n}}^{n \text{ terms}} \\
&= 1.
\end{aligned}$$

So even though we don’t know much about how hats are scrambled, we’ve figured
out that on average, just one man gets his own hat back, regardless of the number
of men with hats!
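Readers who want an empirical sanity check can simulate the process. The sketch below (in Python; the simulation and its function names are not part of the text) draws hat permutations uniformly at random, which is one particular distribution satisfying $\Pr[G_i = 1] = 1/n$, and averages the number of fixed points. The sample mean should be close to 1 for any $n$.

```python
import random

def hat_check_trial(n):
    """One trial: shuffle n hats uniformly at random and return G,
    the number of men who receive their own hat (fixed points)."""
    hats = list(range(n))
    random.shuffle(hats)
    # G_i = 1 exactly when man i gets hat i back.
    return sum(1 for i, hat in enumerate(hats) if hat == i)

def estimate_expected_G(n, trials=100_000):
    """Estimate Ex[G] by the sample mean over many independent trials."""
    return sum(hat_check_trial(n) for _ in range(trials)) / trials

if __name__ == "__main__":
    for n in (3, 10, 100):
        # Each estimate should hover near 1, matching Ex[G] = 1.
        print(n, estimate_expected_G(n))
```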
More generally, Linearity of Expectation provides a very good method for computing the expected number of events that will happen.


Theorem 18.5.4. Given any collection of events $A_1, A_2, \ldots, A_n$, the expected number of events that will occur is

$$\sum_{i=1}^{n} \Pr[A_i].$$

For example, $A_i$ could be the event that the $i$th man gets the right hat back. But in general, it could be any subset of the sample space, and we are asking for the expected number of events that will contain a random sample point.
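As a quick illustration of the theorem (this toy computation is not from the text), the sketch below enumerates a small sample space of three fair coin flips, takes $A_i$ to be the event that the $i$th flip is heads, and checks that the expected number of occurring events, computed directly from the definition of expectation, matches $\sum_i \Pr[A_i]$.

```python
from itertools import product

# Sample space: three fair coin flips, each of the 8 outcomes equally likely.
outcomes = list(product("HT", repeat=3))
p = 1 / len(outcomes)

# Events A_i = "the ith flip is heads", represented as subsets of the space.
events = [{w for w in outcomes if w[i] == "H"} for i in range(3)]

# Direct expectation: average, over sample points, of how many events occur.
direct = sum(p * sum(w in A for A in events) for w in outcomes)

# Theorem 18.5.4: just add up the individual event probabilities.
by_theorem = sum(len(A) * p for A in events)

print(direct, by_theorem)  # both print 1.5
```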
