
Example 1.4.7. Suppose we want to investigate the percentage of abused children
in a certain population. The events of interest are: a child is abused (A) and its
complement, a child is not abused (N = A^c). For the purposes of this example, we
assume that P(A) = 0.01 and, hence, P(N) = 0.99. The classification as to whether
a child is abused or not is based upon a doctor's examination. Because doctors are
not perfect, they sometimes classify an abused child (A) as one that is not abused
(N_D, where N_D means classified as not abused by a doctor). On the other hand,
doctors sometimes classify a nonabused child (N) as abused (A_D). Suppose these
error rates of misclassification are P(N_D | A) = 0.04 and P(A_D | N) = 0.05; thus
the probabilities of correct decisions are P(A_D | A) = 0.96 and P(N_D | N) = 0.95.
Let us compute the probability that a child taken at random is classified as abused
by a doctor. Because this can happen in two ways, A ∩ A_D or N ∩ A_D, we have


P(A_D) = P(A_D | A)P(A) + P(A_D | N)P(N) = (0.96)(0.01) + (0.05)(0.99) = 0.0591,


which is quite high relative to the probability of an abused child, 0.01. Further, the
probability that a child is abused when the doctor classified the child as abused is


P(A | A_D) = P(A ∩ A_D) / P(A_D) = (0.96)(0.01) / 0.0591 = 0.1624,

which is quite low. In the same way, the probability that a child is not abused
when the doctor classified the child as abused is 0.8376, which is quite high. The
reason that these probabilities are so poor at recording the true situation is that the
doctors’ error rates are so high relative to the fraction 0.01 of the population that
is abused. An investigation such as this would, hopefully, lead to better training of
doctors for classifying abused children. See also Exercise 1.4.17.
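
The arithmetic in this example is easy to check by machine. The following Python
sketch is not part of the text and the variable names (p_A, p_AD_given_A, and so
on) are ours; it computes P(A_D) by the law of total probability and P(A | A_D)
by Bayes' theorem for the rates assumed above.

    # Sketch (illustrative only): reproducing the numbers of Example 1.4.7.
    p_A = 0.01            # P(A): child is abused
    p_N = 1 - p_A         # P(N): child is not abused
    p_AD_given_A = 0.96   # P(A_D | A): abused child classified as abused
    p_AD_given_N = 0.05   # P(A_D | N): nonabused child classified as abused

    # Law of total probability: P(A_D) = P(A_D|A)P(A) + P(A_D|N)P(N)
    p_AD = p_AD_given_A * p_A + p_AD_given_N * p_N

    # Bayes' theorem: P(A | A_D) = P(A_D|A)P(A) / P(A_D)
    p_A_given_AD = p_AD_given_A * p_A / p_AD

    print(f"P(A_D)     = {p_AD:.4f}")              # 0.0591
    print(f"P(A | A_D) = {p_A_given_AD:.4f}")      # 0.1624
    print(f"P(N | A_D) = {1 - p_A_given_AD:.4f}")  # 0.8376

Rerunning the sketch with a different value of p_A shows how strongly the
posterior probability P(A | A_D) depends on the base rate 0.01.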


1.4.1 Independence

Sometimes it happens that the occurrence of event A does not change the probability
of event B; that is, when P(A) > 0,
P(B | A) = P(B).

In this case, we say that the events A and B are independent. Moreover, the
multiplication rule becomes


P(A ∩ B) = P(A)P(B | A) = P(A)P(B). (1.4.4)

This, in turn, implies, when P(B) > 0, that


P(A | B) = P(A ∩ B) / P(B) = P(A)P(B) / P(B) = P(A).

Note that if P(A) > 0 and P(B) > 0, then by the above discussion, independence
is equivalent to
P(A ∩ B) = P(A)P(B). (1.4.5)
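
To see Equation (1.4.5) at work numerically, here is a short Python sketch (ours,
not from the text) that estimates P(A), P(B), and P(A ∩ B) by simulation for two
events defined on independent rolls of a fair die; the relative frequency of A ∩ B
should be close to the product of the other two.

    # Sketch (illustrative only): Monte Carlo check of Equation (1.4.5).
    import random

    random.seed(1)
    n = 200_000
    count_A = count_B = count_AB = 0
    for _ in range(n):
        d1 = random.randint(1, 6)   # first roll
        d2 = random.randint(1, 6)   # second roll, generated independently
        A = (d1 % 2 == 0)           # event A: first roll is even, P(A) = 1/2
        B = (d2 >= 5)               # event B: second roll is 5 or 6, P(B) = 1/3
        count_A += A
        count_B += B
        count_AB += (A and B)

    # For independent events, the empirical frequencies should satisfy
    # P(A ∩ B) ≈ P(A)P(B) = 1/6 ≈ 0.1667.
    print(count_AB / n, (count_A / n) * (count_B / n))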


What if either P(A) = 0 or P(B) = 0? In either case, the right side of (1.4.5) is 0.
However, the left side is 0 also because A ∩ B ⊂ A and A ∩ B ⊂ B. Hence, we take
Equation (1.4.5) as our formal definition of independence; that is,
