Statistical Methods for Psychology

messages were found in the trash, which is a bit higher than we would expect if the ultimate
disposal of the fliers were independent of the message. If this difference is reliable, what
does this suggest to you about the effectiveness of the message?)
Finally we can take a simple example that illustrates both the additive and the multi-
plicative laws. What is the probability that over two trials (sampling with replacement) I
will draw one blue M&M and one green one, ignoring the order in which they are drawn?
First we use the multiplicative rule to calculate

p(blue, green) = .24 × .16 = .0384
p(green, blue) = .16 × .24 = .0384

Because these two outcomes satisfy our requirement (and because they are the only ones
that do), we now need to know the probability that one or the other of these outcomes will
occur. Here we apply the additive rule:

p(blue, green) + p(green, blue) = .0384 + .0384 = .0768

Thus the probability of obtaining one M&M of each of those colors over two draws is
approximately .08—that is, it will occur a little less than one-tenth of the time.
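The two-step calculation can be checked with a few lines of Python. This is a sketch added for illustration, not part of the text; the counts (24 blue and 16 green M&M's in a bag of 100) come from the example itself.

```python
# Check the multiplicative and additive steps from the M&M example.
# Composition from the text: 24 blue and 16 green in a bag of 100.
p_blue = 24 / 100
p_green = 16 / 100

# Multiplicative rule: each ordered outcome over two independent draws.
p_blue_then_green = p_blue * p_green   # .24 * .16 = .0384
p_green_then_blue = p_green * p_blue   # .16 * .24 = .0384

# Additive rule: either ordering satisfies "one blue and one green."
p_one_of_each = p_blue_then_green + p_green_then_blue

print(round(p_one_of_each, 4))  # 0.0768
```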
Students sometimes get confused over the additive and multiplicative laws because
they sound almost the same when you hear them quickly. One useful way to keep them
apart is to focus on the situations in which each rule applies. In situations calling for the
additive rule, you know that you are going to have one outcome. An M&M that you
draw may be blue or green, but there is only going to be one of them. In the multiplicative
case, we are speaking about at least two outcomes (e.g., the probability that we will get one
blue M&M and one green one). For single outcomes we add probabilities; for multiple
independent outcomes we multiply them.
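The distinction can also be seen by simulation. The following sketch (an addition for illustration, not from the text) draws two M&M's with replacement many thousands of times and counts how often we get one blue and one green; the relative frequency should land near the theoretical .0768.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Bag composition from the text: 24 blue, 16 green, 60 other, out of 100.
bag = ["blue"] * 24 + ["green"] * 16 + ["other"] * 60

trials = 100_000
hits = 0
for _ in range(trials):
    # Sampling with replacement: two independent draws from the full bag.
    draws = [random.choice(bag), random.choice(bag)]
    if sorted(draws) == ["blue", "green"]:  # one of each, order ignored
        hits += 1

estimate = hits / trials
print(estimate)  # should be close to the theoretical .0768
```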

Sampling with Replacement


Why do I keep referring to “sampling with replacement”? The answer goes back to the is-
sue of independence. Consider the example with blue and green M&M’s. We had 24 blue
M&M’s and 16 green ones in the bag of 100 M&M’s. On the first trial the probability of a
blue M&M is 24/100 = .24. If I put that M&M back before I draw again, there will still
be a 24/76 split between blue M&M’s and other colors, and the probability of a blue
M&M on the next draw will still be 24/100 = .24. But if I did not replace the M&M, the
probability of a blue M&M on Trial 2 would depend on the result of Trial 1. If I had drawn
a blue one on Trial 1, there would be 23 blue ones and 76 of other colors remaining, and
p(blue) = 23/99 = .2323. If I had drawn a green one on Trial 1, for Trial 2
p(blue) = 24/99 = .2424. So when I sample with replacement, p(blue) stays the same from
trial to trial, whereas when I sample without replacement the probability keeps changing.
To take an extreme example, if I sample without replacement, what is the probability of
exactly 25 blue M&M’s out of 60 draws? The answer, of course, is .00, because there are
only 24 blue M&M’s to begin with, and it is impossible to draw 25 of them. Sampling with
replacement, however, would make such a result possible, though the probability would
be only .0011.
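The .0011 figure can be reproduced with the binomial formula, since sampling with replacement makes the 60 draws independent; the same snippet (a sketch added for illustration, not from the text) also shows why 25 blue M&M's are impossible without replacement.

```python
import math

n, k = 60, 25          # 60 draws, looking for exactly 25 blue
p_blue = 24 / 100      # 24 blue M&M's in a bag of 100

# With replacement the draws are independent, so the number of blue
# M&M's is binomial: C(60, 25) * .24^25 * .76^35.
p_with = math.comb(n, k) * p_blue**k * (1 - p_blue)**(n - k)
print(round(p_with, 4))  # 0.0011

# Without replacement there are only 24 blue M&M's in the bag, so there
# is no way to choose 25 of them: C(24, 25) = 0, and the probability is .00.
print(math.comb(24, 25))  # 0
```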

Joint and Conditional Probabilities


Two types of probabilities play an important role in discussions of probability: joint proba-
bilities and conditional probabilities.
A joint probability is defined simply as the probability of the co-occurrence of two or
more events. For example, in Geller’s study of supermarket fliers, the probability that a flier
would both contain a message about littering and be found in the trash is a joint probability,


Chapter 5: Basic Concepts of Probability

