Before we had the results of the drug test, our subjective probability of his guilt was .10 because only 10% of the team used steroids. After the positive drug test, our subjective probability increased, but perhaps not as much as you would have expected. The posterior probability is now .413.
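The test's hit rate and false-positive rate were given in the part of the example that precedes this passage and are not reproduced here; for illustration, assume p(positive | user) = .95 and p(positive | nonuser) = .15, values chosen because they reproduce the posterior quoted in the text. Bayes' theorem then gives

$$
p(\text{user} \mid \text{positive})
  = \frac{p(\text{positive} \mid \text{user})\,p(\text{user})}
         {p(\text{positive} \mid \text{user})\,p(\text{user}) + p(\text{positive} \mid \text{nonuser})\,p(\text{nonuser})}
  = \frac{(.95)(.10)}{(.95)(.10) + (.15)(.90)}
  = \frac{.095}{.230}
  = .413
$$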
As I said above, one of the powerful things about Bayes' theorem is that you can work with it iteratively. In other words, you can now collect another piece of data (perhaps that he has a needle in his possession), take .413 as your new prior probability, include the probabilities associated with the needle, and calculate a new posterior probability. In this way we can accumulate data and keep refining our estimate.
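As a minimal sketch of this iterative use, the short Python function below applies Bayes' theorem twice, feeding the first posterior in as the prior for the second step. The drug-test likelihoods are the same illustrative values assumed above, and the needle likelihoods are purely hypothetical placeholders, not values from the text.

def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """One step of Bayes' theorem: turn a prior into a posterior after one piece of data."""
    numerator = p_data_given_h * prior
    denominator = numerator + p_data_given_not_h * (1 - prior)
    return numerator / denominator

# Step 1: the positive drug test.  Assumed likelihoods (consistent with the text):
# p(positive | user) = .95, p(positive | nonuser) = .15, prior p(user) = .10.
after_test = bayes_update(prior=0.10, p_data_given_h=0.95, p_data_given_not_h=0.15)
print(round(after_test, 3))      # 0.413

# Step 2: treat .413 as the new prior and fold in the needle evidence.
# The needle likelihoods below are purely hypothetical.
after_needle = bayes_update(prior=after_test, p_data_given_h=0.60, p_data_given_not_h=0.10)
print(round(after_needle, 3))    # about 0.81 with these made-up numbers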
A second feature of Bayes' theorem is that it is useful even if some of our probabilities are just intelligent guesses. For example, if the drug company had refused to tell us how many nonusers tested positive and we took .20 as a tentative estimate, our resulting posterior probability would be .345, which isn't that far off from .413. In other words, weak evidence is still better than no evidence.
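With the same assumed hit rate of .95, substituting the tentative guess of .20 for the false-positive rate gives

$$
p(\text{user} \mid \text{positive})
  = \frac{(.95)(.10)}{(.95)(.10) + (.20)(.90)}
  = \frac{.095}{.275}
  = .345
$$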

A Second Example


There has been a lot of work in human decision making that has been based on applications
of Bayes' theorem. Much of it focuses on comparing what people should do or say in a situation with what they actually do or say, for the purpose of characterizing how people really make decisions. A famous problem was posed to decision makers by Tversky and
Kahneman (1980). This problem involved deciding which cab company was involved in an
accident. We are told that there was an accident involving one of the two cab companies
(Green Cab and Blue Cab) in the city, but we are not told which one it was. We know that
85% of the cabs in that city are Green, and 15% are Blue. The prior probabilities then,
based on the percentage of Green and Blue cabs, are .85 and .15. If that were all you knew
and were then told that someone was just run over by a cab, your best estimate would be
that the probability of it being a Green cab is .85. Then a witness comes along who thinks
that it was a Blue cab. You might think that was conclusive, but identifying colors at night
is not a foolproof task, and the insurance company tested our informant and found that he
was able to identify colors at night with only 80% accuracy. Thus if you show him a Blue
cab, the probability that he will correctly say Blue is .80, and the probability that he will
incorrectly say Green is .20. (Similarly if the cab is Green.) So the conditional probability that the witness will say Blue, given that the cab actually was Blue, is .80, and the conditional probability that he will say Blue, given that the cab was Green, is .20. This information is sufficient
to allow you to calculate the posterior probability that the cab was a Blue cab given that the
witness said it was blue.
In the following formula let B stand for the event that it was a Blue cab, and let b stand for the event that the witness called it blue. Similarly for G and g.

$$
p(B \mid b) = \frac{p(b \mid B)\,p(B)}{p(b \mid B)\,p(B) + p(b \mid G)\,p(G)}
            = \frac{(.80)(.15)}{(.80)(.15) + (.20)(.85)}
            = \frac{.12}{.12 + .17}
            = \frac{.12}{.29}
            = .414
$$

Most of the participants in Tversky and Kahneman's experiment guessed that the probability that it was the Blue cab was around .80, when in fact the correct answer is approximately .41. Thus Kahneman and Tversky concluded that judges place too much weight on the testimony of the witness and too little weight on the base rates.

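As a quick numeric check, the same arithmetic can be carried out in a couple of lines of Python:

# p(B | b) with p(b | B) = .80, p(B) = .15, p(b | G) = .20, p(G) = .85.
posterior_blue = (0.80 * 0.15) / (0.80 * 0.15 + 0.20 * 0.85)
print(round(posterior_blue, 3))   # 0.414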