
to most real-life cases. For suppose that the conditional in question is, ‘For
all x, if x is a raven, then x is black’. It makes sense to test this by looking
for ravens, and perhaps by checking on the black things one comes across.
But it makes no practical sense at all to conduct a search of non-black
things, to try to find a potential falsifier; there are just too many of them!
So confirmation bias can be seen, not as flat-out irrational, but rather as an
overextension to the four-card case of a heuristic which is normally ap-
propriate and rational. And given that the heuristic may in any case be
implicit and non-conscious, it is easy to see how the overextension should
come about.
Even if processing effort is available in abundance, pressure of time may
make a partially reliable procedure which reaches a conclusion swiftly
more valuable than a slower but more exact computation. In other
words, ordinary human reasoning may well use ‘quick and dirty’ methods
for practical reasons. To give a simple example involving explicit
calculation: if one wants to convert a Celsius temperature into degrees
Fahrenheit, the exact formula is F = 9C/5 + 32. But if you are not
bothered about a degree or two, and only want to know roughly how
warm the weather is in degrees Fahrenheit, then the approximation
F = 2C + 30 will serve well enough, while making the calculation
much quicker and easier to do in your head. Here is an instance in which
those converting from one temperature scale to another deliberately
adopt a somewhat rough-and-ready approximation which is good enough
for practical purposes.
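
To make the trade-off concrete, here is a minimal sketch (in Python, purely illustrative and not part of the original text) comparing the exact formula with the 'quick and dirty' mental approximation:

    def fahrenheit_exact(c):
        """Exact Celsius-to-Fahrenheit conversion: F = 9C/5 + 32."""
        return 9 * c / 5 + 32

    def fahrenheit_quick(c):
        """Rough mental approximation: F = 2C + 30."""
        return 2 * c + 30

    # Compare the two rules over a range of everyday temperatures.
    for c in (0, 10, 20, 30):
        exact = fahrenheit_exact(c)
        quick = fahrenheit_quick(c)
        print(f"{c}C -> exact {exact}F, quick {quick}F, error {quick - exact}F")

Running this shows the approximation is exact at 10 degrees Celsius and drifts by only a few degrees across the everyday range (2 degrees low at 0, 2 high at 20, 4 high at 30): a fallible but serviceable rule, which is just the point of the example.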
If this kind of thing can happen at an explicit level of representation,
something similar may occur at implicit levels of representation and within
modular reasoning systems, as well. Tests of subjects on syllogistic reason-
ing, for example, suggest that assessments of validity by subjects without
training in logic are indeed based on approximate principles which work
fairly well, without being infallible (Oaksford and Chater, 1993, 1995). It is
also well established that subjects exhibit ‘belief bias’ in assessing argu-
ments – that is, they are much more willing to accept an argument with a
believable conclusion, apparently using the likelihood of the conclusion as
an index to the quality of the reasoning (Oakhill and Johnson-Laird, 1985;
Oakhill et al., 1989). Again, this is a fallible but quite reliable test, given
that subjects will normally only countenance arguments whose premises
they already believe.


5.1 Two notions of rationality

For these kinds of reasons, Evans and Over (1996) draw a distinction
between two different notions of rationality, one of which is tied to general

