that we discussed earlier. Given a fixed set of parameters, one computes
similarity between test item and exemplar by multiplying the four parameters.
As examples, the similarity between a test item and exemplar that have
different values on every dimension would be a_EH · a_ES · a_NL · a_MH, while
the similarity between a test item and exemplar that have identical values on
all dimensions would be 1 × 1 × 1 × 1 = 1. Some intermediate cases are shown
in the middle part of figure 12.4. The bottom part of figure 12.4 shows how
these similarity computations between test item and exemplar are cumulated
over all relevant exemplars to derive a final categorization of the test item.
The probability of assigning a test item to, say, concept A is equal to the
sum of the similarities of the test item to all stored exemplars of A,
divided by the sum of the similarities of the test item to all stored
exemplars of both A and B (this instantiates assumption 3b).
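
In symbols (notation added here; the four parameters a_EH, a_ES, a_NL, a_MH
are the ones named in the text), the computation just described amounts to:

\[
\mathrm{sim}(t, e) \;=\; \prod_{i \,\in\, \{EH,\, ES,\, NL,\, MH\}} s_i(t, e),
\qquad
s_i(t, e) \;=\;
\begin{cases}
1 & \text{if } t \text{ and } e \text{ match on dimension } i,\\
a_i & \text{if they mismatch,}
\end{cases}
\]

\[
P(A \mid t) \;=\;
\frac{\sum_{e \in A} \mathrm{sim}(t, e)}
{\sum_{e \in A} \mathrm{sim}(t, e) \;+\; \sum_{e \in B} \mathrm{sim}(t, e)}.
\]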
How much hinges on computing similarity by a multiplicative rule rather than
by an additive one? Quite a bit, as the two cases illustrated in the middle
part of figure 12.4 demonstrate. Following the multiplicative rule, instance 2
should be easier to learn and categorize than instance 1. This essentially
reflects the fact that instance 2 is highly similar (that is, differing on
only one dimension) to two exemplars of category A (instances 1 and 3) but is
not highly similar to any exemplar of concept B; instance 1, on the other
hand, is highly similar to only one exemplar in A (instance 2) but to the
first two exemplars in B. Had we computed similarity by an additive rule, this
prediction would reverse. This can be seen by noting that instance 1 shares an
average of more than two values with other exemplars of A, while instance 2
shares an average of exactly two values with other A exemplars. (Both
instances share the same average number of values with B exemplars.) These
contrasting predictions were tested in a number of artificial-concept
experiments by Medin and Schaffer (1978), and the results uniformly supported
the multiplicative rule: instance 2 was learned faster and categorized more
efficiently than instance 1. In a follow-up study (Medin and Smith, 1981) we
found that the superiority of instance 2 held across widely different
instructions, including ones that implicitly suggested an additive rule to
subjects.
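
Figure 12.4's stimulus set is not reproduced here, but the contrast can be
checked numerically against the classic Medin and Schaffer (1978) design of
five category-A and four category-B exemplars over four binary dimensions,
whose similarity relations match those described above (on that mapping,
instance 1 = 1110 and instance 2 = 1010). A minimal sketch, assuming that
structure corresponds to figure 12.4 and using a single illustrative mismatch
parameter of 0.3 on every dimension:

    # Classic Medin and Schaffer (1978) 5-4 structure over four binary
    # dimensions; assumed here to correspond to figure 12.4's stimuli.
    CATEGORY_A = ["1110", "1010", "1011", "1101", "0111"]  # A instances 1-5
    CATEGORY_B = ["1100", "0110", "0001", "0000"]

    S = 0.3  # one mismatch parameter standing in for a_EH, a_ES, a_NL, a_MH

    def sim_multiplicative(test, exemplar):
        """Context model: multiply 1 per matching dimension, S per mismatch."""
        result = 1.0
        for t, e in zip(test, exemplar):
            result *= 1.0 if t == e else S
        return result

    def sim_additive(test, exemplar):
        """Additive alternative: sum 1 per match, S per mismatch."""
        return sum(1.0 if t == e else S for t, e in zip(test, exemplar))

    def prob_a(test, sim):
        """P(assign test to A): summed A-similarity over summed A+B similarity."""
        sum_a = sum(sim(test, e) for e in CATEGORY_A)
        sum_b = sum(sim(test, e) for e in CATEGORY_B)
        return sum_a / (sum_a + sum_b)

    for rule in (sim_multiplicative, sim_additive):
        p1, p2 = prob_a("1110", rule), prob_a("1010", rule)
        print(f"{rule.__name__}: P(A|instance 1)={p1:.3f}  P(A|instance 2)={p2:.3f}")

    # Multiplicative rule: instance 2 wins (about 0.85 vs. 0.71).
    # Additive rule: the ordering reverses (about 0.61 vs. 0.60,
    # now favoring instance 1).

With these exemplars the multiplicative rule favors instance 2 while the
additive rule favors instance 1, reproducing the reversal described in the
text; the exact probabilities, of course, depend on the parameter value
assumed.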
Admittedly, this particular contrast between multiplicative and additive
similarity computations is highly specific, and is probably only realizable
with artificial materials. Still, it provides some basis for favoring the
context model’s way of instantiating the exemplar-based processing
assumptions over that specified by the best-examples model. Other reasons for
favoring the multiplicative rule will be given later in the chapter.


Explanations of Empirical Phenomena There is no need to detail how the
context model handles our standard list of phenomena, since these accounts
are virtually identical to those given for the best-examples model. Again,
the explicitly disjunctive nature of an exemplar-based representation
immediately accounts for the existence of disjunctive concepts, the failure
to specify defining properties, and the use of non-necessary properties
during categorization. And to the extent that the learning strategies posited
by the context model eventuate in a representation dominated by typical
exemplars, the model would explain typicality effects in the same manner as
the best-examples model.

