Foundations of Cognitive Psychology: Preface


Tasks Capturing Real-Life Expertise. There are few instances of real-life expertise in which superior performance can be demonstrated under relatively standardized conditions. Mental calculators and memory experts provide such instances. They often exhibit their performance under conditions similar to those used in traditional experiments. In both of these cases it is easy to define a large pool of different stimuli (e.g., 10 billion possible multiplications of two 5-digit numbers, or 100 trillion digit sequences of 14 digits). Drawing on this pool of items, the experimenter can observe the performance in a large number of different trials and accumulate information on the cognitive processes underlying the expertise. Similarly, some types of psychomotor performance, such as typing, and some sporting events can easily be imported into the laboratory.
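The pool sizes quoted above follow from simple counting: there are 90,000 five-digit numbers, so ordered pairs of them number 90,000 squared, and a 14-digit sequence has ten choices at each position. A minimal sketch of that arithmetic (the variable names are illustrative, not from the original text):

```python
# Back-of-the-envelope check of the stimulus-pool sizes cited above.

five_digit_numbers = 99_999 - 10_000 + 1      # 90,000 five-digit numbers
multiplications = five_digit_numbers ** 2     # ordered pairs of operands
digit_sequences_14 = 10 ** 14                 # 10 choices per position, 14 positions

print(f"{multiplications:,}")      # 8,100,000,000  (~10 billion)
print(f"{digit_sequences_14:,}")   # 100,000,000,000,000  (100 trillion)
```

The multiplication pool is 8.1 billion rather than exactly 10 billion, consistent with the text's rounded figure.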
Apart from the preceding cases, the design of standardized tasks to capture real-life expert performance is difficult. The problem is somewhat similar to that of isolating phenomena in the natural and biological sciences. By careful analysis of the expert performance in real life, we try to identify recurrent activities that can be reproduced under controlled conditions. In those domains in which expertise can be measured, it is important to restrict the focus to those activities that are involved in producing the relevant performance or resulting product. One should search for goal-directed activities that result in overt behavior that can be reproduced by presentation of the appropriate stimuli.
A nice illustration of this procedure comes from the previously described research on chess, in which de Groot (1978) designed the task of selecting the best next move for a given middle-game position. It should be possible to collect a large number of such positions with which even top-level chess players would be unfamiliar. In extracting a single chess position from a chess game, one is faced with a problem that is common in research on expertise, namely, the determination of the correct response, or the reliable evaluation of selected moves. Given that no method was available at the time that could provide that information objectively, de Groot (1978) spent an extended period carefully analyzing the selected chess position to evaluate the relative merits of different moves. A different method of dealing with this problem was offered in a more recent study by Saariluoma (1984), who selected chess positions that had clearly discernible best next moves. Both of these methods are oriented toward finding or designing a small set of tasks, and they cannot easily be extended into specifying a large population of tasks that could be claimed to capture chess expertise.
In most other complex task domains, such as physics and medical diagnosis, investigators tend to select a small number of tasks without specifying the population from which those tasks were chosen as a representative sample. One reason for this is that a detailed task analysis of even a single complex problem is difficult and extraordinarily time-consuming. More important, our knowledge of complex domains of expertise is incomplete, and it would not at this time be possible to specify a population of tasks to capture such expertise. Many scientists, however, are working on building expert systems in which the tasks and prerequisite knowledge must be specified, and other researchers are working on describing the formal characteristics of various task environments (see Charness, 1991).


Prospects and Limits of the Empirical Study of Expertise