Cognition ❮ 141
Models of Memory
Psychologists use different models to explain memory; no single model accounts for all memory phenomena.
Information Processing Model
The general information processing model compares our mind to a computer. According
to this model, input is information. First, input is encoded when our sensory receptors send
impulses that are registered by neurons in our brain, similar to getting electronic information
into our computer’s CPU (central processing unit) by keyboarding. We must store and
retain the information in our brain for some period of time, ranging from a moment to a
lifetime, similar to saving information on our computer’s hard drive. Finally, information
must be retrieved upon demand when it is needed, similar to opening up a document or
application from the hard drive.
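The encode–store–retrieve analogy above can be sketched as a toy program. The class and names here are purely illustrative, not part of any psychological model:

```python
# Toy sketch of the information processing model's computer analogy:
# encode (get input in), store (retain it), retrieve (recall on demand).
class Memory:
    def __init__(self):
        self._store = {}  # analogous to the hard drive

    def encode(self, cue, information):
        # sensory input registered, like keystrokes reaching the CPU
        self._store[cue] = information

    def retrieve(self, cue):
        # retrieval on demand, like opening a saved document
        return self._store.get(cue, "forgotten")

memory = Memory()
memory.encode("capital of France", "Paris")
print(memory.retrieve("capital of France"))  # Paris
print(memory.retrieve("capital of Mars"))    # forgotten
```

As in the model, retrieval only succeeds for information that was actually encoded and stored; anything never encoded cannot be recalled.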
Because we cannot process all of the incoming sensory stimulation available to us, we
seek out, focus on, and select aspects of that information. Donald
Broadbent modeled human memory and thought processes using a flowchart that showed
competing information filtered out early, as it is received by the senses and analyzed in the
stages of memory. Attention is the mechanism by which we restrict information. Attending
to one task over another requires selective or focused attention. We have great difficulty
dividing attention between two complex tasks at once, such as listening to two different
conversations or texting while driving. In dichotic listening
experiments, participants heard different messages through left and right headphones
simultaneously. They were directed to attend to one of the messages and repeat back the
words (shadow it). Very little about the unattended message was processed, unless the
participant’s name was said, which was noticed (the cocktail party effect). When the cock-
tail party effect occurred, information was lost from the attended ear. According to Anne
Treisman’s feature integration theory, you must focus attention on complex incoming
auditory or visual information in order to synthesize it into a meaningful pattern.
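The dichotic listening experiments described above can be sketched as a toy simulation. The function, channel labels, and word lists here are illustrative assumptions, not the actual experimental procedure:

```python
# Toy sketch of selective attention in a dichotic listening task:
# the attended channel is shadowed (repeated back), while the
# unattended channel is filtered out -- unless it contains the
# listener's own name (the cocktail party effect).
def shadow(left, right, attend="left", name="Anne"):
    attended, unattended = (left, right) if attend == "left" else (right, left)
    processed = list(attended)                      # words repeated back
    noticed = [w for w in unattended if w == name]  # the name breaks through
    return processed, noticed

processed, noticed = shadow(
    ["the", "cat", "sat"], ["call", "Anne", "now"], attend="left")
print(processed)  # ['the', 'cat', 'sat']
print(noticed)    # ['Anne']
```

The filter here discards nearly everything from the unattended channel, mirroring Broadbent's early-filtering idea, while the name check mimics the one salient exception participants reliably noticed.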
Levels-of-Processing Model
According to Fergus Craik and Robert Lockhart’s levels-of-processing theory, how long
and how well we remember information depends on how deeply we process the informa-
tion when it is encoded. With shallow processing, we use structural encoding of superficial
sensory information that emphasizes the physical characteristics, such as lines and curves,
of the stimulus as it first comes in. We assign no relevance to shallow-processed
information. For example, once the traffic passes and no more is coming, we cross the street.
We notice that vehicles pass, but don’t pay attention to whether cars, bikes, or trucks
make up the traffic and don’t remember any of them. Semantic encoding, associated with
deep processing, emphasizes the meaning of verbal input. Deep processing occurs when
we attach meaning to information and create associations between the new memory and
existing memories (elaboration). Most of the information we remember over long periods
is semantically encoded. For example, if a new red sports car, just like the one you
dream about owning, zoomed past you with the license plate “FASTEST1” and
your English teacher in the driver’s seat, you would probably remember it. One of the best
ways to facilitate later recall is to relate the new information to ourselves (self-referent
encoding), making it personally meaningful.
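The core claim of the levels-of-processing theory, that deeper encoding predicts better later recall, can be sketched as a toy function. All numbers below are illustrative placeholders, not experimental data:

```python
# Toy sketch of the levels-of-processing idea: the more meaningful
# associations formed at encoding, the better the predicted recall.
def recall_probability(depth):
    # depth: number of meaningful associations formed at encoding
    levels = {0: 0.1,   # shallow/structural: physical features only
              1: 0.4,   # some meaning attached
              2: 0.7,   # semantic elaboration with existing memories
              3: 0.9}   # self-referent encoding (personally meaningful)
    return levels[min(depth, 3)]

print(recall_probability(0))  # 0.1 -- vehicles in passing traffic
print(recall_probability(3))  # 0.9 -- your teacher in the dream car
```

The shape of the mapping, monotonically increasing with depth, is the point; the particular probabilities are made up for illustration.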
Three-Stage Model
A more specific information processing model, the Atkinson-Shiffrin three-stage model
of memory, describes three different memory systems characterized by time frames: sensory