faces. Moreover, the more fixations the viewers made, the more accurate their identification of the other’s emotions.^202 To identify different emotions, viewers must fixate on different parts of the face.^203 Identifying anger, fear, and sadness requires viewers to scan the top half of the face. Identifying disgust and happiness requires them to scan the bottom half. However, viewers can accurately identify surprise by scanning either the top or bottom half of the face.
Audiences are remarkably fast at identifying the emotions of other people on the basis of their
facial expressions. It typically takes audiences only 12 to 25 milliseconds.^204 Audiences can identify
another person’s facial expression of emotion most rapidly when that expression is emotionally congruent with the person’s body posture.^205
Audiences are also remarkably accurate at identifying the emotions of other people on the
basis of their facial expressions.^206 Audiences recognize the facial expressions associated with joy
and disgust with close to 100% accuracy.^207 Their accuracy stems from the fact that smiling for joy and nose wrinkling for disgust are expressions highly specific to those emotions. Audiences are also quite accurate at judging others’ emotions even when they observe their facial expressions only briefly.^208 However, audiences can recognize another person’s emotions more accurately when they see a video of the person making facial expressions than when they view a static photograph of the person’s face.^209
Brain Regions Activated. Neuroscientists find that emotional faces elicit increased activity in the viewer’s amygdala. Emotional faces also increase activity in core regions of the face-processing system such as the fusiform face area located in the lower temporal and occipital lobes of both hemispheres (see Figures 3.4 and 3.5, p. 108).^210 The amygdala is especially attuned to recognizing facial expressions that signal danger or fear.^211 The brain region attuned to recognizing facial expressions of disgust, on the other hand, is the insular cortex, located beneath the juncture of the frontal, temporal, and parietal lobes.^212
Audiences can also make accurate judgments of others’ emotions on the basis of their bodily
postures. A study of audience members’ attributions of six emotions (anger, disgust, fear, happiness,
sadness, and surprise) to static body postures finds that although audience members often confuse
postures that indicate happiness and surprise, they rarely confuse postures indicating the other four
emotions.^213
When the audience can see a person move, their ability to recognize the person’s emotions is
significantly enhanced.^214 Even the motion of isolated body parts, such as the manner in which a person moves her arm while drinking a beverage, waving goodbye, or lifting a chair, can be sufficient to enable viewers to infer the emotional state of the person.^215 When observing another’s movements, viewers infer anger or elation from faster, more energetic, and spatially expansive movements. They infer contempt, sadness, and boredom from movements that are slower and constricted.^216
Brain Regions Activated. Neuroscientists find that emotionally expressive body postures and movements evoke an elevated response in the viewer’s core body perception system located in the temporal and occipital lobes.^217 Observation of such postures also increases activity in the amygdala (see Figures 3.4 and 3.5, p. 108).^218
In addition to using nonverbal visual cues to infer others’ emotions, audiences also infer emo-
tions on the basis of nonverbal vocal cues. In an analysis of how listeners evaluate emotional speech,
12 professional theater actors vocally expressed 14 different emotions. The analysis revealed that
listeners judged each emotion expressed by the actors, such as rage, panic, and elation, to have a dis-
tinct acoustic profile. Listeners inferred anger when the actors spoke in a high and varied pitch that also varied in intensity. Listeners inferred sadness when the actors spoke quietly. Listeners inferred fear when the actors spoke quietly but in a sustained high pitch. And listeners inferred boredom when the actors spoke quietly and slowly in a low-pitched voice.^219