
Methodology
Although experts may differ in how they apply the specifics of card sorting methodologies, all techniques require a representative sample of the Web site's users to sort a stack of 3×5 index cards into groups. Each card displays the name of one of the Web site's objects or pieces of information, such as downloadable code, site functions and facilities, author information, content headings, or topic information. Some cards provide additional information; others do not.
Working individually or in small groups, participants
organize cards in any way that is meaningful to them, cre-
ating any number of groups with any number of cards in
each group. Participants may move cards around, merge
card groups, or even separate card groups as they go.
When finished sorting, they label each group of cards. For hierarchical Web sites, the first sort provides the lowest level of the site's hierarchy. To obtain a higher or more general level, participants reorganize their piles into larger groups and label them with general or comprehensive headings. At
the conclusion of all sorting tasks, the participants meet
individually with the researcher to describe each card
group, explaining why the cards were sorted into each
object category.

Data Analysis and Interpretation
Card sorting data can be interpreted manually or with software (Fuccella & Pizzolato, 1998; IBM, 1999; Martin, 1999b; National Institutes of Standards, 2002). Card sorting produces results that may be analyzed either qualitatively or quantitatively to establish the consensus among participants' classification, labeling, and organization of information. Comparing similarities in grouping schemes across ten or more participants provides adequate information to identify an emerging site hierarchy. In quantitative analysis, statistical evaluations such as cluster analysis or similarity matrices reveal groupings of objects. Group interviews establish consensus for labels and terminology.
The resulting data provide a basic structure for the Web site (i.e., the number of layers, the labeling of the major branches, and any replication of content within the Web site structure). In interpreting the data, the key is to capture the most frequently occurring structure and labeling so that the design best represents participants' ways of thinking about and organizing the Web site content.
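A minimal sketch, in Python, of the quantitative analysis described above: it counts how often each pair of cards is sorted into the same group, turns those counts into a similarity matrix, and applies average-linkage cluster analysis. The card names, participant sorts, and clustering threshold are hypothetical, chosen only for illustration; they are not drawn from the studies cited in this section.

from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

cards = ["Downloads", "Site map", "Author bios", "Contact form", "Tutorials"]

# Each participant's sort is a list of card groups (illustrative data only).
sorts = [
    [["Downloads", "Tutorials"], ["Author bios", "Contact form"], ["Site map"]],
    [["Downloads", "Tutorials", "Site map"], ["Author bios", "Contact form"]],
    [["Downloads", "Tutorials"], ["Author bios"], ["Contact form", "Site map"]],
]

index = {name: i for i, name in enumerate(cards)}
co_occurrence = np.zeros((len(cards), len(cards)))

# Count how often each pair of cards lands in the same group.
for sort in sorts:
    for group in sort:
        for a, b in combinations(group, 2):
            co_occurrence[index[a], index[b]] += 1
            co_occurrence[index[b], index[a]] += 1

# Convert shared-group counts into distances (0 = always grouped together).
distance = 1.0 - co_occurrence / len(sorts)
np.fill_diagonal(distance, 0.0)

# Average-linkage clustering over the condensed distance matrix; the cutoff
# of 0.5 is arbitrary and would be tuned against the actual data.
tree = linkage(squareform(distance), method="average")
clusters = fcluster(tree, t=0.5, criterion="distance")
for name, cluster_id in zip(cards, clusters):
    print(f"{name}: cluster {cluster_id}")

The emerging clusters suggest candidate branches of the site hierarchy, while the labels themselves still come from participants' own headings and the follow-up interviews.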

Contextual Inquiry
Contextual inquiry, a field research technique borrowed from anthropology, identifies how computer users interact with computers as they do their jobs. Holtzblatt and Jones (1993) used contextual inquiry in the computer industry in the early 1990s and developed a set of principles and practices for user-centered system design. System engineers observed users' actions as they went about their workday to understand the types of information users needed to work. In this method, engineers acquired the data needed to design hardware and software that enhance users' workflow and usage patterns. Since then, others have adapted this technique as a general method for information gathering beyond systems analysis.

Methodology
Contextual inquiry is not a step-by-step process but a set of concepts defining how user information is collected and analyzed (Holtzblatt & Jones, 1993). It requires understanding the user's work environment, establishing a collaborative relationship between the user and the Web site designer, and acknowledging the perspective from which the designer approaches the inquiry.
A team of designers observes users throughout a typ-
ical workday and asks them to articulate what they are
doing and thinking. The designers explore any confusion
or contradictions in their assumptions about the user’s
workflow or activities, continually validating any new in-
formation with the user. Probing questions expand the
designer’s understanding of the user’s job. Users lead the
discussion, covering what they determine is important,
without direction from the designer.

Data Analysis and Interpretation
Once the designers complete all inquiries and compile
their data, they begin the analyses. They review all data,
interpret it, and share ideas, issues, observations, and
questions. They discuss their individual and collective
foci, generating their understanding and interpretations
of how users will interact with the technology. Next, they
reorder and group similar ideas and thoughts together
and label the groupings. The labels represent the work do-
mains and design areas for the system. The items beneath
the labels comprise the developers’ design ideas and users’
work details. Once the grouping is complete, the designers have a full picture of the computer system requirements needed to fit users' needs.
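As a small illustration of the grouping and labeling step just described, the sketch below records labeled affinity groups as a simple data structure; the labels and notes are hypothetical and are not taken from Holtzblatt and Jones (1993).

from dataclasses import dataclass

@dataclass
class Note:
    text: str
    kind: str  # "work detail" (observed) or "design idea" (proposed)

# Each label names a work domain or design area; the items beneath it mix
# users' work details with the designers' design ideas.
affinity_groups = {
    "Order tracking": [
        Note("Users re-check order status several times a day", "work detail"),
        Note("Surface status changes on the start page", "design idea"),
    ],
    "Report preparation": [
        Note("Monthly figures are copied by hand into a spreadsheet", "work detail"),
        Note("Offer a one-click export of monthly figures", "design idea"),
    ],
}

for label, notes in affinity_groups.items():
    print(label)
    for note in notes:
        print(f"  [{note.kind}] {note.text}")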
As usability practitioners work with the data, they be-
gin to interpret the findings and to make sense of how
users work, what they do, and how they go about their
daily activities. The key then becomes translating those findings into implications for the design of the Web site.

Heuristic Evaluations
Heuristic evaluations, sometimes called expert reviews,
are systematic inspections of Web sites judged against
usability design standards. The technique identifies ma-
jor usability problems and produces a comprehensive
site evaluation, focusing on the site’s functions as they
relate to users’ abilities and needs.

Methodology
Because heuristic evaluations measure multiple facets of a Web site and different usability practitioners will discover different kinds of problems, two or more usability practitioners, working independently, should review a Web site and identify its strengths and weaknesses. These experts use a predetermined checklist of design principles known to produce highly usable sites. Usability practitioners first navigate the Web site to obtain a sense of the site's global presentation and an understanding of how individual elements interact. Then they navigate the Web site as many times as necessary to evaluate independent elements systematically against usability principles. As they test individual elements against the checklist, they examine closely any Web site feature warranting further investigation.
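As a small illustration of how such a review might be recorded, the sketch below collects evaluators' checklist findings and merges them so that problems reported independently by several practitioners stand out. The heuristics, page locations, and severity scale are hypothetical and do not reproduce any published checklist.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str   # checklist principle the issue violates
    location: str    # page or element where the issue was observed
    severity: int    # e.g., 1 = cosmetic ... 4 = blocks users entirely
    evaluator: str

def merge_findings(findings):
    """Group findings by heuristic and location so issues reported by
    several evaluators independently stand out."""
    merged = defaultdict(list)
    for f in findings:
        merged[(f.heuristic, f.location)].append(f)
    return merged

# Illustrative findings from two independent reviewers.
reviews = [
    Finding("Clear labels", "/downloads", 3, "evaluator A"),
    Finding("Clear labels", "/downloads", 2, "evaluator B"),
    Finding("Visible system status", "/search", 4, "evaluator B"),
]

for (heuristic, location), items in merge_findings(reviews).items():
    worst = max(f.severity for f in items)
    print(f"{heuristic} at {location}: reported by {len(items)} "
          f"evaluator(s), worst severity {worst}")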