
Table 3  Selected Methodologies for Usability Testing

Type of Information Needed | Site Development Level | Usability Tests
Formative  | Site objects (key parts or elements to be used in the Web site design and content) and their organization | Card sorting
Formative  | Site objects, conceptual design, and their organization | Contextual inquiry
Formative  | Assessment of prototype | Cognitive walkthrough
Evaluative | Assessment of prototype | Heuristic evaluation
Evaluative | Assessment of prototype | Pluralistic walkthrough
Evaluative | Assessment of prototype or finished site | Design review
Evaluative | Assessment of prototype or finished site | Verbal protocol analysis
Remote     | Assessment of prototype or finished site | Automated usage tracking and shared windowing

Formative information provides ideas for the development of the Web site. Evaluative information supports revising and enhancing the design, content, and organization of prototypes and finished Web sites. For current Web sites, evaluative information guides revision and redesign to make the sites easier to use.
The research questions, study design, and type of us-
ability testing suggest the variables to be measured. For
example, usability practitioners often measure the time
on task, the number of errors a participant makes, the
number of problems encountered, and the severity of those problems. Severity can be measured in different ways, such
as how successful a participant was in solving the problem
or the time on task (i.e., how long it took the participant
to solve the problem).
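
Where task sessions are logged electronically, these measures can be tallied automatically. The Python sketch below is purely illustrative: the TaskSession fields, the 300-second threshold, and the severity rule are assumptions chosen for the example, not prescriptions from the literature.

from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskSession:
    participant: str
    task: str
    seconds_on_task: float   # time on task
    errors: int              # number of errors the participant made
    problems: int            # number of problems encountered
    solved: bool             # whether the participant solved the task

def severity(session: TaskSession, time_limit: float = 300.0) -> str:
    # Rate severity from success and time on task (illustrative thresholds).
    if not session.solved:
        return "high"
    return "medium" if session.seconds_on_task > time_limit else "low"

def summarize(sessions: list[TaskSession]) -> dict:
    # Aggregate the measures usability practitioners commonly report.
    return {
        "mean_time_on_task": mean(s.seconds_on_task for s in sessions),
        "total_errors": sum(s.errors for s in sessions),
        "total_problems": sum(s.problems for s in sessions),
        "completion_rate": sum(s.solved for s in sessions) / len(sessions),
    }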

Recruit Participants
Carefully profiling Web site users and recruiting participants is often one of the more challenging and time-consuming tasks in usability testing. Ideally, a quality list of the intended user population, a random sample drawn from it, and enough recruits for statistical tests provide generalizable data. The lack of valid user lists and limited budgets may necessitate purposive sampling, however, which limits the number of participants and reduces the generalizability of test results.
For purposive sampling, a series of screening ques-
tions is generated to recruit participants as close to the
intended user profile as possible. Researchers can recruit
participants themselves, use marketing firms, or use spe-
cial interest groups and local nonprofits. Marketing firms
may charge $75 to $150 or more per participant, whereas
special interest groups and nonprofits will recruit participants as a fund-raising activity for $100 or more.
Take care to ensure that such groups recruit participants
fitting the profile of the Web site users.
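
As a purely illustrative aid, the short Python sketch below shows how screening answers might be checked mechanically against an intended user profile; the questions, field names, and acceptance criteria are invented for the example and would need to be replaced with the profile of the actual Web site users.

def matches_profile(answers: dict) -> bool:
    # Accept a candidate only if the screening answers fit the intended profile.
    return (
        answers.get("uses_web_weekly", False)
        and answers.get("age", 0) >= 18
        and answers.get("role") in {"student", "researcher", "librarian"}
    )

candidates = [
    {"name": "A", "uses_web_weekly": True, "age": 34, "role": "librarian"},
    {"name": "B", "uses_web_weekly": False, "age": 22, "role": "student"},
]
recruits = [c for c in candidates if matches_profile(c)]   # keeps only candidate "A"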
As part of the recruiting process, include incentives
encouraging participants to take part in the usability

testing. Often termed “honorariums,” the incentive can
range from $10 to $100 or more, depending on the intended audience of the Web site and the difficulty of recruiting participants. Honorariums are tokens of appreciation that
compensate participants for the time required to com-
plete the usability testing. If participants must travel to
a field or laboratory site, plan to cover travel and meal
costs.
Once a participant has been recruited, arrange the spe-
cific times and location of the usability testing. About
a week before the usability testing session, send parti-
cipants a letter confirming the time, date, and location of
the usability testing. Include a map, directions, and park-
ing instructions. A polite call the day before the usability
testing session reminds participants of their session.
The number of participants depends on the methodology used. Nielsen (1993), for example, suggests discount usability testing with as few participants as possible but laces his discussion with caveats. Shneiderman (1998) labels discount usability testing "quick and dirty." Low numbers can produce erroneous information if the participants do not represent the intended users. If usability testing focuses on comparing different designs, different organizations of a Web site, or other features, recruit enough participants to run inferential statistical tests. Rubin (1994) suggests a minimum of 10 to 12 participants per condition, and social scientists often recommend 15 to 20 participants for between-subjects comparisons. Within-subjects designs can use 4 to 10 participants per group, but setting up and running the evaluation is more difficult.
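
To make the recruiting arithmetic concrete, the hypothetical Python helper below turns these rules of thumb into a rough target; the 10% allowance for no-shows is an added assumption, not part of the cited guidance.

def recruits_needed(conditions: int, design: str, no_show_rate: float = 0.1) -> int:
    # Estimate a recruiting target from the rules of thumb discussed above.
    if design == "between":
        per_condition = 15            # 15-20 per condition (social-science guidance)
        total = conditions * per_condition
    else:                             # within-subjects: each person sees every condition
        total = 10                    # 4-10 participants per group
    return round(total * (1 + no_show_rate))

print(recruits_needed(conditions=3, design="between"))    # 50 with the 10% allowance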
For verbal protocol analyses, a practical approach is to pretest the methodology with three to six participants, identify the problems common to all participants, and correct those problems. Once the major problems are corrected, pretest again, identifying any remaining recurring problems, before conducting the full verbal protocol analysis with a larger number of participants.