developing a simulator used for credentialing urologists [6]. These content specifications will be used as a guide for subsequent evaluation during the validation process.
Following delineation of specifications concerning the content and the intended use of simulator scores, specifications regarding the tasks and scoring that ultimately lead to performance assessment should be defined. Task specifications should cover all aspects of the construct, describing the domain of activities and the vital dimensions of performance addressed by each task. In addition, the number of tasks, the duration of each task, and how the user interacts with each task should be detailed.
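For illustration only, the kind of information such a task specification might capture can be sketched as a simple structured record; the fields and example tasks below are hypothetical, not a published schema.

```python
from dataclasses import dataclass

@dataclass
class TaskSpecification:
    """Hypothetical record of one task's specification; field names are illustrative."""
    task_id: str
    activity_domain: str        # domain of activities the task samples
    performance_dimension: str  # vital dimension of performance the task covers
    duration_minutes: int       # expected duration of the task
    interaction_mode: str       # how the user interacts with the task

# A specification document would enumerate the full set of tasks, e.g.:
specs = [
    TaskSpecification("T1", "cystoscopic navigation", "camera control", 10, "scope and camera"),
    TaskSpecification("T2", "intracorporeal suturing", "bimanual dexterity", 15, "two needle drivers"),
]
print(len(specs), "tasks specified")
```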
Scoring specifications describe how each item or task is scored and how the individual item scores are combined to give one overall score. The scoring process is either analytic or holistic, and both are based on clear performance criteria. The holistic scoring procedure generates only one overall score based on the performance criteria. The analytic scoring procedure generates an independent score for each item of the performance criteria in addition to the overall score. While the analytic scoring procedure could provide insight into trainees' strengths and weaknesses, the holistic scoring procedure could be used whenever the skills being assessed are highly interrelated and only an overall judgment is required.
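The distinction between the two procedures can be shown in a few lines; the performance criteria and scores below are invented for illustration.

```python
# Minimal sketch of the two scoring procedures; criteria and scores are invented.
item_scores = {"tissue handling": 4, "economy of motion": 3, "instrument control": 5}

# Analytic: an independent score per performance criterion plus an overall score.
analytic_report = {**item_scores, "overall": sum(item_scores.values()) / len(item_scores)}

# Holistic: a single overall judgment made against the same criteria.
holistic_report = {"overall": 4}

print(analytic_report)  # per-criterion scores expose strengths and weaknesses
print(holistic_report)  # one global score when the skills are highly interrelated
```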
The scoring process can be performed either by human judges (e.g., physical simulators) or by computer algorithms (e.g., virtual reality simulators). In the case of human judges, scoring specifications should include the qualifications of the judges, how they were trained, and how scoring discrepancies and bias among judges are checked and resolved. In the case of computer algorithms, scoring specifications should include how scores are reproduced by the algorithms.
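One common way to check discrepancies between two judges is an agreement statistic such as Cohen's kappa; the sketch below computes it from scratch for two hypothetical raters (the judgments are invented, and kappa is one option among several).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same performances categorically."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail judgments on ten performances by two trained judges.
judge_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
judge_2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]

print(f"kappa = {cohens_kappa(judge_1, judge_2):.2f}")  # values near 1 indicate strong agreement
```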
The degree to which the performance assessment, guided by both the task and scoring specifications, reflects the domain or construct to be measured should be supported by both theoretical and empirical evidence. This is particularly important for the validation process [6].
By the end of this step, a formal “requirements document” is generated, a “technology budget” is identified, and price sensitivity is assessed using the Simulator Value Index (SVI) [10]. The SVI combines 17 parameters in an Excel® spreadsheet and can be used to assess the simulator purchase process across stakeholders, institutions, and countries. This step is important for market reassessment, as simulation development is expensive and the return on investment is slow.
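Purely to illustrate the shape of such a spreadsheet-style index, the sketch below computes a weighted value score over a handful of invented parameters; the actual 17 SVI parameters and their weighting are those defined in [10], not these.

```python
# Illustrative only: a spreadsheet-style weighted value index. The parameter
# names, scores, and weights are invented; the real SVI parameters are in [10].
parameters = {
    # name: (stakeholder score on a 1-5 scale, weight)
    "purchase price": (2, 0.30),
    "annual maintenance cost": (3, 0.20),
    "curriculum coverage": (5, 0.25),
    "evidence of validity": (4, 0.15),
    "vendor support": (3, 0.10),
}

value_index = sum(score * weight for score, weight in parameters.values())
print(f"value index = {value_index:.2f} / 5.00")  # compare across simulators or vendors
```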
Phase III: Development of Prototype(s) Following development and evaluation of the simulator specifications, development and verification of the items/tasks begin: simulator developers create an item/task pool consisting of all proposed items/tasks that need to be incorporated into the simulator to train/assess what it is intended to train/assess.
Thereafter, a set of items/tasks that meet the simulator specifications is chosen and pretested or reviewed for accuracy, durability, clarity, content, quality, and the presence of any construct irrelevance before being assigned task-specific descriptions and scoring rubrics. The pretest or review process is performed by reviewers who are aware of the content specifications and of the target learner population who will be trained/assessed using this simulator.
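As a rough sketch of this selection step, under the assumption that each pooled item records whether it meets the specifications and how it fared on review, the pool can be filtered as follows (the items and flags are hypothetical):

```python
# Hypothetical item pool; each item records whether it met the specification
# and the outcome of the pretest/review on each criterion.
REVIEW_CRITERIA = ["accuracy", "durability", "clarity", "content", "quality"]

pool = [
    {"id": "bladder-access", "meets_spec": True,
     "review": {c: True for c in REVIEW_CRITERIA}, "construct_irrelevance": False},
    {"id": "stone-basketing", "meets_spec": True,
     "review": {c: True for c in REVIEW_CRITERIA}, "construct_irrelevance": True},
    {"id": "legacy-task", "meets_spec": False,
     "review": {c: True for c in REVIEW_CRITERIA}, "construct_irrelevance": False},
]

# Keep only items that meet the specifications, pass every review criterion,
# and show no construct irrelevance; these then receive descriptions and rubrics.
selected = [item["id"] for item in pool
            if item["meets_spec"]
            and all(item["review"].values())
            and not item["construct_irrelevance"]]

print(selected)  # ['bladder-access']
```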
Simulator developers should perform early verification and usability studies using the target population. The analysis of data helps to identify some of the important aspects of