in surgical education focused on demonstrating construct validity, which was typically among the first forms of validity cited in the research literature once a new training method became available. Construct validity is defined as the ability to distinguish between experts and novices and can be tested readily in a variety of settings. Following traditional assessments of construct validity, new curricula, simulators, and trainers are then compared to the gold standard to establish concurrent validity. Predictive validity remains the most elusive: to establish it, one must show that the teaching tool can estimate future performance, which necessitates long-term studies without participant attrition.
These definitions were based on the American Psychological Association (APA) and American Educational Research Association (AERA) 1985 Standards for Educational and Psychological Testing [2]. The Standards were revised in 1999 and now present validity as a unitary concept rather than a tripartite approach of content, criterion, and construct validity [3]. The unitary view places construct validity as the keystone, with the other forms of validity falling beneath it [4]. The 1999 Standards have since been widely supported but have yet to make their way into the majority of the MIS literature.
Both subjective and objective assessments aid in a trainee's learning; however, a wealth of technological tools now exists that can make subjective assessment less central to a training regimen. Subjective assessment is not inherently bad, but it is less dependable between raters and is expensive in terms of both money and time. Minimally invasive surgery is a growing field, yet it is populated by only a limited number of expert surgeons. Even within this small pool, there can be large discrepancies as to what critical skills should look like, or even which skills should be critical to progression.
Objective measures allow for greater accountability and precise measurement of skills, with the goal of a higher standard of patient safety; however, more research is needed to link the assessment of skills to actual increases in skill level. Being told where one falls on a scale can stimulate self-evaluation and targeted practice; however, there is a paucity of literature supporting one objective measure over another. To deliver medical training that is both safe and efficient, the medical community needs to advance its currently used, antiquated educational methods.


Laparoscopic Training and Assessment Platforms


MISTELS/FLS


Some standards have been established for MIS training and evaluation, including standardized credentialing requirements. Within laparoscopic surgery, the McGill Inanimate System for Training and Evaluation of Laparoscopic Skills (MISTELS), a physical video-box simulator, was developed in an effort to provide a standardized method for training and assessing the fundamental technical skills associated with the performance of laparoscopic procedures. The five MISTELS manual skills tasks have been shown to be reliable and valid and to provide a useful educational tool [5], which has been

