11 h and 33 min for the crowdworkers. Interestingly, the internal consistency of videos rated by experts was low, with an intra-class correlation coefficient of 0.38; this variability again highlights one of the limitations of our current system of feedback for surgical trainees. There was consistent correlation between the expert and crowdsourced ratings of videos in aggregate (R = 0.82, p < 0.001), when separated by surgeon level (R = 0.84, p < 0.001), and for task-specific assessment (R = 0.83, p < 0.001). Though there were several limitations to this study, it suggests that large-scale evaluation may indeed be generalizable beyond dry lab simulation tasks to live human surgical procedures [51].
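To make the agreement statistics above concrete, the following is a minimal sketch, in Python, of how a Pearson correlation between aggregated expert and crowd scores and a one-way intra-class correlation coefficient among an expert panel might be computed. The rating values, panel size, and variable names are illustrative assumptions and do not reproduce the study's data or analysis code.

```python
# Minimal sketch of the agreement statistics discussed above.
# All scores below are hypothetical; the original study's analysis is not reproduced here.
import numpy as np
from scipy import stats

# Hypothetical per-video scores: each entry is one performance video.
expert_means = np.array([18.2, 22.5, 15.1, 24.0, 19.8, 21.3])  # mean expert rating per video
crowd_means = np.array([17.5, 23.1, 14.8, 23.4, 20.2, 21.9])   # mean crowdworker rating per video

# Pearson correlation between aggregated expert and crowd ratings
# (analogous in form to the reported R = 0.82, p < 0.001).
r, p_value = stats.pearsonr(expert_means, crowd_means)
print(f"Expert vs. crowd correlation: R = {r:.2f}, p = {p_value:.3g}")

def icc_1_1(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for a (videos x raters) matrix of scores."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-video and within-video mean squares from one-way ANOVA.
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical expert panel: six videos, each rated by three experts.
expert_panel = np.array([
    [18, 19, 17],
    [23, 21, 24],
    [15, 16, 14],
    [24, 23, 25],
    [20, 19, 21],
    [21, 22, 20],
], dtype=float)
print(f"Expert intra-class correlation, ICC(1,1): {icc_1_1(expert_panel):.2f}")
```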
The growing body of literature on the efficiency, cost-effectiveness, and parity of large-scale feedback with expert feedback for a wide range of technical tasks and surgical procedures has garnered the interest of the entrepreneurial world. Emerging companies such as C-SATS, Inc. (Seattle, WA) have capitalized on crowd-based wisdom to provide online platforms for surgical skills evaluation. Such technology is increasingly utilized for a multitude of surgical procedures across the fields of gynecology, urology, general surgery, orthopedics, and reconstructive and plastic surgery. The promise of such large-scale evaluation is also gaining recognition among educational leaders across surgical subspecialties, and it is beginning to be explored in the development of nationally standardized technical skills curricula as a means of validating new educational material [60]. Though several barriers remain to the widespread adoption of crowdsourcing for technical skills evaluation, including surgeon “buy-in,” the requirement for intraoperative video capture capabilities, and the need for a cultural shift in residency education [61], the current body of literature suggests that it carries great potential for rapid and widespread use across surgical training programs.


Models for the Integration of Crowd-Based Evaluation
in Surgical Skills Education


The early application of crowdsourced technology to surgical skills development has largely utilized Amazon Mechanical Turk workers and examined surgical task performance videos in a dry lab simulation setting using a variety of predefined laparoscopic, robotic, and procedural tasks, including laparoscopic and robotic peg transfer, suturing, and intracorporeal knot tying [27, 47, 50, 52, 54]. In the simulation setting, the primary value of large-scale feedback lies in its objective, prompt, and cost-effective assessment of trainees’ technical skills. Crowdsourcing has been proposed as a mechanism to allow training programs to more efficiently identify those trainees who lag behind in basic technical skills early in their training and provide a consistent means of feedback to facilitate rapid remediation [27]. For other trainees, feedback from crowds may facilitate the acquisition and mastery of basic technical skills by providing the necessary feedback critical to successful motor learning and deliberate practice [12, 13].
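As a concrete but hypothetical illustration of this model, the short Python sketch below aggregates crowdworker scores for each trainee and flags those falling below an assumed benchmark for early remediation. The trainee identifiers, scores, and cutoff are illustrative assumptions rather than values from any published curriculum; a program would set its own benchmark from its chosen assessment instrument.

```python
# Minimal sketch of using aggregated crowd scores to flag trainees for early remediation.
# Trainee names, scores, and the cutoff are hypothetical; programs would define their own
# benchmark (e.g., from historical distributions on their chosen rating instrument).
from statistics import mean

# Crowdworker ratings for each trainee's simulation videos (illustrative values).
crowd_ratings = {
    "trainee_A": [18, 20, 19, 21],
    "trainee_B": [12, 14, 13, 12],
    "trainee_C": [22, 23, 21, 24],
}

REMEDIATION_CUTOFF = 15  # assumed benchmark score, not a published standard

for trainee, scores in crowd_ratings.items():
    avg = mean(scores)
    status = "flag for early remediation" if avg < REMEDIATION_CUTOFF else "on track"
    print(f"{trainee}: mean crowd score {avg:.1f} -> {status}")
```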

