Table 6.3 Summary of current studies evaluating the application of crowd-based evaluation of surgical skills

| First author | Year | Task performers | Task | Evaluation mechanism | Average time to feedback | Rating agreement between crowds and experts | Compensation |
|---|---|---|---|---|---|---|---|
| Chen | 2014 | 1 "above average" performer | Robotic knot tying | GEARS^a: depth perception, bimanual dexterity, efficiency, global domain; verbal comments | Turk workers: 5 days; Facebook users: 25 days; Surgeons: 24 days | Similar ratings among 409 Turk workers, 67 Facebook users, and 9 teaching surgeons | $1.00/HIT^d |
| Holst | 2015a | 3 urology residents (PGY2, PGY4, PGY5); 2 urology faculty | Fundamentals of laparoscopic surgery (FLS) intracorporeal suturing | GEARS^a: depth perception, bimanual dexterity, efficiency | Turk workers: 2 h, 50 min; Surgeons: 26 h | Cronbach's α = 0.91^2; r = 0.93 | $0.50/HIT^d |
| Holst | 2015b | 12 surgeons of varying robotic surgical experience | Live porcine robotic-assisted urinary bladder closures | GEARS^a: bimanual dexterity, depth perception, efficiency, force sensitivity, robotic control | Turk workers: 4 h, 28 min; Surgeons: 14 days | Cronbach's α = 0.93^2; r = 0.91 | $0.75/HIT^d |
| White | 2015 | 49 surgeons: 25 urology, general surgery, and obstetrics and gynecology PGY1–PGY6 trainees; 24 faculty surgeons | Robotic pegboard transfer and suturing task | GEARS^a: depth perception, bimanual dexterity, efficiency | Turk workers: 8 h, 52 min for suturing task; 108 h, 48 min for pegboard task | Pegboard task: Cronbach's α = 0.84; Suturing task: Cronbach's α = 0.92 | $0.25/pegboard task; $0.50/suturing task |

(continued)
6 Crowdsourcing and Large-Scale Evaluation