simulations but include measures such as total procedure time, idle time in seconds,
number of movements, total path length, and average speed individually for left and
right instruments. Any task complications, such as bleeding or tissue damage, are
also reported. Task-specific metrics, such as energy application or the number of
correctly tied knots, are shown where applicable. An attempt can be compared with
previously completed attempts using the displayed learning curve.
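The motion metrics listed above can be derived from sampled instrument-tip positions. The following sketch is illustrative only (the Lap Mentor's internal computation is not public) and assumes positions are sampled at a fixed rate:

```python
import math

def motion_metrics(positions, sample_rate_hz):
    """Compute total path length and average speed from a chronological
    list of (x, y, z) instrument-tip samples, one per instrument.

    Illustrative sketch only; the simulator's actual algorithm and
    units are not published.
    """
    path_length = 0.0
    # Sum straight-line distances between consecutive samples.
    for p0, p1 in zip(positions, positions[1:]):
        path_length += math.dist(p0, p1)
    duration_s = (len(positions) - 1) / sample_rate_hz
    avg_speed = path_length / duration_s if duration_s > 0 else 0.0
    return path_length, avg_speed

# Example: tip moves 1 mm per sample along x, sampled at 10 Hz.
samples = [(i * 1.0, 0.0, 0.0) for i in range(11)]
length, speed = motion_metrics(samples, 10.0)
# length == 10.0 mm over 1.0 s, so speed == 10.0 mm/s
```

Computing the metrics separately for the left and right instrument, as the score sheet does, would simply mean running this over each instrument's own sample stream.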
The proficiency scoreboard displays a check mark if the task was performed at a
proficient level, along with the best score (expressed as a percentage), a star if the
required number of consecutive or nonconsecutive attempts was completed (this
quantity is set by an administrator), the total number of attempts, and the required
skill level. Clicking on an entry reveals more detailed information, such as total
time, path length, accuracy, and additional task-specific metrics.
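The scoreboard rules described above can be sketched as a simple summary over a task's attempt history. This is a hypothetical reconstruction (field names and the nonconsecutive-attempt rule are assumptions, not the simulator's API):

```python
def scoreboard_entry(attempt_scores, proficiency_threshold, required_attempts):
    """Summarize one task's attempt history as a scoreboard row.

    attempt_scores: percentage scores in chronological order.
    proficiency_threshold: minimum score counted as proficient.
    required_attempts: proficient attempts needed for a star
        (set by an administrator).

    Hypothetical sketch; the Lap Mentor's actual rules are not public.
    This variant counts nonconsecutive proficient attempts.
    """
    proficient = [s >= proficiency_threshold for s in attempt_scores]
    return {
        "check_mark": any(proficient),      # reached proficiency at least once
        "best_score": max(attempt_scores, default=0),
        "star": sum(proficient) >= required_attempts,
        "total_attempts": len(attempt_scores),
    }
```

For the consecutive-attempt variant, the star condition would instead test for a run of `required_attempts` consecutive `True` values in `proficient`.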
Benchmarks are set on a scale from 1 to 5. A score below 2 is rated poor and a
score between 2 and 4 average, with 4 being the set proficiency mark. A score
greater than 4 indicates an expert level, and one above 4.5 a superior level.
Benchmarks are shown only when metrics have a defined skill level. The scale is
color-coded: poor and average scores appear red and yellow, respectively, while
scores of 4 or higher appear green.
Scores can be viewed and exported to a CSV file.
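The 1-to-5 benchmark scale and its color coding amount to a small lookup, and the export is a plain CSV write. The sketch below is illustrative only (not vendor code; the exact boundary handling at 4 and 4.5 is an assumption based on the description above):

```python
import csv
import io

def benchmark_rating(score):
    """Map a 1-5 benchmark score to (category, color).

    Thresholds follow the scale described in the text; exact boundary
    behavior is an assumption.
    """
    if score < 2:
        return "poor", "red"
    if score < 4:
        return "average", "yellow"
    if score <= 4:
        return "proficient", "green"
    if score <= 4.5:
        return "expert", "green"
    return "superior", "green"

def export_scores_csv(rows):
    """Render (metric, score) pairs with their ratings as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["metric", "score", "category", "color"])
    for metric, score in rows:
        category, color = benchmark_rating(score)
        writer.writerow([metric, score, category, color])
    return buf.getvalue()
```

For example, `export_scores_csv([("total time", 4.2)])` would emit a header row plus one row rating 4.2 as expert/green.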
Another laparoscopic VR simulator with demonstrated validity is the LapVR,
created by CAE Healthcare of Sarasota, Florida. Like the Lap Mentor, the LapVR
is a single tower unit with an internal computer, flat-screen monitor, foot pedals,
and exchangeable instrument grips. The simulated camera and instruments provide
haptic feedback to the trainee.
Fig. 5.10 Representative general (a) and detailed (b) metrics from Lap Mentor score sheet. Image courtesy of 3D Systems, formerly Simbionix
5 Performance Assessment in Minimally Invasive Surgery