examining the perceptions of surgical residents and faculty regarding the degree of resident involvement in operative cases have demonstrated poor agreement between faculty and residents regarding the role and percentage of the case performed by the resident; in only 47–58% of cases was there good correlation [17, 18]. Though ACGME operative case logs provide a general assessment of operative case volume, their use to assess the overall surgical skill of trainees carries obvious limitations.
Day to day, most trainees receive constructive feedback on specific operative performances and technique from their attending surgeons to refine operative skills. Such feedback may be provided informally and directly in the moment within the operating room or, more commonly, may be presented indirectly in aggregate as part of a formalized feedback mechanism at the conclusion of a rotation. However, the quality, quantity, and formative value of this feedback may be highly variable.
Such feedback represents a single surgeon's view, may carry a significant degree of bias, and usually applies to a limited repertoire of observed surgical procedures [19, 20]. Moreover, the timing of such feedback may be significantly delayed. In one survey conducted at a large academic orthopedic surgery program, 58% of residents reported that end-of-rotation evaluations were rarely or never completed in a timely fashion, with more than 30% of such evaluations completed over 1 month after a rotation's end. Furthermore, the majority of residents and faculty members felt that such end-of-rotation evaluations were inadequate for surgical skills feedback [21].
In practice, patient outcomes might provide an additional indirect measure of a surgeon's technical skills, as technical skill has previously been linked to postoperative complications [7]. However, as trainees operate under the supervision of more experienced attending surgeons, their technical missteps are often immediately corrected and therefore may not necessarily be reflected in a patient's clinical course.
Moreover, postoperative outcomes are influenced by many other nonoperative variables, such as patient disease, ancillary therapies, and post-hospitalization care, that limit direct correlation between outcomes and specific surgical techniques. The delayed nature of such feedback also makes it challenging to link specific technical aspects of an operation directly to the overall patient outcome. Lastly, the rotation-based nature of surgical training programs does not always afford trainees the continuity of care necessary for patient outcomes to serve as a consistently useful form of feedback.
In an attempt to minimize the variability and subjectivity inherent in standard evaluative mechanisms, more structured assessment tools have been developed to help standardize and formalize feedback for specific surgical tasks (Table 6.2). These include the validated Objective Structured Assessment of Technical Skills (OSATS), the Global Operative Assessment of Laparoscopic Skills (GOALS), and the Global Evaluative Assessment of Robotic Skills (GEARS) [22–24]. These
assessment tools have largely been used to evaluate videotaped performance on simulation tasks but have also been used to assess intraoperative skills and have been correlated with patient surgical complications and outcomes [7, 25]. While these tools allow evaluators to provide uniform and more objective feedback for a surgical