evaluations are important factors in generating such an assessment. Mellinger and
his colleagues compared educational assessment to work done by early astronomers
in which the accuracy of each individual measurement was of course important, but
only by combining multiple measurements from multiple observers over time could
a truer and more complete picture of the universe and more accurate extrapolations of the
future be made [9]. Although this is not a perfect analogy, important comparisons
can be instructive. Every individual measurement of any kind is hindered by a vari-
ety of factors that prevent perfect accuracy. In the case of astronomy, these factors
included the instruments used, atmospheric conditions, and human error on the part
of the astronomer, among others. Similarly, in the operating room, human factors
(outside stressors, fatigue), the environment (case complexity), and assessment
instruments will affect the accuracy of each assessment performed. Fortunately,
there is a growing body of research to help refine the practice of assessment to pro-
vide high-quality data efficiently in the midst of a busy practice.
Historically, there may have been variable feedback and coaching within the
operating room, but recorded assessments happened infrequently, typically as
annual or semiannual evaluations asking each faculty member to reflect on their
time with the resident. The individual evaluations were then grouped into a single
report for each resident. Using the framework above, this method has several inher-
ent problems. In regard to the quality of the assessment itself, summarizing 6 months
of operative performance into a single account limits its ability to differentiate
among many variable operations and surgical skills. Recall bias will typically bring
to mind either the most recent experiences with a resident or particularly positive or
negative experiences, and previous research has demonstrated the negative impact
of delay in completion of operative assessments [10]. As much as these flaws affect
each individual evaluation, the small number of assessments generated by this
method is probably a more important limitation. Because only a handful of assessments
are spread across a resident's tenure, it is much more difficult to determine whether
the evaluations truly represent the resident's surgical competence or are merely outlying measurements.
Whenever measurements of any kind are recorded, the more data points there are,
the easier it is to accurately determine the true distribution and to identify the outliers.
More data also make it possible to determine what led to outlying measurements and to
refine the measurement tool itself. A larger number of data points is especially important when trying
to extrapolate future performance, which is the ultimate goal of operative assessment.
Research has demonstrated the importance of collecting a sufficient number of
evaluations to accurately portray a resident's operative performance [11].
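As a rough illustration of why the number of assessments matters (a simplified sketch, assuming each rating is an independent, unbiased estimate of the resident's true skill with standard deviation $\sigma$), the uncertainty of the averaged score shrinks with the square root of the number of evaluations $n$:

$$\mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}$$

Under this assumption, collecting four times as many evaluations roughly halves the noise around the estimate of true performance, which also makes genuinely outlying ratings easier to recognize.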
Infrequent evaluations by their nature also minimize the ability of the assessment to
identify struggling residents or act as a tool for residents to guide their own improve-
ment. These infrequent summative assessments have served as records of residents
meeting minimum requirements, but not much more. For all of the potential benefit
offered by more frequent evaluations, actually implementing these evaluations and
collecting and processing their data have been difficult without applying more
recent technological advances. In the midst of a busy surgical practice, the time and
inconvenience required by paper evaluations would be difficult obstacles to over-
come for many training programs. Fortunately, smartphones and internet-based

