Because I am committed to preserving the anonymity of panelists and to removing information that could lead to their identification, I do not provide a detailed analysis of the ways in which their embeddedness within networks may shape their evaluations. Nor do I provide specific information concerning how distinctive organizational features influence evaluation. Per my agreement with the participating organizations, the study explicitly does not concern the functioning of specific funding agencies. Instead, I treat the agencies as entry points to the world of peer review, channels through which to tap the discourse on excellence that prevails in American universities and to provide instantiations of that discourse. In a separate analysis, focused on the criterion of originality, my colleagues and I considered differences in the evaluative criteria used across panels; we did not find significant differences.^4
Data Analysis
Interviews were tape-recorded and transcribed. I began by performing a qualitative analysis of the transcripts that was inductive and thematic in nature, using analytic matrices that enabled a systematic identification of criteria of evaluation.^5 The analysis centered on differences and commonalities between and within disciplines, focusing on the criteria used by people in the fields under study. I also paid particular attention to how people in different disciplines assessed the quality of the same proposal or candidate, as well as to how people in the same field assessed them. In addition, I considered the formal categories of evaluation and criteria provided by the funding institutions, whether and how these were used by panelists, and what other categories and criteria they used.
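By way of illustration only, an analytic matrix of this kind can be thought of as a cross-tabulation of disciplines by evaluative criteria. The short Python sketch below is mine rather than part of the study's materials; the disciplines, criteria, and coded excerpts it uses are invented assumptions meant solely to show the form such a matrix might take.

# Illustrative sketch only: a hypothetical tally of evaluative criteria
# by discipline, loosely analogous to the analytic matrices described
# above. All codes, disciplines, and data here are invented.
from collections import Counter, defaultdict

# Hypothetical coded excerpts: (discipline of panelist, criterion invoked)
coded_excerpts = [
    ("history", "originality"),
    ("history", "significance"),
    ("economics", "rigor"),
    ("economics", "feasibility"),
    ("anthropology", "originality"),
]

# Build a discipline-by-criterion count matrix.
matrix = defaultdict(Counter)
for discipline, criterion in coded_excerpts:
    matrix[discipline][criterion] += 1

# Display the matrix with criteria as columns.
criteria = sorted({c for _, c in coded_excerpts})
print("discipline".ljust(14) + "".join(c.ljust(14) for c in criteria))
for discipline, counts in sorted(matrix.items()):
    print(discipline.ljust(14) + "".join(str(counts[c]).ljust(14) for c in criteria))

Such a tabulation simply makes it easier to compare, within and across fields, which criteria recur; the interpretive work of the analysis remains qualitative.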
I also asked two research assistants to independently content-analyze the interviews using the popular software Atlas.ti.^6 This software package increases inter-coder reliability by making it possible to standardize the set of codes to be used, track the codes assigned by