How Professors Think: Inside the Curious World of Academic Judgment


  1. In this sense, the book shares a kinship with the work of others who
    have been influenced by these traditions, such as Karin Knorr-Cetina and
    Bruno Latour.

  2. Bénatouïl (1999); Boltanski (2007b); Boltanski and Thévenot (1991);
    DiMaggio (1997); Garfinkel (1967); Geertz (1973); Goffman (1990);
    Thévenot (2007b).

  3. Latour and Woolgar (1979); Collins and Evans (2007). On expertise
    and controversies, see the 2003 debate in Social Studies of Science (June and
    August 2003), vols. 3 and 4; also Gieryn (1983); Abbott (2001).

  4. Cole (1978; 1992); Cole, Cole, and Simon (1981); Liebert (1976);
    Merton (1996); Mulkay (1991).

  5. Bell (1992); General Accounting Office (1994); Roy (1985). The most
    exhaustive study directly inspired by Merton is that of Stephen Cole, who
    examined the evaluation of grant proposals submitted to the National Science
    Foundation in 1975 in the fields of chemical dynamics, solid-state physics,
    and economics—see Cole (1978); Cole and Cole (1981). Cole found a low
    level of consensus among the reviewers concerning which proposals should
    be funded and little effect of most investigators’ characteristics on the success
    of the proposal. He concluded that successful proposals were those that were
    of the highest “quality.” A more recent study of the peer review process at the
    National Science Foundation, National Endowment for the Humanities, and
    National Institutes of Health supported many of Cole’s findings, but also
    concluded that “the intrinsic qualities of a proposal (such as the research design
    and the importance of the question it addressed) were important factors in
    reviewers’ scoring.” See General Accounting Office (1994). None of these
    studies, however, focuses on the question of how the peer review panelists as-
    sess the quality of the proposals (captured by categories such as “originality,”
    “significance,” “feasibility,” etc.).

  6. Ilse Hartmann and Friedhelm Neidhardt provide the foundation for a
    model of how to accomplish this (1990). Using content analysis, they study
    which categories of evaluation (for example, qualifications/reputation of the
    principal investigator, preparatory work that had already been done, and
    scientific significance of the proposal) affect funding of grant proposals
    submitted to the Deutsche Forschungsgemeinschaft—the German equivalent of the
    National Science Foundation. They find more consensus in the grant-making
    process than did Cole and show the process to be less random than his
    analysis suggested: the categories of evaluation they identify accounted for 55 per-

264 / Notes to Pages 17–18
