How Professors Think: Inside the Curious World of Academic Judgment


  1. Merton (1973, 293).

  2. Latour (1987).

  3. Guetzkow, Lamont, and Mallard (2004).

  4. Mallard, Lamont, and Guetzkow (2007).

  5. Levinson (2002).

  6. A British panelist was pleased that social significance did not figure
    prominently among the criteria that she was asked to consider. In her words,
    she was “kind of relieved that it wasn’t like so many of the British [competitions],
    with a [focus] on what kind of value for money, social usefulness, social
    significance, significance to the development of the country, etc. I thought
    they’d escaped all of those really, really well.” In the United Kingdom, since
    the beginning of the 1980s, scholars have been required to consider the social
    utility of research. A large cross-disciplinary survey of research performance
    guides the allocation of resources for all university research, in an effort to
    increase the influence and social relevance of research projects (Lamont and
    Mallard 2005). Cambridge anthropologist Marilyn Strathern has remarked
    that the diffusion of an audit culture in the academic world is difficult to
    criticize insofar as it promotes values of openness, transparency, and
    democracy, but she also notes that the government’s evaluative work results
    in greater standardization and normalization of research practices: audits
    and performance assessments posit commensuration, i.e., the need to compare
    different units by using a single standard. Thus, an audit culture has a
    direct effect on the range and diversity of research being conducted. See
    Strathern (2000). On a similar point, see Espeland and Sauder (2007).

  7. Lamont (1989); Lamont and Wuthnow (1990); Cusset (2003).

  8. Bourdieu (1988).

  9. For standards of empirical rigor as they apply to qualitative research,
    see National Science Foundation (2004). See also the 2008 National Science
    Foundation report on shared standards across the social sciences, documented
    in Lamont and White (2008).

  10. Camic and Gross (1998).

  11. Gerhard Sonnert’s 1995 quantitative study of the criteria by which
    American biologists evaluate the quality of their peers’ overall scientific
    contribution shows that annual publication productivity rate is the most
    powerful predictor (it explains 40 percent of the variance). This factor,
    together with the existence of solo-authored publications and graduate school
    prestige, explains 59 percent of the variance in quality rating.


Notes to Pages 171–187 / 281