
Quality, not Quantity, in the US Academy.

‘The whole business of peer-reviewed journals has no effect on the external world and is just a Rube Goldberg machine designed to get people tenure.’

James C. Scott (2007: 385)

‘Accountability has turned to . . . bean-counting.’

Chester E. Finn, Jr., former US Assistant Secretary of Education, on current schools policy (quoted in Dillon 2010)

Common ‘wisdom’ in Europe and the UK has it that monitoring devices—the Research Assessment Exercise [1] and output measurements developed at various European universities—are modeled after academic evaluations in the US. Nothing could be further from the case!

I say this as an American involved in US higher education since the mid-1970s (graduate degrees at Harvard and MIT, faculty member at various ranks, including department head, at Boston College, Boston University, Harvard Graduate School of Education, Northeastern University, and California State University-Hayward; in Amsterdam since 2005). As there are so many institutions of higher education in the US, of four main types—research universities, teaching universities, research-teaching colleges, and community colleges (judged by their purposes; a condensed version of the Carnegie Foundation for the Advancement of Teaching’s categories)—and as there is no single, central body mandating policies and procedures, it is not easy to make general statements about ‘US universities.’ To make sure that I wasn’t invoking ‘Garden of Eden’ memories, I surveyed a dozen US colleagues at a variety of research universities, of various ‘tiers,’ dispersed across the country, limiting myself to political science departments. These award degrees at all three levels: Bachelor’s, Master’s, and PhD. From the replies, it in fact looks as if the US lags far behind Europe on this score, although the troubling news is that some places appear to be catching up to the reporting and monitoring requirements common on this side of the Atlantic.

Let me compare my experience here with US university requirements, as my colleagues and I have experienced them. Here, as elsewhere in Europe and in other countries in its sphere of influence, departments, faculties, universities, and/or national ‘research schools’ create tiered lists of journals. Scholars are then awarded points, in one fashion or another, for publishing in A- or B-list journals, and penalized for publishing in lesser ones. A junior colleague at another university, in a different country, pulled his paper from a prospective special issue I am co-editing because the journal we planned to submit it to ranks as a C- or D-tier journal on his university’s list, and publishing there would hurt him in his tenure evaluation. The ultimate in extra-disciplinary regulation of scholarship is the widespread perception in the UK (at least in organizational studies) that books do not count for RAE (Research Assessment Exercise) points (as I was told when applying for a position there in 2004). At this point, US universities have no such lists—although within disciplines, certain journals and book publishers have ‘better’ reputations than others—and books ‘count.’

Moreover, evaluation of a US scholar’s publication record is tied to tenure (the sixth year after appointment; there may be a less formal third- or fourth-year evaluation, anticipating tenure review and allowing for mid-course correction of any problems identified) and promotion reviews, following schedule guidelines instituted decades ago by the American Association of Universities and widely adopted by member institutions. But these reviews take into account the full spectrum of published materials. Evaluation is based on being an active scholar—giving papers, reviewing manuscripts, etc.—on teaching, and on involvement in faculty governance, the so-called ‘community service’ component (which usually amounts to serving on committees, from the departmental to the university level). Publication in ‘lesser’ journals would be balanced by publication in ‘higher’ ones, allowing scholars to publish in newer, hence lower- or unranked, journals as well as in specialized ones in the author’s particular field. Review committees consider the journal’s visibility, editorial board membership, and its standing in the scholar’s specific research field (e.g., public policy discourse analysis), among other factors. If a scholar can make a good case for why an article was placed in a particular journal, publication in a variety of outlets is possible.

As far as output expectations are concerned, US departments have not, on the whole, committed to numbers in writing, going instead with a ‘general sense’. Across the responses I received, this averages out to a handful of articles and a book for tenure—or fewer articles if there is a book—but no ‘per year’ measure. Several colleagues emphasized that their departments look for quality over quantity.

Reporting is also tied to ‘merit pay’ evaluations where that is practiced (typically in institutions with unionized faculty [i.e., staff]), but this is commonly done through an updated CV rather than a separate, detailed report, and most US scholars regularly update their CVs in any event, as it is often requested and, these days, typically posted online. So this is not additional work.

Annual reporting practices vary widely, encouraged in some universities, required in others, and elsewhere undertaken by some scholars voluntarily as a way of keeping colleagues, department heads, and/or deans updated on their activities. When done, these reports are typically for PR purposes, giving a dean or a university president ‘bragging rights’ about her or his ‘fine faculty.’ But one reports on works in progress, works in press, teaching improvements, grants, awards, and other accomplishments, not just on publications—and no one asks for the ISSN of journals, nor is there a word limit below which an essay does not count! The kinds of statistical reports I have seen on this side of the Atlantic are rarely, if ever, seen in the US. Instead, department heads keep their deans up to date on local ‘issues,’ especially those with budgetary or personnel implications. The one worrisome sign is the advent of an expensive, private, for-profit company called Academic Analytics, founded in 2005, which measures faculty activities in ways not transparent to scholars at campuses that have purchased its services. It is not clear where the data come from or how they are being measured; scholars are not personally involved in the reporting. This is rather different from the annual U.S. News and World Report rankings, which are reputational, based on peer reports within each of their categories, and are intended as information for prospective student applicants.

Push-back in the US against management-initiated, bureaucratically driven measures of assessing scholarly quality is beginning, largely directed toward citation-counting practices. In another discipline, a leading scholar, writing in one of the field’s two top journals, says that

to create systems that pressure management scholars to publish in a particular subset of journals . . . would be particularly detrimental if it were to discourage management scholars from active participation in interdisciplinary work at a time when the emphasis [in the discipline] is on problem-centered work and the breakdown of disciplinary/departmental structures of the past for organizing work and the conduct of science. (Ilgen 2007: 509)

As Scott (2007: 385), pushing back in other ways, notes, in citation indices, self-citation counts; there are scholars who agree ahead of time to cite one another (i.e., they game the system); negatively critical citations count; books are not included; and English-language publications are privileged.

To date, the research-regulation concern most on US academics’ minds has been IRBs [2], rather than output measurement exercises. Although no scholar would (claim to) want to produce research unethically, the difficulty some social scientists have with these regulatory practices concerns their suitability for anything other than an experimental research design (Yanow and Schwartz-Shea 2008). Although negotiating with one’s campus’s institutional review board for permission to conduct field research can be prickly, in the end one recognizes that one is dealing with one’s colleagues, who are volunteering their time (as ‘community service’) for a shared goal: to make sure that human subjects are treated humanely. That is, IRB policies are about much more than protecting human subjects’ data privacy! This is quite a different process, and practice, from the monitoring and controlling of research output as practiced in EU and UK universities, whose outcome often keeps scholars from pursuing publication activities appropriate to their research: not publishing in the journals where the scholarly conversation on their topic is taking place, because those journals are less highly ranked; competing for air-space against larger numbers of scholars submitting to the same few journals, rather than seeking outlets in respectable but newer journals, where a newer scholar’s work might be published more quickly because there is less of a backlog and which, in this electronic day and age, still garner visibility and are searchable online; not developing book-length arguments, necessary for some topics that cannot be treated well in journal-length manuscripts. In short, this is a set of practices that are far from humane in their enactment and prosecution.

Illustration: daveypea, “P1020988”, 6.5.2007, Flickr[1] (Creative Commons[2] license).

Endnotes:
  1. Flickr: http://www.flickr.com/photos/daveyp/2335911376/
  2. Creative Commons: http://creativecommons.org/licenses/by-nc-sa/2.0/deed.en
