Wednesday 23 September 2009

REF consultation document published

For anyone interested in how research funding is allocated (fascinating stuff, I know), a consultation document on the Research Excellence Framework (REF) is now available here. REF is the mooted replacement for the old Research Assessment Exercise (RAE), the last one of which was conducted in 2008. Enjoy...

Friday 18 September 2009

Playing the game: the impact of research assessment

Yesterday I was sent this report, produced by the Research Information Network in conjunction with the Joint Information Systems Committee, and entitled "Communicating knowledge: How and why UK researchers publish and disseminate their findings". The report used a literature review, bibliometric analysis, an online survey of UK researchers, and focus groups or interviews with researchers to look at how and why researchers put information into the public domain. Being an early-career researcher, I'm interested in this sort of thing: I know why I'm publishing and disseminating information, but it's interesting to see why everyone else is doing it. It's also interesting to see the extent to which research assessment in the UK - until recently the Research Assessment Exercise (RAE) and in future the mysterious Research Excellence Framework (REF) - influences the decisions that researchers make. What particularly struck me about the report was the number of times researchers talked about "playing games": the framework of research assessment is seen as a game to be played, with the needs of research being subordinated to the need to put in a good performance. This has important implications for the REF, in which bibliometric indicators are likely to play an important role.

The key point of the report is that there is some confusion among researchers about what exactly it is they're supposed to be doing. There are conflicting and unclear messages from different bodies about what sort of research contributions are valued. The perception is that the only thing that really counts in terms of research assessment is peer-reviewed journal articles. Other contributions, such as conference proceedings, books, book chapters, monographs, government reports and so on, are not valued. As a result, the proportion of journal articles relative to other outputs increased significantly between 2003 and 2008. A couple of comments by researchers quoted in the report (p.15):

[There is] much more emphasis on peer reviewed journals … Conferences, working papers and book chapters are pretty much a waste of time … Books and monographs are worth concentrating on if they help one demarcate a particular piece of intellectual territory.


There is a strong disincentive to publish edited works and chapters in edited works, even though these are actually widely used by researchers and educators in my field, and by our students.


This is certainly the impression I get from my own field. In fact, I have been advised by senior colleagues to target high-impact journals, rather than, for example, special publications. I have never received any formal guidance on what research outputs are expected of me, but the prevailing atmosphere gives the impression that it's all about journal articles. After publishing a couple of things from my PhD, it took another three years to publish anything from my first post-doc. I worried about that: it seemed that the numerous conferences and internal company reports and presentations I produced over that time counted for nothing career-wise.

The report makes it clear that, in the case of the RAE, it is perception rather than reality that caused the problem: the RAE rules meant that most outputs were admissible, and all would be treated equally. But it's perceptions that drive the way researchers respond to research assessment. Clearer guidance is needed.

An interesting point brought up by the report is how, when there is more than one author for a journal article, the list of authors is arranged. In my field, authors are typically listed in order of contribution, so I was surprised to find that this is by no means always the case. In some fields, especially in the humanities and social sciences, authors are commonly listed alphabetically. In some cases, the leader of the research group is listed first, in other cases last. And there are various mixtures of listing by contribution, grant-holding and alphabetic order. There is even a significant minority where papers based on work done by students have the student's supervisor as first author! This means that there is no straightforward way of apportioning credit to multiple authors of a paper, something that David Colquhoun has already pointed out. This is a huge problem for any system of assessment based on bibliometrics.
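To make the problem concrete, here is a quick sketch (my own toy example, not something from the report) of three plausible credit-apportionment schemes applied to the same four-author paper. The author names and the weightings are invented purely for illustration; the point is that the schemes disagree, and none of them knows which convention the author list actually follows.

# Toy illustration of why author order matters for bibliometric credit.
# Three illustrative apportionment schemes applied to one four-author paper.

def equal_split(authors):
    """Every author gets the same share of the credit."""
    return {a: 1 / len(authors) for a in authors}

def position_weighted(authors):
    """Credit falls off with list position: weights 1, 1/2, 1/3, ... (normalised)."""
    weights = [1 / (i + 1) for i in range(len(authors))]
    total = sum(weights)
    return {a: w / total for a, w in zip(authors, weights)}

def first_author_only(authors):
    """All credit goes to the first-listed author."""
    return {a: (1.0 if i == 0 else 0.0) for i, a in enumerate(authors)}

authors = ["Ahmed", "Brown", "Chen", "Davies"]

for scheme in (equal_split, position_weighted, first_author_only):
    shares = scheme(authors)
    print(scheme.__name__, {a: round(s, 2) for a, s in shares.items()})

# If the list is alphabetical, or the supervisor is listed first by convention,
# none of these schemes assigns credit in proportion to actual contribution,
# and the "right" scheme differs from field to field.

Run it and the three schemes give three different answers for the same paper; an assessment system has to pick one, and whichever it picks will misrepresent at least some fields.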

The report also examines how researchers cite the work of other people. Other researchers' work should be cited because it forms part of the background of the new research, because it supports a statement made in the new paper, or as part of a discussion of how the new paper fits into the context of previous research. Crucially, this includes citing work with which the authors disagree, or that is refuted or cast into doubt in the light of the new work (p.30):

Citing somebody often indicates opposition / disagreement, rather than esteem and I am as likely to cite and critique work that I do not rate highly as work I value.

So any system that relies on bibliometric indicators is likely to reward controversial science as much as good science (not that those categories are mutually exclusive, but they don't completely overlap either).

Researchers are perfectly clear that a system based on bibliometrics will cause them to change their publication behaviour: 22% will try to produce more publications, 33% will submit more work to high-status journals, 38% will cite their collaborators' work more often, while 6% will cite their competitors' work less often. This will lead to more journal articles of poorer quality, the decline of perfectly good journals that have low "impact", and the corruption of citation behaviour. In general, researchers aren't daft, and they've clearly identified the incentives that would be created by such a system.

The report presents a worrying picture of research, and scientific literature, distorted by the perverse incentives created by poorly thought-out and opaque forms of research assessment. It can be argued that scientists who allow their behaviour to be distorted by these incentives are acting unprofessionally: I wouldn't disagree. But for individuals playing the game, the stakes are high. Perhaps we ought to be thinking about whether research is the place for playing games. It surely can't lead to good science.

Wednesday 16 September 2009

I get e-mail

Got this today, sent out to academic and academic-related staff in my department:

Dear All,

Please find attached NSS results by Faculty, School and JACS Level 3 subjects. Also included is a mapping document to accompany the JACS report to assist you in understanding which programmes of study are included under each heading. The Word document, 'APPENDIX 06-Surveys - NSS Table EPS.doc' shows the data that will be included in the OPR documentation.

Please note that the data is FOR INTERNAL USE ONLY.


I have no idea what NSS, JACS or OPR mean, so this e-mail makes no sense to me whatsoever. I seem to be getting an increasing number of these things, all with acronyms I've never heard of.

What happens when you don't have peer review

Normally, when a scientific paper is submitted, it is subjected to scrutiny by two or more scientists working in a similar field. Only if the paper gets through this peer review process, and if corrections required by the reviewers have been made, does the paper actually get published. This process is by no means perfect: bad papers slip through, and good papers get blocked by over-zealous reviewers. But there are two examples this week of what can go wrong when papers are not peer reviewed.

Firstly, Ben Goldacre and Respectful Insolence discuss the case of two papers, recently published in Medical Hypotheses, that were so bad they were withdrawn by publishers Elsevier. Given that Elsevier happily publishes Homeopathy, the fanzine of the Faculty of Homeopathy, this should give pause for thought. Medical Hypotheses is a bit of an oddity: it does not send papers out for peer review. Rather, they are approved solely by the editor of the journal, one Bruce Charlton. It appears that many papers are approved within days, sometimes hours, of being submitted, suggesting that there is very little scrutiny of the papers.

The two papers are one by Duesberg et al., and one by Ruggiero et al., both of which seek to deny the magnitude of the AIDS crisis. Seth Kalichman of the Denying AIDS blog did an experiment by sending the manuscript out for blind peer review. All three "reviewers" rejected the manuscript on the basis that it was filled with logical flaws and mis-representations of the published literature.


Elsevier says:

This Article-in-Press has been withdrawn pending the results of an investigation. The editorial policy of Medical Hypotheses makes it clear that the journal considers "radical, speculative, and non-mainstream scientific ideas", and articles will only be acceptable if they are "coherent and clearly expressed." However, we have received serious expressions of concern about the quality of this article, which contains highly controversial opinions about the causes of AIDS, opinions that could potentially be damaging to global public health. Concern has also been expressed that the article contains potentially libelous material. Given these important signals of concern, we judge it correct to investigate the circumstances in which this article came to be published online. When the investigation and review have been completed we will issue a further statement. Until that time, the article has been removed from all Elsevier databases. The Publisher apologizes for any inconvenience this may cause. The full Elsevier Policy on Article Withdrawal can be found at http://www.elsevier.com/locate/withdrawalpolicy.

The second example is a paper published in Proceedings of the National Academy of Sciences, amusingly known as PNAS. This is a venerable and respected journal, but it has a little-known wrinkle: members of the National Academy of Sciences are allowed to bypass formal peer review by "communicating" papers for other researchers. This is how the PNAS "Information for Authors" page describes the process:

An Academy member may “communicate” for others up to 2 manuscripts per year that are within the member's area of expertise. Before submission to PNAS, the member obtains reviews of the paper from at least 2 qualified referees, each from a different institution and not from the authors' or member's institutions. Referees should be asked to evaluate revised manuscripts to ensure that their concerns have been adequately addressed. The names and contact information, including e-mails, of referees who reviewed the paper, along with the reviews and the authors' response, must be included. Reviews must be submitted on the PNAS review form, and the identity of the referees must not be revealed to the authors. The member must include a brief statement endorsing publication in PNAS along with all of the referee reports received for each round of review. Members should follow National Science Foundation (NSF) guidelines to avoid conflict of interest between referees and authors (see Section iii). Members must verify that referees are free of conflicts of interest, or must disclose any conflicts and explain their choice of referees. These papers are published as “Communicated by" the responsible editor.
The paper in question was submitted via this communication process. It was written by Donald Williamson, a retired academic from the University of Liverpool, and suggests that butterflies and caterpillars originated as different species:

I reject the Darwinian assumption that larvae and their adults evolved from a single common ancestor. Rather I posit that, in animals that metamorphose, the basic types of larvae originated as adults of different lineages, i.e., larvae were transferred when, through hybridization, their genomes were acquired by distantly related animals.


The paper has been criticised on the basis that it contains no supporting data for what is, after all, a fairly extraordinary hypothesis. Not only that, but it turns out that it had previously been rejected by seven different journals.

In both Medical Hypotheses and PNAS, the defence seems to be that there needs to be some mechanism by which speculative ideas that go against current mainstream opinion can be presented and discussed. This seems fair enough, but is anything gained by publishing hypotheses that are not supported by any data, or papers that are logically flawed and contain mis-representations? In both these cases, it seems that the papers would not have been published had they been reviewed properly.