
Is Peer Review Broken?

No.




Oh, you wanted more than that? Maybe some nuanced reflection on the issue of peer review? Ok, I'll give it a go, but I won't do nuance. Nuance died when Trump was elected; actually before that, but talk about a nail in the coffin.


There have been many stories floating around for years on the failings of peer review (the vetting of scientific studies by other expert scientists in the field prior to publication). These stories usually follow the publication of some study that is fundamentally flawed or unacceptable for any number of reasons. Several examples come to mind: arsenic bacteria, cold fusion, caterpillar hybridization, etc. So the questions 'Is Peer Review Broken?' and 'If So, How Do We Fix It?' come up.

For those who live in the scientific universe, you can skip the next two (blue) paragraphs; otherwise, if you want a short breakdown of the process, feel free to read them.

Once a group of scientists has made observations and gathered data, they write a story (aka a scientific manuscript). I want to note this manuscript is a story, not in the fiction sense, but in the narrative sense. The authors may not describe the experiments in the order they were conducted, because it makes a more logical narrative to describe things out of order. The authors may use their 20/20 hindsight to re-describe why an experiment was carried out, because at the time of writing the manuscript the original reason may not make sense in light of the narrative. Again, for those just looking for reasons to disparage science, I'm not suggesting authors are manipulating data or trying to obscure their findings (although there are cases of this); I'm talking about making a compelling argument to convince a skeptical audience of experts that their interpretations of the data (aka conclusions) are correct.

Ok, once the manuscript is written, revised, and edited until most if not all of the authors hate the thing, it is submitted to a scientific journal for publication. The journal assigns the paper to an editor, who decides if the paper is of sufficient rigor and interest to the readers to actually get peer reviewed. If the paper passes this hurdle, the editor sends requests to other scientists asking them to evaluate the manuscript, including the experimental approaches and the interpretations of the data. (There are some variations on this model, but most journals follow it. Some have additional levels of scrutiny, but these tend to come early in the process and not from current scientists.) Many potential reviewers decline, and the editor keeps sending out requests until, usually, at least 3 agree. The 3 reviewers then read and critique the manuscript and provide feedback to the authors and editor on the pros and cons of the manuscript; this is the PEER REVIEW component. At this point the editor makes a decision on the manuscript, which ranges from (rarely) outright acceptance, to revisions requiring only editorial changes, to revisions requiring more experiments, to outright rejection. The most common response is some kind of revision, either with or without more experiments; the authors deal with those critiques and resubmit a revised manuscript, which can then be accepted or rejected by the editor or (most commonly) sent out for re-review. (In the case of outright rejection, the authors usually revise the manuscript based on the review comments before sending it to another journal, because the same reviewers are likely to see it.)
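For the programmatically inclined, here is a toy sketch of that workflow in Python. The function names, the dictionary fields, and the three-reviewer target are my own simplifications for illustration; no journal's actual submission system works like this.

```python
# Toy model of the editorial workflow described above.
# All names and thresholds (e.g., the 3-reviewer target) are illustrative only.
import random

# Ordered from best to worst outcome for the authors.
DECISIONS = ["accept", "minor revision", "major revision (more experiments)", "reject"]

def editorial_triage(manuscript):
    """Editor decides whether the paper is rigorous and interesting enough to review."""
    return manuscript.get("within_scope", True) and manuscript.get("appears_rigorous", True)

def recruit_reviewers(candidate_pool, needed=3):
    """Keep inviting until enough reviewers agree; many decline."""
    agreed = []
    for reviewer in candidate_pool:
        if len(agreed) == needed:
            break
        if reviewer["willing"]:
            agreed.append(reviewer)
    return agreed

def peer_review(manuscript, reviewers):
    """Each reviewer returns a recommendation (here, a random one for illustration)."""
    return [{"reviewer": r["name"], "recommendation": random.choice(DECISIONS)}
            for r in reviewers]

def handle_submission(manuscript, candidate_pool):
    if not editorial_triage(manuscript):
        return "desk reject"
    reviewers = recruit_reviewers(candidate_pool)
    if len(reviewers) < 3:
        return "still hunting for reviewers"
    reviews = peer_review(manuscript, reviewers)
    # In reality the editor weighs the reviews; here we simply take the harshest one.
    return max(reviews, key=lambda rv: DECISIONS.index(rv["recommendation"]))["recommendation"]

# Example: one submission and a pool of seven potential reviewers, one of whom declines.
pool = [{"name": f"Reviewer {i}", "willing": i % 5 != 0} for i in range(1, 8)]
print(handle_submission({"within_scope": True, "appears_rigorous": True}, pool))
```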


Peer review, much like everything humans do, is a human endeavor, so it is subject to human limitations. This is not new or particularly special; it's simply a fact that humans are not robots, have biases, and some even have ulterior motives. In a simplistic sense we can say peer review is broken, because people can and do make mistakes at all the levels described above. However, by this criterion peer review has always been broken and always will be broken. But this is a stupid criterion. Let's go through how this process can fail, because once we know how it can fail, we can make recommendations for how to fix it.

1. Poor editors

This works for and against a manuscript. For you: the editor thinks your shit don't stink. They can send out your manuscript when they wouldn't send out the same manuscript from a different group. They can pick reviewers they know to be 'easy' or can write the invitation letter in such a way as to encourage a positive review. How can this latter event happen? An editor, who is an established scientist, can send one of their former graduate students/post-docs, who is now an independent scientist, the following invitation:
Dear Prof X, I have this manuscript I think would be a perfect fit for journal Y, do you have time to review it? Abstract attached below.
Do you see how the letter could affect the review? You have extreme cases like Lynn Margulis obtaining numerous reviews until she had 3 she could use to accept an atrocious paper in PNAS (ignoring all the reviews that noted the fundamental errors in the paper).

2. Poor reviewers

I've been an editor for several journals. I can give you two easy reasons why you might get a poor review. First, you, as an editor, may not be an expert in the area the manuscript addresses. You might be generally aware of the area, but you are certainly no expert, which means you likely do not know who the experts are in that field (because it's not your field). You can do PubMed searches to identify people who have published in specific areas, but you don't know them or their research. So an editor may not be obtaining 3 rigorous expert reviewers.
Second, even in your area of expertise, the researchers you know to be experts often say no when asked. This is particularly true if the journal isn't one of the top journals out there. There is little prestige in noting in your annual review that you reviewed papers for a general journal like PLoS ONE compared to Science or Nature.

3. Poor journals

There are also two versions of this. First, there are journals that will publish anything if you pay; it's their business model. You can check out Beall's predatory journal list to identify many of these. Second, there are top-tier journals that care about mass media dissemination of the work published there, which is also part of their business model (Science, Nature, I'm looking at you). The arsenic bacteria paper was published in one of these journals, as were the ENCODE papers. This is not simply a journal issue, as there were problems at all levels, but the journals actively advertised this work.

So how do we fix peer review?

Most of the discussion I've seen deals simply with reviewers, which I think is the least broken aspect of the peer review process. (It's like blaming teachers for poor student performance while ignoring income inequality and poverty.) The solutions I've seen generally revolve around identifying reviewers, which is stone cold fucking stupid. I've obtained jackass reviews on publications and wish I knew who the dumbass was, but that's kind of the point of anonymous reviewers: if I disagree with a reviewer, I cannot subconsciously or consciously screw them over on one of their papers or (more importantly) grant proposals in the future. Non-anonymous review means either that no early career scientists will review papers for fear of career suicide, or that early career scientists will review papers favorably in the hope of favorable reviews in turn going forward. Non-anonymous peer review would essentially end all the good things about peer review and solve exactly 0 of its problems.

One idea I have seen floated around is publishing the reviews, which I actually support. The reviewer remains anonymous, but also has to take some ownership of their review. This could reduce what I think have been some bullshit critiques. While the reviewer would remain anonymous, the community could see what the issues were and decide if those were reasonable (and reasonably dealt with by the authors) or unreasonable and the community could actually comment on it (because the age of social media has changed things profoundly).

How can we fix things? I like publishing the reviews along with the articles (make them available online). bioRxiv may help with this, as authors can post their original manuscripts for the world to see and compare reviewer critiques against. I personally like the idea of paying reviewers: $50 a review, not enough to cover the cost of the review but enough to provide some incentive. (I expect I spend 4 hours on every paper I review rigorously, because I check the literature, so that's not even minimum wage in some states. Some papers are so bad they can be reviewed in an hour or two, but then I wonder why the editor sent the paper out (see below).) If you review 6 papers a year, which is pretty low in my experience, you make an extra $300, which is not nothing. If you suck at reviewing, editors stop asking you, which has a financial consequence. I can see the argument that the money could allow systemic abuse, where reviewers want to appease the editors so they get more assignments, but this is essentially the amount I could make mowing a couple of lawns on a summer evening, probably in less time. (FYI, many journals make good money on the backs of free reviewers and free editors.)
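To put rough numbers on that incentive argument, here is the back-of-the-envelope math as a tiny Python snippet; the figures are just the ones quoted above (the $50 fee, my roughly 4 hours per rigorous review, 6 reviews a year).

```python
# Back-of-the-envelope math for the reviewer-payment idea above.
# Figures come from the text: $50 per review, ~4 hours per rigorous review,
# ~6 reviews per year as a fairly light load.
payment_per_review = 50   # dollars
hours_per_review = 4      # my estimate for a rigorous review
reviews_per_year = 6      # low end, in my experience

effective_hourly_rate = payment_per_review / hours_per_review
annual_bonus = payment_per_review * reviews_per_year

print(f"Effective rate: ${effective_hourly_rate:.2f}/hour")  # $12.50/hour
print(f"Extra income:   ${annual_bonus} per year")           # $300 per year
```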

One idea I have not seen floated around is making editors more accountable. Too often with my submitted manuscripts (and those manuscripts I have reviewed), the editor simply defers to the reviews and takes no responsibility. If a reviewer asks for an additional experiment, it must be done even if it has no effect on the conclusions made in the manuscript. In too many cases the editors simply pass information between the reviewers and authors.

How can we fix this? First, pay the editors. Say $3,000 a year; this is essentially the cost of publishing one article (FYI, the authors pay to publish their work in the journal). If an editor is not doing a good job, boot them and take on another one. Screw it, hire professional editors, PhD scientists, for $90,000 a year and have them cover a research area. 30 articles covers their salary (another 10-15 for benefits). How many Nature papers are biology related every week?!?! Maybe the CEOs make a little less in order to support high quality science?
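Same back-of-the-envelope treatment for the editor numbers. The $3,000 per-article cost and the $90,000 salary are the figures above; the benefits multiplier is an assumption chosen to land in the 10-15 extra articles I mentioned.

```python
# Rough math for paying professional editors, using the figures above.
cost_per_article = 3000    # dollars, roughly what authors already pay to publish
editor_salary = 90000      # a professional PhD-level editor
benefits_fraction = 0.40   # assumption: benefits add ~40% on top of salary

articles_for_salary = editor_salary / cost_per_article
articles_for_benefits = editor_salary * benefits_fraction / cost_per_article

print(f"Articles to cover salary:   {articles_for_salary:.0f}")    # 30
print(f"Articles to cover benefits: {articles_for_benefits:.0f}")  # ~12
```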

What about the journals? Well, authors should stop fighting to get their shit in glamour mags. I know scientists are under immense pressure to publish in C/N/S journals (Cell, Nature, Science), but do these journals really publish the best of the best? I don't know. What I do know is that any good study published in an open access journal is available to everyone, every-fucking-one-with-an-internet-connection!!! How many people actually peruse journals anymore as opposed to doing PubMed searches? I still subscribe to Science and/or Nature, but primarily for the news, reviews, and opinion pieces, as well as to support their policy and outreach initiatives. If you are doing quality work, it will be read, because Google. If I can find a decent Chinese restaurant in Rome online, I can find interesting articles on phenotypic diversity in microorganisms online. I would point out that the Nobel Prize-winning research on B-cells (the antibody producing cells of the body) was published in the fucking Journal of Poultry Science.

In summary, peer review is a human endeavor and subject to human foibles. Is it perfect? No. Can it be improved? Marginally. Is it the best we have? Absolutely, but with the caveat that minor improvements can be made and the acceptance that there is no such thing as perfection, simply the ongoing striving for perfection.

2 comments:

David Williams, UK said...

The solutions I've seen generally revolve around identifying reviewers, which is stone cold fucking stupid . . . If I disagree with a reviewer I cannot subconsciously or consciously screw them over on one of their papers or (more importantly) grant proposals in the future.

But if reviewers are named, and you as a future reviewer attempt to screw them over, you are damaging your own reputation. You would not be able to hide behind the anonymity that reviewers have now.

Non-anonymous reviewers means no early career scientists will review papers for fear of career suicide or early career scientists will review papers and be favorable in the hope for favorable reviews in turn going forward.

Again, the whole notion of naming reviewers means "review[ing] papers and be[ing] favorable" will be strongly inhibited. The current state of affairs means subconscious sycophancy goes unchecked because the review itself will never be scrutinised. It will disappear into the void. Meanwhile the dodgy paper from the eminent scientist at the prestigious institution will sail into the big journal for reasons forever unknown. If the reviewer knows the wider community can check on their performance and biases as a reviewer, they might make a conscious effort to keep it objective (I know they are meant to do this; it would be nice to know they are doing this).

One idea I have seen floated around is publishing the reviews, which I actually support. The reviewer remains anonymous, but also has to take some ownership of their review.

How can they take any ownership if they are anonymous?

How about double blind during review: the reviewer doesn't know who they are reviewing, and the authors don't know who the reviewer is . . . but then all exchanges in the review process are published as an appendix, with names, if the paper is accepted and published. This way biases are mitigated during review (both for and against; BTW some reviewers might even be racist) and any petty sniping, unreasonable demands or fawning are laid bare at the end - no anonymity to hide behind.

The Lorax said...

I don't think you understand how power differentials work.

They take 'ownership' because most people I know don't want to look stupid. If the review is published and the community of scholars notes its problems, that is embarrassing for the reviewer (even if they are anonymous). I always read the other reviewers' comments after I review a paper to see if I missed something important, either good or bad.