---
layout: post
status: publish
published: true
title: What's "open" got to do with it?
wordpress_id: 2904
wordpress_url: https://www.martineve.com/?p=2904
date: !binary |-
  MjAxMy0xMC0wMyAyMTowMjo1OCArMDIwMA==
date_gmt: !binary |-
  MjAxMy0xMC0wMyAyMDowMjo1OCArMDIwMA==
categories:
- Technology
- Open Access
- Academia
tags:
- OA
comments: []
---
The below is a piece that I wrote for The Conversation, in the state it was in before being edited for publication there. While the published version better captures the sense of the sting article and the general background, I wanted to post my unadulterated version here, as it shows my true thoughts for those already immersed in the debate.
As you read this, Twitter is no doubt aflame with debate over the sting article on open access journals published in Science. The article details John Bohannon's submission of over 300 bogus papers to open access journals listed in the DOAJ and on Beall's list of "predatory open access publishers". Problematically, 157 of those journals accepted the obviously erroneous manuscript, which featured ethical approval problems and clear scientific anomalies. While he never explicitly frames his discourse as a comparison with the subscription model, Bohannon's essential hypothesis seems to be that open access driven by publication charges will be inherently biased towards acceptance.
On the surface this looks like a deadly blow to open access of the APC variety. However, I want to argue that there are some glaring problems with the article and its premises. In short, Bohannon's article isn't really about open access; it's about a more general breakdown in peer review and in our ability to evaluate trust, yet he targets only open access here. Problems of this nature have been notably explored in closed journals as far back as 1982, albeit from the flip-side of journals rejecting their own resubmitted material. Overall, though, in a history rife with flawed review, there is really very little that is new in Science's sensationalist article.
First and foremost, Bohannon's methodology is dubious. While Bohannon went to great lengths to concoct the bogus methodology for submission and to ensure the flaws were credible, the key problem with the article's own investigation surely lies in two sentences tucked away in Bohannon's coda. Bohannon writes: "Some say that the open-access model itself is not to blame for the poor quality control revealed by Science's investigation. If I had targeted the bottom tier of traditional, subscription-based journals, Roos told me, 'I strongly suspect you would get the same result.'" (See correction.) Well, yes, but that is an enormous side statement! Beneath the veneer of objectivity, the one-sided choice of targets betrays a sub-vocalised disdain: "it would never happen in the subscription model". But, given that this wasn't tested, how do we know?
However, Bohannon goes further and actually includes his own counter-data to undermine the hypothesis that nonetheless dominates the tone of the article. As is noted within the piece, PLOS ONE "was the only journal that called attention to the paper's potential ethical problems". This clear signal of excellent review practice by the foremost open access publisher surely outweighs many of the article's findings. Furthermore, several traditional publishers, including SAGE and Elsevier, accepted the piece. If publishers claim that they add value at the peer review stage, then this should not have happened, as one would assume a uniform review standard across their publications, closed or open. Given that this was the case, why did Science publish such a clearly incomplete study? More importantly, one might be led to wonder whether Science had some interest in trouncing open access. Certainly, that it neither subjected its own model to scrutiny nor pointed out the "predatory" nature of subscription publishing, which is bankrupting our academic libraries, does not help.
The harsh truth is that Bohannon's article is hostile. He submitted articles only to OA journals and, through that omission, erroneously links the failure of peer review to a single business model. While he acknowledges that the top players provided rigorous review, Bohannon still uses the findings from a large class of useless journals (which are, doubtless, predatory) to represent open access as a whole, as though all venues carried equal weight. Meanwhile, although Bohannon argues that "open access has multiplied that underclass of journals", I would counter that it is only through a history of masking editorial processes amid claims of "value added" that we have arrived at this mess.
To elaborate: review is a function of academic labour. Editorial decisions should be made by the qualified and respected academics who run the journals, and prestige should be based on neither the brand of the publisher nor the name of the journal. The sole marker must be the research turned out. Bogus journals that do not remove editors when they resign must be flagged up and shot down.
If we played by this model, not even the most gullible of authors would attempt submission to this underclass, and such predatory behaviour would be next to impossible. I do not know how many people are taken in by these journals (which are akin to email scams), but, as it is, we have instead built a system that cloaks the validating academic input and enables journals to hide behind their history and commercial facades.
Finally, even if we do take seriously the case that there might be credible-looking but predatory OA journals out there, could we not find new economic models that remove this incentive to publish for cash? Were platforms collectively funded by libraries, not only would this behaviour be impossible, but we would also see a very different environment emerge.
My thanks to Ernesto Priego for feedback on this article.
Correction
Since this piece was written, Science has changed its quotation of David Roos. In the original, embargoed copy, it read as above. David has since written to me with this statement:
"One minor comment: as I have noted in a comment on The Conversation, and as now corrected on the Science web site, my guess is that they would have obtained similar results from subscription-based journals in general, not just "the bottom tier" -- a phrase I would never use, and have no idea of how to define."