Sunday, 6 October 2013

Are the problems with peer review anything to do with Open Access?

I just read a dreadful article by John Bohannon about dubious scientific practice that, itself, contained a pretty basic logical/scientific error.

Bohannon explains that he conducted a "sting" operation in which he sent a set of spoof papers, containing deliberate and obvious flaws, to a large number of Open Access journals, and 157 of the 304 journals accepted the paper they were sent (roughly 52%). He goes on to use a few selective variants to make the percentages look bigger, but let's agree that a greater-than-50% failure rate in detecting the spoof is already quite a bad record.

All well and good so far. We already know that peer review is flawed (a Google search for problems with peer review returns 39 million hits), due partly to the fact that many journals are run as profit-making entities. These data highlight that nicely. Now, the problem lies in Bohannon's implicit claim that this is caused by open access. He makes this claim carefully, with phrases like "reveals little or no scrutiny at many open-access journals", and by saying that his data raise "questions about peer-review practices in much of the open-access world". These phrases are carefully designed to put in your mind that the problem is with open access, but note that he doesn't explicitly state that. He can claim afterwards that he only specified open access because that's the only domain for which he has data. But let's be clear: he intends to imply that open access is the cause.

Probably many of you have already spotted the issue here. Most of the comments following the article point it out. Certainly Bohannon himself understood it, which is why the digs highlighted above are phrased so carefully. But let's just spell it out with an analogous argument. If you were to run a study showing that nearly all men have ears, you don't get to claim afterwards that ears seem to be a feature of being male. You don't even get to imply that result from your data. To say anything about whether ears are a male-specific feature you also have to measure their occurrence in non-males. You might be right - ears might turn out to be a male feature - and when you have data showing that their occurrence in women is less common, feel free to discuss that. But not before then. The same applies here: Bohannon sent his spoof only to open-access journals, so without a control sample of subscription journals he has no basis for comparing the two.

This is a basic scientific error, and a man claiming to identify problems in our scientific culture should be embarrassed to make such deliberately misleading statements.

See also:
Michael Eisen, pointing out that Science Magazine has also made some pretty shocking errors in its peer review
Jeroen Bosman's post, including links to many others

Friday, 5 July 2013

Publish your scientific materials when you publish your paper

Last week, for the first time, I published a paper for which I also uploaded all the electronic materials needed to replicate the study (at least, for a moderately experienced vision scientist). You can now read about the fact that motion is not the only thing that causes the silencing illusion of Suchow and Alvarez (I know you'll all be fascinated). But then you can also download the PsychoPy scripts to run the study, as well as analyse the original data and generate the plots. It may be surprising to the non-scientists out there that this is newsworthy but, in fact, almost nobody does this yet. I know! Unbelievable, right? Although scientists are mostly not shy about their findings, most are very shy about providing all the guts of their research, warts and all.
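To give a flavour of what "runnable materials" means in practice, here is a minimal sketch of a PsychoPy script. It shows only a toy drifting grating, not the actual silencing display from the paper, and the window size, spatial frequency and duration are arbitrary values chosen purely for illustration:

```python
# Illustrative PsychoPy sketch: draw a drifting grating for a couple of seconds.
# This is a toy example, NOT the actual script from the silencing study;
# all parameter values here are arbitrary.
from psychopy import visual, core, event

win = visual.Window(size=[800, 600], units='pix', color='grey')

grating = visual.GratingStim(win, tex='sin', mask='gauss',
                             size=256, sf=0.02, phase=0.0)

clock = core.Clock()
while clock.getTime() < 2.0:       # show the stimulus for 2 seconds
    grating.phase += 0.01          # advance the phase each frame -> drift
    grating.draw()
    win.flip()                     # update the display
    if event.getKeys(keyList=['escape']):
        break                      # allow the observer to quit early

win.close()
core.quit()
```

The point is simply that a short, commented script like this is enough for another lab to reproduce the stimulus exactly, rather than reverse-engineering it from a methods section.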

Some time ago I posted on the idea that we could do with an easy-to-use repository to which we could upload materials from experiments in psychology. There are numerous benefits for science in general, expanded on in the post above: we can create direct replications of other studies, and we can spend more time thinking about scientific issues and less time rewriting basic stimulus code. I didn't express in that post that there's also a massive benefit for the publishing scientist, who is shy and territorial and doesn't want to lose the "competitive advantage". The perceived competitive advantage is that in this experimental topic (s)he's already written experiment code that nobody else has. But I believe it's massively outweighed by the benefit that when we publish our code we encourage more people to read our work and base their studies on ours (to a geeky scientist that's the biggest compliment you can pay). Let me put it another way: if your study was easy to program then somebody else can do it in no time (so there's no loss in giving them the code), and if the study was hard to program then they might never run the extension of it (so it's really important to give them the code so that they do).

Happily, unbeknownst to me when I wrote that post, the necessary repository was being created, and you can now join up and use it at OpenScienceFramework.org. It's free, it's easy, it's permanent and it's easy on the eye. It also provides some great additional features. You can use it, before publishing your work, to share materials with your collaborators in a secure (private) repository, and then make the materials available publicly later if you wish. The repository has version control built in, so you can track changes to the materials without needing to know about the underlying technologies. You can also use OpenScienceFramework to register in advance (again, privately) your intention to conduct a study and the expected outcomes, in order to demonstrate which of your conclusions/analyses stem from genuinely a priori hypotheses.

Basically, this is a great resource that behavioural scientists should all be looking at seriously. Many thanks to Brian Nosek, Jeff Spies and the rest of the OpenScienceFramework team. It is still in beta, so the creators are still taking feedback and adding features.

I'll be writing all my experiment code from now on with the expectation that it will be published on OSF, and therefore writing it carefully with clearer-than-ever notes. For me this recent study is the first of (hopefully) many where you will be able to download the entire set of materials and data for the publication.
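As a rough illustration of the kind of clearly annotated analysis code I have in mind, here is a minimal sketch that loads a hypothetical results file and produces a summary plot. The file name and the column names are made up for the example and are not from the actual study:

```python
# Illustrative analysis sketch (not the actual analysis from the paper).
# Assumes a hypothetical CSV with columns 'participant', 'condition', 'response'.
import pandas as pd
import matplotlib.pyplot as plt

# Load the raw data exactly as it would be shared alongside the paper
data = pd.read_csv('raw_data.csv')

# Average the responses for each participant in each condition
per_participant = (data.groupby(['condition', 'participant'])['response']
                       .mean()
                       .reset_index())

# Then summarise across participants: one mean and standard error per condition
condition_means = per_participant.groupby('condition')['response'].mean()
condition_sems = per_participant.groupby('condition')['response'].sem()

# Plot condition means with standard-error bars and save the figure
condition_means.plot(kind='bar', yerr=condition_sems, capsize=4)
plt.ylabel('Mean response')
plt.tight_layout()
plt.savefig('condition_means.png', dpi=300)
```

With comments at this level of detail, anyone downloading the data can see exactly how each figure in the paper was generated and rerun the whole pipeline themselves.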