Preprint reviews by Thomas Munro

Science with no fiction: measuring the veracity of scientific reports by citation analysis

Peter Grabitz, Yuri Lazebnik, Joshua Nicholson, Sean Rife

Review posted on 17th September 2017

The idea of validating this metric against the Reproducibility Project is excellent, but I think more examples are needed, especially examples where apparently reproducible claims later turned out to be wrong, as discussed by Neuroskeptic. The "replication crisis" was, after all, the inspiration for the Reproducibility Project in the first place. A classic example is Millikan's incorrect value for the charge of the electron, whose apparently high reproducibility declined slowly over several decades:

https://hsm.stackexchange.c...

From your tree viewer, it appears that you have values for many other studies; adding these would strengthen the paper.

I like your idea of plotting the result over time. The trajectory may well give early warning of problems, as in the Millikan case.
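To make the idea concrete, here is a minimal sketch of how such a trajectory might be plotted. The yearly counts are purely hypothetical placeholders, not data from the Millikan case or any real study:

```python
# Minimal sketch: plot the cumulative fraction of published studies supporting a claim,
# year by year, so that a declining trajectory would stand out early.
# All numbers below are purely illustrative.
import matplotlib.pyplot as plt

years = [2010, 2011, 2012, 2013, 2014, 2015]
supporting = [2, 3, 3, 4, 4, 4]   # cumulative supporting studies (hypothetical)
total = [2, 4, 6, 9, 13, 18]      # cumulative studies testing the claim (hypothetical)

rates = [s / t for s, t in zip(supporting, total)]
plt.plot(years, rates, marker="o")
plt.ylim(0, 1)
plt.xlabel("Year")
plt.ylabel("Published replication rate")
plt.title("Trajectory of a claim's replication rate (hypothetical data)")
plt.show()
```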

Calculating this metric would be much less work with a search engine that delivers the sentence in which the citation appears, the "citation context". The Colil database does this, but only for a subset of PubMed Central. It's a powerful proof of concept, though.

http://colil.dbcls.jp

CiteSeerX seems to give the context for some citations, but not to a useful extent. The old Microsoft Academic Search used to offer this, but the new Microsoft Academic web interface does not; the citation contexts are only available via the API, which the typical researcher is not going to use. This is a tragic waste, since the new version has excellent coverage; I find it often detects citations that Google Scholar misses. If you approached them, this paper might help them see enough value in that feature to re-implement it. Perhaps you could suggest a collaboration. The R-factor is much more likely to take off if it is easy to calculate. I think it's worth mentioning these services in the paper to help people who would like to try it.

As Neuroskeptic pointed out, the name "R-factor" is likely to be confused with the R-index. I think a more descriptive name would help to avoid confusion, e.g. "published replication rate" (or quotient). Also, rather than using subscripts, which are not self-explanatory, the value could be given as a fraction, e.g. "1/11 (9%)".
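As a concrete illustration of the fraction-style display, here is a short sketch; the function name is mine and purely hypothetical:

```python
# Format the metric as "supporting/total (percent)" rather than as a subscripted symbol,
# e.g. 1 supporting study out of 11 citing studies -> "1/11 (9%)".
def replication_fraction(supporting: int, total: int) -> str:
    """Return the published replication rate as a fraction with a percentage."""
    if total == 0:
        return "0/0 (n/a)"
    return f"{supporting}/{total} ({supporting / total:.0%})"

print(replication_fraction(1, 11))   # -> "1/11 (9%)"
```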



Can paid reviews promote scientific quality and offer novel career perspectives for young scientists?

Christian Wurzbacher, Hans-Peter Grossart, Erik Kristiansson, Henrik Nilsson and Martin Unterseher

Review posted on 17th January 2017

I strongly support your proposals in this preprint, and I think the suggested use of postdocs is inspired. It might reduce resistance, and provide a way for postdocs to remain current.

However, the preprint currently has some important omissions. Several of the ideas you propose as new and hypothetical already exist, and indeed have been in use for decades. A much stronger case could be made for them by reviewing the empirical evidence and the earlier theoretical arguments.

1) What you term the initial APC already exists: the submission fee. Submission fees have been charged by dozens of journals since the early 1970s, mainly in economics but also in a few biomedical journals. There is a substantial literature on submission fees and their effects; I give a brief overview, with 17 references, in a comment on pp. 82-84 of Solomon et al. (2016).

2) Your proposal to use submission fees to pay reviewers has also been implemented successfully. Several journals with high submission fees do this, such as the Journal of Financial Economics.

3) Your prediction that submission fees "will lead to fewer, but better, submissions" is strongly supported by the literature, both theoretical and empirical (see the same comment). It could be made much more persuasive, and even roughly quantified, by adding some of these references.

On all these points, the preprint would probably benefit if you could attract an editor with long experience of submission fees as a coauthor. I suspect some of them would enjoy the chance to tout their successes.

On your proposal to use submission fees for high-acceptance-rate journals, I think the fast-track experiment by Scientific Reports ($750 submission fee) suggests caution. It attracted 25 submissions in one month (Jackson, 2015), but also strong opposition, with some editors resigning, and was discontinued. Given that submission fees (and even fast-track fees of over $1,000) have been successful in selective, prestigious journals, these would probably be a safer choice initially.

Jackson, A. (2015). Fast-track peer review experiment: First findings.
blogs.nature.com/ofschemesandm...

Solomon, D. J., Laakso, M., & Björk, B.-C. (2016). Converting Scholarly Journals to Open Access: A Review of Approaches and Experiences.
dash.harvard.edu/handle/1/2780...
