Overcoming perverse disincentives to honest, transparent reporting of science

I was impressed by a Student Blog post by Ulrike Träger (ORCID: 0000-0002-4493-1136) in PLOS’ Early Career Research Community:

Reforming scientific publishing to value integrity

I personally recommend that my readers follow that blog. It consistently has fresh and interesting material, and the early career bloggers deserve to be recognized and encouraged for what they are doing. They are also often more in touch with recent developments that are being missed by more established investigators busy doing things the old way. See, for instance:

http://blogs.plos.org/thestudentblog/2016/09/30/some-thoughts-on-methodological-terrorism/

http://blogs.plos.org/thestudentblog/2016/08/09/social-media-for-ecrs-serious-scientists-can-and-do-use-twitter/

Investigators in my cohort have handed early career investigators a demoralizing mess. They need our support in cleaning it up and fixing how science is produced, disseminated, and corrected.

I’ve taken the liberty of singling out a couple of passages and providing minimal commentary. If you like the sampling, go to the original blog post for more.

We need to deal with honest, transparently reported science being considered boring, even when it proves more durable and dependable. Solid science is built with what might be seen as just a lot of bricks in the wall.

A study published in PLOS Biology investigated the factors on which scientists’ reputations are judged.

When comparing a scientist who produces boring but reproducible studies with a scientist who publishes exciting but irreproducible studies, the public perceived the boring scientist as “smarter, more ethical, a better scientist, more typical, and more likely to get and keep a job.” Scientists given the same choice agreed that the boring but reliable scientist was smarter, more ethical, and the better scientist. But in a departure from the public’s opinion, scientists judged the exciting but unreliable scientist more likely to get a job and to be more celebrated by peers. This is a stark contrast to the public’s view of science, which seems to favor well-done science over flawed science. Worryingly, when scientists were asked which of the two model scientists they would rather be, more said they wanted to be the scientist who produces exciting results, even though the majority knew that publishing reproducible research is better overall. While one survey of 313 researchers does not represent the whole scientific community, these results paint a surprising picture of scientists’ priorities.

We need to stop worshiping the journal impact factor and wean ourselves from paywalled journals. We need:

Valuing science based on scrutiny from an open access community. A lot of scientific journals let only paying subscribers view their publications, which limits exposure to the research they publish. Open access policies allow anyone interested in a study to access the research without barriers. More importantly, not just access should be free; the peer review process should be open as well.

We need to give post-publication peer review stronger incentives and link it directly and inseparably to the already published papers being reviewed.

Peer review before publication is a key step in checking the quality of science; however, the current peer review system is imperfect. I believe that post-publication peer review should be a key process for improving scientific integrity. Ideally, both pre- and post-publication peer reviews would be made available alongside the published manuscript for increased transparency in the scientific process. A few publications have introduced open reviews, including EMBO, BMJ Open, and F1000Research. Alternatively, you can find online journal clubs like PubPeer, where articles are discussed after publication, or leave comments on articles post-publication.

Or organize your own PubMed Commons journal club.

We need to do a better job of making negative and inconclusive data widely available.

Currently, a lot of sound science remains unpublished, because negative or inconclusive data are less likely to be published due to reporting bias. A 2010 study in PLOS ONE showed that 82% of papers published between 2000 and 2007 in the United States included only positive results, in spite of the value of negative data. By publishing negative or null results, the scientific literature captures a more complete picture of a particular field and includes more balanced information. I feel a well-done study with negative results deserves the same recognition as a positive one, as it still expands human knowledge and saves resources for other researchers. For example, publishing what isn’t the cause of a given disease will prevent other scientists from spending time and money looking into the same thing. The PLOS Missing Pieces Collection includes negative, null, or inconclusive results and is a great platform for scientists whose experiments yield results of this type. In addition, PLOS ONE is a journal that does publish negative, null, or inconclusive results. Replication studies also receive limited recognition despite their importance to advancing a scientific field. They are key to validating scientific findings, but few scientists risk doing them because they are hard to publish for their “lack of innovation,” a notion we should start to forget.

See also

What is open access?

What is open peer review?