- This post presents a template that I use as an editor for desk rejections of manuscripts reporting clinical trials showing clear evidence of questionable research practices such as spin and confirmation bias.
- This template can also be used by authors as a guide for writing a manuscript that avoids such immediate rejection.
- Alternatively, authors can reject my recommendations. They can instead use the template to devise a strategy to spin reporting of their trials with weak results for greater likelihood of getting published in a journal that isn’t concerned with confirmation bias or the reliability of findings.
What’s a desk rejection?
A desk rejection is a form letter that usually says nothing more than “I regret to inform you this manuscript is of insufficient interest to our readers. We nonetheless hope you will consider this journal for future submissions.”
Some journals reject over 40% of submissions in this manner, without anyone actually reading the manuscript, even the editor.
I like to provide a little bit more explanation when I send back manuscripts reporting clinical trials. My decision letter usually additionally states that I would be willing to receive a revision with more accurate and transparent reporting. Of course, I can’t guarantee acceptance because I cannot anticipate what form the revision will take and what other problems may be revealed with a closer look.
As a reviewer, I use the template to explain a recommendation of “reject” or “major revision.” I may actually read the manuscript, but my philosophy is that if authors aren’t going to be candid about their findings, I am not going to spend a lot of time reviewing a manuscript in need of major revisions.
With minimal modification, this template is broadly applicable. I realized that I had been saying the same things over and over again in decision letters and reviews. So I formalized the message as a template.
What authors need to know about editors and reviewers
We work for free. Authors don’t seem to understand that editors and reviewers largely work without compensation and so we have to make decisions about where we put our effort.
As an editor, I am conscious of the unpaid effort that reviewers put into evaluating a manuscript. It’s increasingly difficult to find reviewers. When I find someone willing to volunteer to do reviews, I don’t want to squander their time and effort.
Whether authors take the high or the low road is up to them
This template is useful for authors who want to avoid their manuscript being rejected without a thorough review. They can take away a sense of how they need to report the results of their clinical trials, at least in the abstract. They can then make sure that the reporting and discussion of their results in the rest of the manuscript corresponds to this.
I wish that more authors submitting manuscripts reporting clinical trials adhered to these standards ahead of time so that we didn’t have to impose them afterwards. Or that they did not succeed in getting poorly reported clinical trials published.
But here is the dilemma:
Most randomized trials of psychotherapy are underpowered. We should not routinely expect significant results, even if the intervention would be found effective with a sufficiently large sample. So, an accurate and transparent manuscript is likely to report weak or null findings.
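The arithmetic behind that claim can be sketched with a simple normal-approximation power calculation; the effect size (d = 0.3) and sample size (30 per arm) below are hypothetical values chosen for illustration, not figures from any particular trial:

```python
from statistics import NormalDist

def approx_power(d, n_per_arm, alpha=0.05):
    """Normal-approximation power for a two-arm trial comparing means.

    d: standardized effect size (Cohen's d); n_per_arm: patients per arm.
    A rough sketch, not a substitute for a proper power analysis.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # Noncentrality of the standardized mean difference under the alternative
    ncp = d * (n_per_arm / 2) ** 0.5
    return 1 - NormalDist().cdf(z_crit - ncp)

# A modest effect (d = 0.3) with 30 patients per arm yields power of
# roughly 0.21 -- far below the conventional 0.80 benchmark.
print(round(approx_power(0.3, 30), 2))
```

With power this low, a null result is the expected outcome even for a genuinely effective intervention, which is exactly why honest reporting of such results matters.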
But editors of vanity journals like the Journal of Consulting and Clinical Psychology do not like honestly reported null findings. The journal has now constructed a special ghetto for them with the heading:
“Surprise, Surprise: Interesting Null Results and Failures to Replicate”
I find that heading more demeaning than funny. I would be comfortable with it only if there was a heading for a section reporting positive results:
“Warning! Many of the findings in this section are exaggerated or false.”
So, as I’ve said before, junior academics can take the high road or the low road to success, and often the rewards are more assured for the low road. We are trying to change that.
Using the template for a quick appraisal of psychotherapy studies
Citizen-scientists checking out the published psychotherapy literature can also use this template as a guide to quickly detect questionable research practices and likely exaggerated results.
You should be able to detect spin with a quick scan of the abstract and a glance through the manuscript.
Judging from the published psychotherapy literature, the standards that I am applying are largely ignored. That is one of the reasons for the untrustworthiness of the psychotherapy literature as a guide to clinical practice and public policy.
The values reflected in this template are my own and don’t necessarily reflect those of other academic editors at PLOS One. PLOS One has announced its commitment to publishing all studies without serious flaws, even if that commitment is not always reflected in the decisions of individual academic editors. Even studies with serious flaws should be published, if the flaws are transparently noted and the resulting manuscript represents a contribution to the literature, even if only a lesson learned about what future studies should avoid.
A few years ago, there was a deliberate reduction of the PLOS One journal impact factor (JIF) after a reiteration of the policy of not just accepting what is innovative and breakthrough. The JIF was lowered from 4.4 in 2010 to a still respectable 3 in 2014. In psychology at least, publishing in PLOS provides as good a JIF as many of the leading journals, a consideration for those academics who face evaluation by administrators who are unduly impressed by JIFs.
I strongly believe that null trials should be published, but not with spin and obvious confirmation bias. As an editor, I will not send such manuscripts out to reviewers without revision. Reviewers are often fooled by the authors’ spin, and I find myself having to disagree with their positive assessment. I’m willing to disagree with reviewers’ recommendations when they are unfairly harsh or unrealistically positive. But it’s a waste of their time and mine to have to contend with reviews overlooking obvious spinning, leaving me to explain to the authors the reasons for my disagreement with enthusiastic reviewers.
And now my template. I would welcome feedback and suggested revisions or elaborations. Or if you don’t like the template, I’d like to hear that too.
Results of many clinical trials are never published, and there is a strong publication bias favoring positive findings. There is a consensus that this situation reduces the validity of integrative meta-analyses and systematic reviews, lending them a confirmation bias of unknown magnitude.
This report concerns a null trial comparing [X] and [Y] to a [Z] control comparison group. The authors are to be applauded for including a control/comparison group with at least some comparability in terms of expectations and intensity of contact, although no assessment of these features was reported.
It is important that results of null trials be published out of respect for the patients’ commitment of time and assumption of burden in participating in an experiment involving randomization rather than simply receiving treatment. But it’s also important that any evaluation of such interventions be able to accommodate the full range of data potentially available. Whether to commit further resources to investigating a particular treatment, or whether it is ready for dissemination, needs to be decided on all of the relevant data. Hence the movement to publish all trials.
However, it is important that results be transparently and completely reported, without undue confirmation bias and spin. In its present form, this manuscript exhibits considerable confirmation bias, starting with the skewed abstract.
If this trial was registered, it should say so and provide details. If it was not registered, that should also be stated. Registration demonstrates a precommitment to particular primary outcomes, hypotheses, and other methodological features before patient accrual began. I’m assuming this trial was not registered, and therefore we have to choose among N possible primary outcomes measured at a number of timepoints. For none of these is there evidence of an a priori commitment. Perhaps the authors could make reference to primary outcomes in past trials, but there is inadequate attention to the existing literature.
Reports of analyses of a three-arm trial should begin with whether there was an overall group difference. Only after that result is presented should post hoc exploratory analyses be conducted between treatments and within treatments over time. Any claim of statistical significance for such analyses should be qualified as post hoc.
The results section of the abstract has an immediate focus on some variables within two of the groups. Readers are given no indication why these two particular within-group differences should be emphasized, particularly at the expense of a clear presentation of between-group differences or of results for other variables. Nor is there any reason for emphasizing what the investigators consider positive effects within these two groups.
Considering the number of variables and the number of time points at which they were assessed, any control for multiple comparisons leaves this trial clearly null. That should be emphasized in the abstract and throughout.
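To make concrete how such a correction works, here is a sketch of the Holm step-down procedure, one standard control for multiple comparisons; the twelve p-values below are hypothetical, standing in for six outcomes assessed at two timepoints:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm step-down correction: which of m p-values stay significant.

    A sketch of one standard multiple-comparisons control;
    the p-values passed in below are hypothetical.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    significant = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            significant[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return significant

# Twelve comparisons, several nominally "significant" at 0.05:
pvals = [0.003, 0.04, 0.02, 0.30, 0.11, 0.26,
         0.07, 0.45, 0.62, 0.09, 0.51, 0.33]
print(holm_bonferroni(pvals))  # only the smallest p-value survives
```

After correction, only one of the nominally significant comparisons survives, illustrating how apparent positive findings in a multi-outcome trial can evaporate under appropriate control.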
There is inadequate attention to the existing literature. There are other trials claiming positive effects, but if they were subject to the same standards for reporting clinical trials as I am proposing for this one, they too would be seen as null. I recommend that the authors present results of these past trials, not accepting the confirmation bias of past authors, but transparently and with appropriate control for multiple comparisons.