Reviewers, arise: we have only an untrustworthy psychotherapy literature to lose.
Psychotherapy researchers have considerable incentives to switch outcomes, hide data, and spin reports of trials in order to get published in prestigious journals, promote their treatments in workshops, and secure future funding. The questionable research practices that permeate psychotherapy research cannot be changed without first challenging the questionable publication practices that allow and encourage them.
Journals must be held responsible for the untrustworthiness of what they publish concerning the efficacy and effectiveness of psychotherapy. When journals publish unreliable findings, they are failing not only their readers but also patients, clinicians, and policymakers.
Yet institutional agendas support and encourage the questionable research practices of psychotherapy researchers. Unreliable but newsworthy reports of “breakthrough” findings attract more early citations than honest, transparent reporting of findings that are inevitably more modest than the illusions that questionable research practices and poor reporting can create. Early citations lead to higher impact factors, which, rightly or wrongly, are associated with more prestige and the ability to attract reports of more ambitious, better-resourced trials, even if the reliability of those reports is in question.
Editors of journals often endorse responsible practices such as registration of trials, publication of protocols, and CONSORT (Consolidated Standards of Reporting Trials), but do little to enforce these practices in requests for revisions and editorial decisions.
Reviewers can nonetheless lead the reform of the psychotherapy literature by making their own stand for responsible reporting.
The burden of getting a better psychotherapy literature may fall on reviewers’ insistent efforts, particularly when the journals for which they review are lax or inconsistent in enforcing standards, as they often are.
When reviewers are given no explicit encouragement from the journals, they should not be surprised when their recommendations are overruled or when they do not get further requests for reviews after holding authors to best practices. But reviewers can try anyway and decline further requests for reviews from journals that don’t enforce standards.
Recently I tried to track the progress of a psychotherapy trial from (a) its registration to (b) publishing of its protocol to (c) reporting of its outcomes in the peer-reviewed literature.
The trial had been reported in at least two articles. The first report, in Psychosomatic Medicine, ignored the primary outcomes declared in the protocol.
Journal of Psychosomatic Research published another report that did not acknowledge registration, minimally cited the first paper without noting its results, and hid some important shortcomings of the trial.
Together, these two papers entered a range of effect sizes for the same trial into the literature. Neither article by itself indicates which should be considered the primary outcome, and the two compete for that claim. When well done, meta-analyses should be limited to a single effect size per study. Good luck to anyone undertaking the bewildering task of determining which of the outcomes, if any, reported in these two papers should be counted.
Overall, detecting the full range of problems in this trial, and even definitively establishing that the two reports were from the same trial, took considerable effort. The article in JPR did not give details or any results of the first report of the trial in PM. Although both the PM and JPR articles claimed to adhere to CONSORT in their reporting, the JPR article provided no flow chart of participants moving from recruitment through follow-up; that flowchart was included only in the PM article. Yet even in PM, the authors failed to discuss that the flowchart indicated substantially lower retention of patients randomized to treatment as usual (TAU). A reader also had to scrutinize the tables in both articles to recognize the degree to which substantial differences in baseline characteristics influenced the outcome of the trial and limited its interpretability. This was not acknowledged by the authors.
Overall, figuring out what happened in this trial took intense scrutiny, forensic attention to detail, and a certain clinical connoisseurship. Yet that is what it takes to evaluate what the trial contributes to the literature, and with what important cautions because of its limitations.
There were shortcomings in the peer review of these two articles, but I don’t think that we can expect unpaid reviewers to give the kind of attention to detail that I gave in my blog. Yet we can expect reviewers to notice more of the basic details related to the trustworthiness of reports of psychotherapy trials than they now typically do.
If reviewers don’t catch certain fundamental problems that may be hiding in plain sight, those problems are unlikely to be detected by subsequent readers of the published paper. It is notoriously difficult to correct errors once they are published. Retractions are almost nonexistent. APA journals such as the Journal of Consulting and Clinical Psychology or Health Psychology, which are the preferred outlets for many investigators publishing psychotherapy trials, are quite averse to publishing critical letters to the editor.
Anyone who has tried to publish letters to the editor criticizing articles in these journals knows that editors set a high bar for even considering any criticism. Authors being criticized often get a veto over what gets published about their work, either by being asked directly or by simply refusing to respond to the criticism. Some journals still hold to the policy that criticism cannot be published without a response from the authors.
It isn’t even clear that the authors of the original papers have to undergo peer review of their responses to critics. Doubts arise from the kinds of ad hominem attacks that are allowed from them and from authors’ general tendency to simply ignore the key points being made by critics. And authors get the last word, with usually only a single sequence of criticism and response allowed.
The solution to untrustworthy findings in the psychotherapy literature cannot depend on the existing, conventional system of post-publication peer review for correction. Rather, something has to be done proactively to improve prepublication peer review.
A call to arms
If you are asked to review manuscripts reporting psychotherapy trials, I invite you to join the struggle for a more trustworthy literature. As a reviewer, you can insist that manuscripts clearly and prominently cite:
- Trial registration.
- Published study protocol.
- All previously published reports of outcomes.
- Any reports that might subsequently be in the works.
Authors should provide clear statements, in both the cover letter and the manuscript, of whether it is the flagship paper from the project reporting the primary outcomes.
Reviewers should double-check the manuscript against electronic bibliographic sources such as Google Scholar and PubMed to see whether other papers from the trial are going unreported. Google Scholar can often identify reports that do not make it into the peer-reviewed literature as indexed in PubMed, or that have not yet made it to PubMed listings.
Checking is best done by entering the names of all authors into a search. The order of authors often changes between papers, and authors are added or dropped, but presumably there will be some overlap.
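For reviewers who prefer to script such checks, the author search can be sketched against NCBI’s public E-utilities ESearch endpoint for PubMed. This is a minimal illustration, not a complete workflow: the author names below are hypothetical placeholders, and a real check would still require screening the returned records by hand (and an OR search may cast a wider net when author lists change between papers).

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# NCBI E-utilities ESearch endpoint for PubMed.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_author_query(authors):
    """Combine author names into a PubMed [Author] query string.

    Joining with AND finds papers shared by the whole team; switching
    to OR plus manual screening is more forgiving when author lists
    change between reports of the same trial.
    """
    return " AND ".join(f"{name}[Author]" for name in authors)

def search_pubmed(term, retmax=20):
    """Query PubMed via ESearch and return a list of matching PMIDs."""
    params = urlencode({"db": "pubmed", "term": term,
                        "retmax": retmax, "retmode": "json"})
    with urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Hypothetical author list; replace with the trial's actual authors.
    query = build_author_query(["Smith J", "Jones A"])
    print(query)  # Smith J[Author] AND Jones A[Author]
    # print(search_pubmed(query))  # uncomment to run the live search
```

The PMIDs returned by the live search would then be compared against the reference list and registration record of the manuscript under review.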
Reviewers should check the consistency of what is identified as outcomes in the manuscript under review against what was registered and what was said in the published protocol. Inconsistencies should be expected, but reviewers should insist these be resolved in what could be a major revision of the manuscript. Presumably, as a reviewer, you cannot make a final recommendation for publication without this information being prominently available within the paper, and you should encourage the editor to withhold judgment.
Reviewers should alert editors to incomplete or inaccurate reporting and consider recommending a decision of “major revisions” where they would otherwise be inclined to recommend “minor revisions” or outright acceptance.
It can be a thankless task to attempt to improve the reliability of what is published in the psychotherapy literature. Editors won’t always like you, because you are operating counter to their goal of getting newsworthy reports into their journals. But the next time, particularly if they disregard your critique, you can refuse to review for them and announce that you are doing so on social media.
Update 1 (January 15, 2016 8:30 am): The current nontransparent system of prepublication peer review requires reviewers to keep confidential the review process and not identify themselves as having been involved after the fact. Yet, consistent with that agreement of confidentiality, reviewers are still free to comment on published papers. When they see that journals have ignored their recommendations and allowed the publication of untrustworthy reports of psychotherapy trials, what options do they have?
They can simply go to PubPeer and post a critique of the published trial without identifying themselves as having been a reviewer. If they are lucky, they will get a thread of post-publication peer review commentary going that will influence the subsequent perception of the trial’s results. I strongly recommend this procedure. Of course, disappointed reviewers can instead write a letter to the editor, but I have long been disillusioned with the effectiveness of that approach; taking that route is likely to leave them only disappointed and frustrated.
Update 2 (January 15, 2016 9:00 am):
While I was working on my last update, an announcement about the PRO Initiative appeared on Twitter. I reviewed it, signed on, and find its intent quite relevant to what I am advocating here. Please consider signing on yourself.
The Peer Reviewers’ Openness (PRO) Initiative is, at its core, a simple pledge: scientists who sign up to the initiative agree that, from January 1, 2017, they will not offer to comprehensively review, or recommend the publication of, any scientific research papers for which the data, materials, and analysis code are not publicly available, or for which there is no clear reason why these things are not available. To date, over 200 scientists have signed the pledge.
Reviewers, just say no to journals and editors that do not support registration, transparent reporting, and, importantly, the sharing of data required by readers motivated to reevaluate for themselves what is being presented to them.