A new Call for Papers establishes a place for failed replications and null findings in clinical psychology in an American Psychological Association journal. Unfortunately, the journal lacks an impact factor, despite having published for decades.
There are lots of reasons that establishing such a ghetto where failed replications and null findings can be herded and ignored is a bad idea. I provide nine. I’m sure there are more.
But the critical issue in the creation of such ghettos is that they reduce pressure on the APA vanity journal, Journal of Consulting and Clinical Psychology, to reform questionable publication practices and to routinely accept replications and null findings.
Clinical psychology is different
- The untrustworthiness of clinical psychological science is serious, but different from that of personality and social psychology, and the crisis it poses requires different solutions.
- There is little harm in not being able to replicate personality and social psychology studies, beyond the damage to the credibility of those fields and the investigators within them.
- However, untrustworthy findings in clinical psychology – whether they are exaggerated or simply false – can translate into ineffective and even harmful services being delivered, along with scarce resources being diverted to where they are needed less.
- Personality and social psychologists can look to organized mass replication efforts to assess the reproducibility of findings in their fields. However, such efforts are best undertaken with Internet-recruited and student samples using surveys and simple tasks.
- Mass replication efforts are less suitable for key areas of clinical psychology research, which often depend on expensive clinical trials with patients and extended follow-up. Of course, research in clinical psychology benefits from independent replication, but it is unlikely to occur on a mass basis.
Efforts to improve the trustworthiness of clinical psychology should have progressed more, but they have not.
Clinical psychology has greater contact than personality and social psychology with the biomedical literature, where untrustworthy findings can have more serious implications for health and mortality.
In response to repeated demonstrations of untrustworthy findings, medical journals have mandated reforms such as preregistration, CONSORT checklists for reporting, transparency of methods and results through supplements, declarations of conflicts of interest, and requirements for the routine sharing of data. Implementation of these reforms in medical journals is incomplete and enforcement is inconsistent, with clear signs of resistance from some prestigious journals. Note, for instance, the editor of the New England Journal of Medicine warning that routine sharing of data from clinical trials would produce “research parasites” who would put the data to different purposes than intended by the original authors.
While many of these reforms have been nominally endorsed by specialty clinical psychology journals, they are largely ignored in the review and acceptance of manuscripts. For instance, a recent systematic review published in JCCP of randomized trials published in the most prestigious clinical psychology journals in 2013 identified 165 RCTs. Of them,
- 73 (44%) RCTs were registered.
- 25 (15%) were registered prospectively.
- Of registered RCTs, only 42 (58%) indicated registration status in the publication.
- Only 2 (1% of all trials) were registered prospectively and defined primary outcomes completely.
Apparently, not only are investigators failing to register their trials, but editors and reviewers ignore whether registration has occurred and don’t bother to check whether what is reported in a manuscript is consistent with what was proposed in the registration.
Questionable research practices in clinical psychology
The crisis in clinical psychological science lies in its evidence base:
- RCTs are underpowered, yet consistently obtain positive results by redefining the primary outcomes after results are known.
- Typical RCTs are small, methodologically flawed studies conducted by investigators with strong allegiances to one of the treatments being evaluated.
- The treatment preferred by the investigators is a better predictor of the outcome of an RCT than the specific treatment being evaluated.
Questionable publication practices in clinical psychology
Questionable research practices (QRPs) in clinical psychology are maintained and amplified by questionable publication practices (QPPs).
The premier psychology journal for publishing randomized trials is Journal of Consulting and Clinical Psychology. It is a vanity journal with a strong confirmation bias and a distinct aversion to publishing null findings and replications. Until recently, letters to the editor were not even allowed. When the ban was relaxed a few years ago, a high bar was set for accepting them. Statistics about the rate of acceptance of letters to the editor are not available, but accounts from colleagues suggest that criticisms of basic flaws in articles that have been published are suppressed. JCCP is not a journal hospitable to post-publication peer review.
Publication of flawed studies in JCCP goes undetected and unannounced, except through alternative post-publication peer review outside the journal, such as PubMed Commons comments and blogging.
Although the term “Pink Floyd rejection” was originally coined by an outgoing editor of the Association for Psychological Science’s Psychological Science, it captures well the editorial practices of JCCP.
Call for Brief Reports: Null Results and Failures to Replicate
Journal of Psychotherapy Integration will start publishing a new recurring brief reports section titled, “Surprise, Surprise: Interesting Null Results and Failures to Replicate.”
In an era when findings from psychological science are called into question, it is especially important to publish carefully constructed studies that yield surprising null results and/or failures at replicating “known” effects.
The following 2012 article published in Journal of Psychotherapy Integration is a good example of a paper that would be appropriate for this section:
DeGeorge, J., & Constantino, M. (2012). Perceptions of analogue therapist empathy as a function of salient experience. Journal of Psychotherapy Integration, 22, 52-59.
Submitted manuscripts should not exceed 2500 words, including references. Manuscripts should be submitted electronically through the journal’s submission portal under Instructions to Authors.
Please note in your cover letter that you are submitting for this brief reports section. We look forward to your submissions!
What’s wrong with this resting place for failures to replicate and null findings?
- Authors undertaking replications, regardless of whether they succeed in confirming past findings, are entitled to a journal with an impact factor.
- The title Journal of Psychotherapy Integration adds nothing to electronic bibliographic searches, because “psychotherapy integration” is not what failures to replicate and null findings necessarily represent. Locating particular articles in electronic bibliographic searches is often fortuitous. Readers’ decisions to click on a title to examine the abstract depend on their recognizing the relevance of the article from the title of the journal in which it is published.
- The title to this special section is demeaning. If it is a joke, it will soon wear thin.
- Failures to replicate and null findings are not necessarily “surprises” given the untrustworthiness of the clinical psychology literature.
- Reasons for the failure to replicate previously published clinical trials often lie in the conduct and reporting of the original studies themselves. Yet having been granted “peer-reviewed” status in a more prestigious journal, the original articles are automatically granted more credibility than the failure to replicate them.
- A word limit of 2500 is hardly adequate to describe methods and results, yet there is no provision for web-based supplements to present further details. The value in failures to replicate and null findings lies in part in the ability to make sense of the apparent discrepancy with past studies. Confining such papers to 2500 words reduces the likelihood that the discussion will be meaningful.
- The existence of such a ghetto to which these papers can be herded takes pressure off the vanity JCCP to reform its publication practices. Editors can perceive when studies are likely to be failed attempts at replications or null findings and issue desk rejections for manuscripts with a standard form letter suggesting resubmitting to the Journal of Psychotherapy Integration.
- Providing such a ghetto is APA’s alternative to acceptance of a Pottery Barn rule, whereby if JCCP publishes a clinical trial, it incurs an obligation to publish attempted replications, regardless of whether results are consistent with the study being replicated.
- Without journal reform, publication in JCCP represents a biased sampling of evidence for particular psychotherapies with a strong confirmation bias.
Clinical psychology doesn’t need such silliness
Initiatives such as this call for papers are a distraction from the urgent need to clean up the clinical psychology literature. We need to confront directly JCCP’s policy of limiting publication to articles that are newsworthy and that claim to be innovative, at the expense of robust and solid clinical psychological science.
Some personality and social psychologists involved in the replication initiative have received recognition and endorsement from the two professional organizations competing for the highest impact factors in psychology, the Association for Psychological Science and the American Psychological Association. Those of us who continue to call in social media for reform of the vanity journals are often met with a flurry of negative responses from the replicators, who praise the professional organizations for their commitment to open psychological science.
Have the replicators sold out the movement to reform psychology by leaving the vanity journals intact? As I’ve argued elsewhere, compromises worked out for the replicability project may adversely affect efforts to improve the trustworthiness of clinical psychological science, even though the stakes are higher.