NOTE: Additional documentation and supplementary links and commentary are available at What We Need to Do to Redeem Psychotherapy Research.
Fueling Change in Psychotherapy Research with Greater Scrutiny and Public Accountability
John Ioannidis’s declarations that most positive findings are false and that most breakthrough discoveries are exaggerated or fail to replicate apply as much to psychotherapy as they do to biomedicine.
We should take a few tips from Ben Goldacre’s Bad Pharma and clean up the psychotherapy literature, paralleling what is being accomplished with pharmaceutical trials. Sure, much remains to be done to ensure the quality and transparency of drug studies and to get all of the data into public view. But the psychotherapy literature lags far behind and is far less reliable than the pharmaceutical literature.
As it now stands, the psychotherapy literature does not provide a dependable guide to policy makers, clinicians, and consumers attempting to assess the relative costs and benefits of choosing a particular therapy over others. If such stakeholders uncritically depend upon the psychotherapy literature to evaluate the evidence-supported status of treatments, they will be confused or misled.
Psychotherapy research is scandalously bad.
Many RCTs are underpowered, yet consistently obtain positive results by redefining the primary outcomes after results are known. The typical RCT is a small, methodologically flawed study conducted by investigators with strong allegiances to one of the treatments being evaluated. Which treatment is preferred by investigators is a better predictor of the outcome of the trial than the specific treatment being evaluated.
Many positive findings are created by a combination of confirmatory bias, flexible rules for design, analysis, and reporting, and significance chasing.
Many studies considered positive, including those that become highly cited, are basically null trials in which results for the primary outcome are ignored and post-hoc analyses of secondary outcomes and subgroups are emphasized. Spin starts in abstracts, where the results reported are almost always positive.
The bulk of psychotherapy RCTs involve comparisons between a single active treatment and an inactive or neutral control group such as a wait list, no treatment, or “routine care,” which is typically left undefined and in which exposure to treatment of adequate quality and intensity is not assured. At best, these studies can tell us whether a treatment is better than doing nothing at all, or better than patients expecting treatment because they have enrolled in a trial and not getting it (a nocebo condition).
Meta-analyses of psychotherapy often do not qualify conclusions by grade of evidence, ignore clinical and statistical heterogeneity, inadequately address investigator allegiance, downplay the domination by small trials with statistically improbable rates of positive findings, and ignore the extent to which positive effect sizes occur mainly in comparisons between active and inactive treatments.
Meta-analyses of psychotherapies are strongly biased toward concluding that treatments work, especially when conducted by those who have undeclared conflicts of interest, including developers and promoters of treatments that stand to gain financially from their branding as “evidence-supported.”
Overall, meta-analyses too heavily depend on underpowered, flawed studies conducted by investigators with strong allegiances to a particular treatment or to finding that psychotherapy is in general efficacious. When controls are introduced for risk of bias or investigator allegiance, effects greatly diminish or even disappear.
Conflicts of interest associated with authors having substantial financial benefits at stake are rarely disclosed in the studies that are reviewed or the meta-analyses themselves.
Designations of Treatments as Evidence-Supported
Professional groups such as the American Psychological Association Division 12 and governmental organizations such as the US Substance Abuse and Mental Health Services Administration (SAMHSA) apply low thresholds in declaring treatments to be “evidence-supported.” Seldom do these groups deem any treatments ineffective or harmful.
Professional groups have conflicts of interest in wanting their members to be able to claim the treatments they practice are evidence-supported, while not wanting to restrict practitioner choice with labels of treatment as ineffective. Other sources of evaluation like SAMHSA depend heavily and uncritically on what promoters of particular psychotherapies submit in applications for “evidence supported status.”
The possibility that there are no consistent differences among standardized, credible treatments across clinical problems is routinely ridiculed as the “dodo bird verdict” and rejected without systematic consideration of the literature for particular clinical problems. Yes, some studies find differences between two active, credible treatments in the absence of clear investigator allegiance, but these are unusual.
The Scam of Continuing Education Credit
Requirements that therapists obtain continuing education credit are intended to protect consumers from outdated, ineffective treatments. Yet there is inadequate oversight of the scientific quality of what is offered, and bogus treatments are promoted with pseudoscientific claims. Organizations like the American Psychological Association (APA) prohibit groups of their members from making statements protesting the quality of what is being offered, and APA continues to allow CE credit for bogus and unproven treatments like thought field therapy and somatic experiencing.
Providing opportunities for continuing education credit is a lucrative business for both accrediting agencies and sponsors. In the competitive world of workshops and trainings, entertainment value trumps evidence. Training in delivery of manualized evidence-supported treatments has little appeal when alternative trainings emphasize patient testimonials and dramatic displays of sudden therapeutic gain in carefully edited videotapes, often with actors rather than actual patients.
Branding treatments as evidence supported is used to advertise workshops and trainings in which the particular crowd-pleasing interventions that are presented are not evidence supported.
Those who attend Acceptance and Commitment Therapy (ACT) workshops may see videotapes in which the presenter cries with patients while recalling his own childhood. They should ask themselves: “Entertaining, moving perhaps, but is this an evidence-supported technique?”
Psychotherapies with some support from evidence are advocated for conditions for which there is no evidence for their efficacy. What would be disallowed as “off label applications” for pharmaceuticals is routinely accepted in psychotherapy workshops.
We Know We Can Do Better
Psychotherapy research has achieved considerable sophistication in design, analyses, and strategies to compensate for missing data and elucidate mechanisms of change.
Psychotherapy research lags behind pharmaceutical research, but nonetheless has recommendations and requirements for trial preregistration, including specification of primary outcomes; completion of CONSORT checklists to ensure basic details of trials are reported; preregistration of meta-analyses and systematic reviews at sites like PROSPERO, as well as completion of the PRISMA checklist for adequacy of reporting of meta-analyses and systematic reviews.
Departures from preregistered protocols in published reports of RCTs are common, and there is little checking of abstracts for discrepancies from the results that were actually obtained or promised in preregistration. Adherence to these requirements is inconsistent and incomplete. There is little likelihood that noncompliant authors will be held accountable, and there is a strong incentive to report positive findings if a study is to be published in a prestigious journal such as the APA’s Journal of Consulting and Clinical Psychology (JCCP). Examining the abstracts of papers published in JCCP gives the impression that trials are almost always positive, even when seriously underpowered.
Psychotherapy research is conducted and evaluated within a club, a mutual admiration society in which members are careful not to disparage others’ results or enforce standards that they themselves might want relaxed when it comes to publishing their own research. There are rivalries between tribes like psychodynamic therapy and cognitive behavior therapy, but criticism is suppressed within the tribes, and strenuous efforts are made to create the appearance that members of the tribes only do what works.
Reform from Without
Journals and their editors have often resisted changes such as adoption of CONSORT, structured abstracts, and preregistration of trials. The Communications and Publications Board of the American Psychological Association made APA one of the last major holdout publishers to endorse CONSORT, and initially provided an escape clause specifying that CONSORT applied only to articles explicitly labeled as randomized trials. The board also blocked a push by the Editor of Health Psychology for structured abstracts that would reliably report the details needed to evaluate what had actually been done in trials and what results were obtained. In both instances, the board was most concerned about the implications for the major outlet for clinical trials among its journals, the Journal of Consulting and Clinical Psychology.
Although generally not an outlet for psychotherapy trials, the journals of the Association for Psychological Science (APS) show signs of being even worse offenders in ignoring standards and in their commitment to confirmatory bias. For instance, it takes a reader a great deal of probing to discover that a high-profile paper by Barbara Fredrickson in Psychological Science was actually a randomized trial, and further detective work to discover that it was a null trial. There is no sign that a CONSORT checklist was ever filed for the study. And despite Fredrickson using the spun Psychological Science trial report to promote her workshops, no conflict of interest was declared.
The new APS journal Clinical Psychological Science shows signs of even more selective publication and confirmatory bias than the APA journals, producing newsworthy articles to the exclusion of null and modest findings. There will undoubtedly be a struggle between APS and APA clinical journals for top position in a hierarchy that publishes only attention-grabbing papers, even if flawed, while leaving the publication of negative trials and failed replications to journals considered less prestigious.
If there is to be reform, pressure must come from outside the field of psychotherapy, from those without vested interest in promoting particular treatments or the treatments offered by members of professional organizations. Pressure must come from skeptical external review by consumers and policymakers equipped to understand the games that psychotherapy researchers play in creating the appearance that all treatments work and that the dodo bird is dead.
Specific journals are reluctant to publish criticism of their own publishing practices. If we cannot at first gain publication of our concerns in the offending journals, we can rely on blogs and Twitter to call out editors and demand explanations of lapses in peer review and quality control.
We need to raise stakeholders’ levels of skepticism, disseminate critical appraisal skills widely, and provide for their application in evaluating exaggerated claims and methodological flaws in articles published in prestigious, high-impact journals. Bad science in the evaluation of psychotherapy must be recognized as the current norm, not an anomaly.
We could get far by enforcing rules that we already have.
We need to continually expose journals’ failures to enforce rules about preregistration, disclosure of conflicts of interest, and discrepancies between published clinical trials and their preregistration.
There are too many blatant examples of investigators failing to deliver what they promised in the preregistration, registering after trials have started to accrue patients, and reviewers apparently not ever checking if the primary outcomes and analyses promised in trial registration are actually delivered.
- Require an explicit statement of whether the trial has been registered and where.
- Insist that reviewers consult trial registration, including modifications, and comment on any deviation.
- Explicitly label registration dated after patient accrual has started.
CONSORT for abstracts should be disseminated and enforced. A lot of hype and misrepresentation in the media starts with authors’ own spin in the abstract. Editors should insist that the main analyses for the preregistered primary outcome be presented in the abstract and highlighted in any interpretation of results.
No longer should underpowered, exploratory pilot feasibility studies be passed off as RCTs when they achieve positive results. An orderly sequence of treatment development should occur before conducting what are essentially phase 3 randomized trials.
Here, as elsewhere in reforming psychotherapy research, there is something to be learned from drug trials. A process of intervention development that establishes the feasibility and basic parameters of clinical trials needs to precede phase 3 randomized trials, but such preliminary studies cannot be expected to serve as phase 3 trials or to provide effect sizes for demonstrating efficacy or for comparison to other treatments.
Use of wait list, no treatment, and ill-defined routine care as control groups should be discouraged. For clinical conditions for which there are well-established treatments, head-to-head comparisons should be conducted, along with control groups that might elucidate mechanism. A key example of the latter would be structured, supportive therapy that controls for attention and positive expectation. There is little to be gained by further accumulation of studies in which the efficacy of the preferred treatment is assured by comparison to a lame control group lacking any conceivable element of effective care.
Evaluations of treatment effects should take into account prior probabilities suggested by the larger literature on comparisons between two active, credible treatments. The well-studied depression treatment literature suggests some parameters: effect sizes associated with a treatment are greatly reduced when comparisons are restricted to credible, active treatments and to better-quality studies, and when controls are introduced for investigator allegiance. It is unlikely that initial claims about a breakthrough treatment exceeding the efficacy of existing treatments will be sustained in larger studies conducted by investigators independent of developers and promoters.
Disclosure of conflict of interest should be enforced and nondisclosure identified in correction statements and further penalized. Investigator allegiance should be considered in assessing risk of bias.
Developers of treatments and persons with significant financial gain from a treatment being declared “evidence-supported” should be discouraged from conducting meta-analyses of their own treatments.
Trials should be conducted with sample sizes adequate to detect at least moderate effects. When positive findings from underpowered studies are published, readers should scrutinize the literature for similarly underpowered trials that achieve similarly positive effects.
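To make the sample-size point concrete, here is a minimal sketch of the standard power calculation for a two-arm comparison of means, using the normal approximation to the t-test (the exact t-based answer is a patient or two higher). The effect sizes below are conventional Cohen's d benchmarks, not figures from this article.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-sided, two-arm comparison
    of means, using the normal approximation to the t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "moderate" effect (d = 0.5) at 80% power needs ~63 patients per arm;
# even a large effect (d = 0.8) needs ~25 per arm. Many published
# psychotherapy trials enroll far fewer.
print(n_per_arm(0.5))  # 63
print(n_per_arm(0.8))  # 25
```

Run in reverse, the same formula shows why a small trial reporting a significant result should raise suspicion: with 20 patients per arm, only effects of roughly d = 0.9 or larger are reliably detectable.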
Meta-analyses of psychotherapy should incorporate techniques for detecting p-hacking and publication bias, evaluating whether the pattern of significant findings exceeds what is statistically probable.
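One such technique is an excess-significance test in the spirit of Ioannidis and Trikalinos: given the trials' estimated average power, ask how probable the observed count of significant results would be. A minimal sketch; the counts and power estimate in the example are hypothetical illustrations, not data from any actual meta-analysis.

```python
from math import comb

def excess_significance_p(observed_sig: int, n_trials: int, avg_power: float) -> float:
    """One-sided binomial test: probability of observing at least
    `observed_sig` significant results across `n_trials` trials if each
    trial has probability `avg_power` of reaching significance."""
    return sum(
        comb(n_trials, k) * avg_power**k * (1 - avg_power) ** (n_trials - k)
        for k in range(observed_sig, n_trials + 1)
    )

# Hypothetical literature: 9 of 10 small trials report significant effects,
# yet their sample sizes imply an average power of only about 0.35.
p = excess_significance_p(9, 10, 0.35)
print(f"{p:.4f}")  # well below 0.05: too many positive results to be credible
```

A tiny p-value here does not identify which trials are at fault; it signals that the set of findings, taken together, is implausible without selective reporting or analytic flexibility.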
Adverse events and harms should routinely be reported, including lost opportunity costs such as failure to obtain more effective treatment.
We need to shift the culture of doing and reporting psychotherapy research, away from praising exaggerated claims about treatments and the faux evidence generated to promote opportunities for therapists and their professional organizations. It is much more praiseworthy to make robust, sustainable, even if more modest, claims and to call out hype and hokum in ways that preserve the credibility of psychotherapy.
The alternative is to continue protecting psychotherapy research from stringent criticism and enforcement of standards for conducting and reporting research, and to simply allow the branding of psychotherapies as “evidence supported” to fall into appropriate disrepute.