Continuing to probe studies of mindfulness-based stress reduction (MBSR) for health problems, I turned to some contradictory claims that an investigator had made about her trial of MBSR for improving the sleep of cancer patients.
- I noticed things in a CONSORT flowchart in the article that the editor and reviewers should have flagged as a serious limitation of the study, and one worthy of acknowledgment and discussion.
- What I saw undercut the validity of the complicated statistical analyses on which the author’s claims depended, as well as the credibility of any claims about the efficacy of MBSR.
- Promoters of MBSR desperately need to demonstrate that the treatment is as good as or better than alternatives. This study does not contribute credible, favorable evidence, despite being dressed up to do so.
- It is time to attach an expression of concern to MBSR studies:
Warning! Likely to contain exaggerations and distortions favoring MBSR. Not suitable as a basis for decision-making as to whether to seek, provide, or commit public resources to MBSR.
There’s a growing sense that claims about MBSR are overblown and based on spun and low-quality evidence largely generated by enthusiasts and promoters with undeclared conflicts of interest. I had thought, though, that someone who is motivated but not caught up in all the fanfare could come to an independent judgment of the available literature. Well, it takes too much work.
I’m losing confidence that anyone can evaluate MBSR studies without a concerted effort to cut through hype and hokum, probing to a level of detail that the quality of evidence ultimately does not justify.
Simply put, it takes too much effort for outsiders – researchers, clinicians, and patients – to grasp how they are being misled by the mindfulness literature.
What we don’t know about MBSR for sleep problems
A comprehensive systematic review and meta-analysis prepared for the US Agency for Healthcare Research and Quality (AHRQ):
Goyal M, Singh S, Sibinga EMS, et al. Meditation programs for psychological stress and well-being: a systematic review and meta-analysis. JAMA Intern Med. Epub Jan 6, 2014. doi:10.1001/jamainternmed.2013.13018.
The review screened 18,753 citations and found only 47 trials (3%), with 3,515 participants, that included an active control treatment.
The dismal conclusion:
We found low evidence of no effect or insufficient evidence of any effect of meditation programs on positive mood, attention, substance use, eating habits, sleep, and weight.
The results of the study that I’m going to be discussing became available after the systematic review. The primary outcome paper was published in a prestigious journal. Maybe, I had hoped, it could represent a sorely needed contribution to the limited evidence available for strong claims that MBSR is a cure for whatever ails you. No, it was not.
But why should we expect MBSR to improve sleep? A Buddhist neuroscientist expressed doubt.
Willoughby Britton, PhD is a clinical psychologist, neuroscience researcher, and Buddhist practitioner. As Assistant Professor of Psychiatry and Human Behavior at Brown University Medical School, she specializes in research on meditation in education and as treatment for depression and sleep disorders. She was interviewed by Tricycle, a respected magazine of Buddhist thought that has been around since 1990. She was asked: “Is the data better for some applications of meditation than others?”
I have done very careful reviews of the efficacy of meditation in two areas in which there are high levels of popular misconception about how much data we have: sleep and education. The data for sleep, for example, is really not that strong. And the AHRQ article concurs: it judges the level of evidence for meditation’s ability to improve sleep as “insufficient.”
What I found from my study was that meditation made people’s brains more awake. From a very basic brain point of view, what happens in your brain when you fall asleep? The frontal cortex deactivates. Nobody agrees what meditation does to the brain, but across the board, one of the most common findings is that meditation increases blood flow and activity in the prefrontal cortex. So how is that going to improve sleep? It doesn’t make any sense. It is completely incompatible with sleeping if you are doing it right. And we know that people stop sleeping when they go on retreats. That is never reported in scientific publications, even though it is well known among practitioners.
A tale of a study of MBSR and CBT to improve sleep problems in cancer patients thrice told.
The primary report of the study appeared in the prestigious Journal of Clinical Oncology:
Garland, S. N., Carlson, L. E., Stephens, A. J., Antle, M. C., Samuels, C., & Campbell, T. S. (2014). Mindfulness-based stress reduction compared with cognitive behavioral therapy for the treatment of insomnia comorbid with cancer: A randomized, partially blinded, noninferiority trial. Journal of Clinical Oncology, JCO-2012.
The article concluded:
Although MBSR produced a clinically significant change in sleep and psychological outcomes, CBT-I was associated with rapid and durable improvement and remains the best choice for the nonpharmacologic treatment of insomnia.
A conference abstract reporting the study published the same year concluded:
While both CBT-I and MBSR produced significant improvement in sleep and psychological outcomes, a more rapid change occurred in CBT-I.
The principal investigator’s review of her own work published two years later concluded:
These findings indicated that while MBCR was slower to take effect, it could be as effective as the gold-standard treatment for insomnia in cancer survivors over time.
Delving into the details of the study
From the abstract:
This was a randomized, partially blinded, noninferiority trial involving patients with cancer with insomnia recruited from a tertiary cancer center in Calgary, Alberta, Canada, from September 2008 to March 2011. Assessments were conducted at baseline, after the program, and after 3 months of follow-up. The noninferiority margin was 4 points measured by the Insomnia Severity Index. Sleep diaries and actigraphy measured sleep onset latency (SOL), wake after sleep onset (WASO), total sleep time (TST), and sleep efficiency. Secondary outcomes included sleep quality, sleep beliefs, mood, and stress.
Of 327 patients screened, 111 were randomly assigned (CBT-I, n = 47; MBSR, n = 64). MBSR was inferior to CBT-I for improving insomnia severity immediately after the program (P = .35), but MBSR demonstrated noninferiority at follow-up (P = .02). Sleep diary–measured SOL was reduced by 22 minutes in the CBT-I group and by 14 minutes in the MBSR group at follow-up. Similar reductions in WASO were observed for both groups. TST increased by 0.60 hours for CBT-I and 0.75 hours for MBSR. CBT-I improved sleep quality (P < .001) and dysfunctional sleep beliefs (P < .001), whereas both groups experienced reduced stress (P < .001) and mood disturbance (P < .001).
[For more information about a noninferiority trial (NI), see here.]
The objective of non-inferiority trials is to compare a novel treatment to an active treatment with a view of demonstrating that it is not clinically worse with regards to a specified endpoint.
Investigators commit in advance to a margin of difference between the two interventions that, if exceeded, would lead them to conclude that the new treatment is inferior. In an earlier blog post, I noted that NI RCTs have a reputation for methodological flaws and bias:
An NI RCT commits investigators and readers to accepting null results as support for a new treatment because it is no worse than an existing one. Suspicions are immediately raised as to why investigators might want to make that point.
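The logic of a noninferiority comparison can be made concrete with a small sketch. Assuming hypothetical Insomnia Severity Index (ISI) improvement scores (these numbers are illustrative, not the trial’s data), the question is whether the upper confidence bound on the new treatment’s deficit stays below the pre-set margin – here, the trial’s 4-point margin:

```python
# Sketch of a noninferiority check on hypothetical ISI improvement scores.
# The data and function are illustrative assumptions, not the study's analysis.
from statistics import mean, stdev
from math import sqrt

def noninferior(new, reference, margin=4.0, z=1.645):
    """One-sided 95% check: is the new treatment's deficit
    (reference improvement minus new improvement) confidently below the margin?"""
    diff = mean(reference) - mean(new)  # how much worse the new treatment did
    se = sqrt(stdev(reference) ** 2 / len(reference)
              + stdev(new) ** 2 / len(new))
    upper = diff + z * se  # upper confidence bound on the deficit
    return upper < margin

# Hypothetical ISI improvements in points (larger = more improvement).
cbt = [9, 10, 8, 11, 9, 10, 12, 8, 9, 10]
mbsr = [8, 9, 7, 10, 8, 9, 11, 7, 8, 9]
print(noninferior(mbsr, cbt))  # True: the deficit's upper bound is below 4
```

Note that with a generous margin and a small sample, "noninferiority" can be declared even when the new treatment is consistently a bit worse – which is precisely why the choice of margin deserves scrutiny.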
The trial had no control group from which it could be determined whether the benefits of either intervention exceeded what would be obtained with a nonspecific treatment that had no active ingredient beyond positive expectations, support, and attention.
The results of the trial were analyzed both intent-to-treat and per-protocol. The intent-to-treat analyses included all patients who were randomized, regardless of the extent to which they actually attended treatment. The per-protocol analyses included only patients who attended at least 5 sessions.
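For readers unfamiliar with the distinction, a minimal illustration with hypothetical patient records (the names and numbers are invented for demonstration):

```python
# Hypothetical records illustrating the two analysis sets.
patients = [
    {"id": 1, "arm": "MBSR", "sessions_attended": 8},
    {"id": 2, "arm": "MBSR", "sessions_attended": 2},
    {"id": 3, "arm": "CBT-I", "sessions_attended": 7},
    {"id": 4, "arm": "CBT-I", "sessions_attended": 0},
]

# Intent-to-treat: everyone randomized, regardless of attendance.
intent_to_treat = patients
# Per-protocol: only patients attending at least 5 sessions.
per_protocol = [p for p in patients if p["sessions_attended"] >= 5]

print(len(intent_to_treat), len(per_protocol))  # 4 2
```

The gap between the two sets is exactly where attrition hides: the larger the difference, the more the per-protocol results describe a self-selected subgroup rather than the patients who were randomized.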
The description of the analyses is likely to dazzle most readers and impress them that the authors knew what they were doing in applying sophisticated techniques – that is, if readers are unfamiliar with these techniques and the assumptions they make.
For each of the models, the random effect was participant, and the fixed effects were group (MBSR or CBT-I), time, baseline value, and the group-time interaction. Time was also set as a repeated measure. The restricted maximum likelihood estimate method was used to estimate the model parameters and SEs with a compound symmetry covariance structure to account for the correlation between measurements. We used type III fixed effects (F and t) and set the statistical significance of P values at P<.05.
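To demystify that paragraph a little: the model described is a linear mixed model with a per-participant random intercept, which induces exactly the compound-symmetry correlation among a participant’s repeated measurements. A sketch of an analogous analysis in Python’s statsmodels, on simulated data (all variable names and effect sizes here are invented, not the study’s):

```python
# Sketch of the kind of mixed model the authors describe, fitted by REML
# on simulated data. Variables (subject, group, time, baseline, score)
# are illustrative assumptions, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for i in range(40):                        # 40 simulated participants
    group = "MBSR" if i % 2 else "CBT_I"
    baseline = rng.normal(18, 3)           # e.g. baseline insomnia severity
    subj = rng.normal(0, 2)                # random intercept -> compound symmetry
    for t in [0, 1, 2]:                    # 3 assessment points
        score = (baseline - 3 * t - (1 if group == "MBSR" else 0) * t
                 + subj + rng.normal(0, 2))
        rows.append(dict(subject=i, group=group, time=t,
                         baseline=baseline, score=score))
df = pd.DataFrame(rows)

# Fixed effects: group, time, baseline, group x time; random effect: participant.
model = smf.mixedlm("score ~ group * time + baseline", df, groups="subject")
fit = model.fit(reml=True)  # restricted maximum likelihood, as the quote describes
print(fit.params)
```

The point is not that the machinery is wrong – it is standard – but that its output is only as trustworthy as the assumptions about why observations are missing.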
A skeptic would figure out that the authors probably had to contend with a lot of missing data.
This excerpt from the CONSORT flow chart tracks what happened after random assignment to either CBT or MBSR.
Only half of the patients assigned to MBSR actually attended the pre-set minimum number of sessions. Of 64 patients, 22 withdrew, another 2 attended no sessions, and 8 attended fewer than 5. CBT fared considerably better, with only 7 patients either not attending or withdrawing.
At the 5-month follow-up, the situation for MBSR worsened: only 27 patients – a minority of those who had been randomized – were left to provide data for analysis.
The authors adopted their complex analytic strategy to compensate for missing data. The strategy involves basically using all available data to guess what the results would have been for individual patients if their data had been available. Yup, they were inventing data based on a best guess.
These sophisticated techniques are valid only if data remain available for most patients and the loss of patients can be assumed to be random. But in this case, loss was not random: patients assigned to MBSR were less likely to stick around. We are not in a position to know, but there was undoubtedly other nonrandom loss as well.
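A toy simulation (invented numbers, not the trial’s data) shows why nonrandom dropout matters: if the patients doing worst are the ones most likely to leave, the remaining data overstate improvement, and so do any imputations built from them.

```python
# Toy illustration of bias from non-random dropout.
# All numbers are simulated assumptions, not the study's data.
import random

random.seed(1)
# True improvement scores for 1,000 simulated patients (mean ~3 points).
true_improvements = [random.gauss(3, 4) for _ in range(1000)]

# Non-random dropout: patients improving least are likeliest to leave.
observed = [x for x in true_improvements
            if not (x < 1 and random.random() < 0.8)]

true_mean = sum(true_improvements) / len(true_improvements)
observed_mean = sum(observed) / len(observed)
print(round(true_mean, 2), round(observed_mean, 2))  # observed mean is inflated
```

Any model that "fills in" the missing patients from the observed ones inherits this inflation, because the observed patients are systematically unrepresentative.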
Results that depend so heavily on guesstimates from so much missing data are neither reliable nor generalizable.
The study started out small and got smaller because of patient attrition.
The study did not have a nonspecific control group. Yet, judging from the rest of the literature, it is unlikely that superiority of MBSR over a nonspecific treatment could have been demonstrated with such a small sample, and certainly not with the sample left after attrition.
Most psychotherapy research experts would not expect such a small study to be able to detect a difference between two active treatments. So calling this a “noninferiority trial” is a cop-out that serves to hide the low likelihood of finding a difference.
Appreciate what the author is asking of us – that we revise our appraisal of MBSR for insomnia from “weak or no evidence” to “equivalent to the gold-standard treatment” on the basis of this study. We are asked to do this based on what shrank to an underpowered study in which most patients assigned to MBSR weren’t around for follow-up, and on a heavy reliance on tortured, post hoc analyses of secondary outcomes. No, thank you.
To improve the credibility of their claims, promoters of MBSR desperately need to demonstrate that the treatment is as good as or better than alternatives. This study is not a fair demonstration of that. The high rate of nonretention among patients assigned to MBSR should be quite troubling to anyone promoting MBSR for whatever ails you.