About that study evaluating mindfulness that you are reading…

“I’m reading a study evaluating mindfulness with a college student sample.”

“Don’t tell me the name of it, but let’s discuss it. It has positive findings showing the benefits of mindfulness in this population.”

“Yes, it does. How did you know that?”

“I really have not taken a look at many studies evaluating mindfulness with college students, but I know almost all studies evaluating mindfulness with any population are positive.”

“Really? That shows how beneficial practicing mindfulness is, and that everybody should be doing it.”

“It just shows how biased the literature is.

“A study in PLOS One, ‘Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions,’ examined 124 randomized trials of mindfulness-based therapy (MBT) and found that the results of almost 90% were presented as positive in the published reports. Only three trials were clearly presented as negative without making claims that MBT actually had a positive effect in the study.”

“You are backing up my point: practicing mindfulness works wonders.”

“No, I’m getting to my point. The PLOS One study reports:

Of the 124 RCTs reviewed, only 21 (17%) were registered prior to data collection, even though 80 of the eligible RCTs were published recently (since 2010). When we examined trial registries, we identified 21 registrations of MBT trials listed as completed by 2010 and found that 13 (62%) remained unpublished 30 months after completion; of the published trials, all conveyed a positive conclusion.

“This is clear evidence that publication bias is in play.

“The PLOS One study goes on to say:

None of the 21 registrations, however, adequately specified a single primary outcome (or multiple primary outcomes with an appropriate plan for statistical adjustment) and specified the outcome measure, the time of assessment, and the metric (e.g., continuous, dichotomous). When we removed the metric requirement, only 2 (10%) registrations were classified as adequate.

“The study evaluating mindfulness with college students – is it positive for perceived stress or depressive symptoms?”

“It got significant results for perceived stress, but results for depressive symptoms only approached significance.”

“Ha ha, let’s not get into the issue of researchers reporting that their results ‘only approached significance’…or I would make too much fuss about either the positive or the negative findings.

“It tends to be easier to get results for perceived stress, because the measure is more subjective and doesn’t necessarily have any real anchors in experience. Measures of depressive symptoms, on the other hand, refer to more specific symptoms that students may have noticed before they were asked about them with the questionnaire. But most students would have been at the floor – they wouldn’t have been distressed enough entering the trial to be able to report much of a reduction.”

“But the authors explain why they got an effect for stress, but not for depressive symptoms.”

“I’m sure they did. They could probably have explained just as well results obtained for depressive symptoms, but not stress.

“I don’t think they explained why they bothered to administer highly correlated measures, or gave evidence that they expected ahead of time that stress would be significant but depressive symptoms would not. The article you are reading – did it report that the trial was registered?”

“No, it did not mention that, but they did get those results.”

“We can only imagine what other measures they administered. It’s a common practice, especially with college students, to administer a whole bunch of self-report measures in a study and report mainly those that are significant.”

“So you are unwilling to make anything of mindfulness having an effect on stress, but not depressive symptoms in the study?”

“I wrote a paper reviewing studies that used different measures of negative affect, like perceived stress and depressive symptoms. There is usually more differentiation in the names of the measures than in what is actually measured.

“Lots of researchers who started out believing that they had found one measure of negative affect that is strongly independent of other negative emotions end up discouraged. Paul Meehl complained that with hopelessly intercorrelated measures we are only studying the “crud factor.” Other investigators refer to the “big mush.” Try as we might, we are not usually measuring distinctly different things with measures of negative emotion that have different names, especially in samples in which participants don’t have very high negative affect.

“Your authors did not mention the intercorrelation of stress and depressive symptoms, which was undoubtedly high.”

“No, they did not.”

“… I’ll bet the study you are reading did not have an active control/comparison group, like relaxation or internet-delivered cognitive behavioral therapy.”

“How did you know that? You don’t know what study I am reading, and you said you don’t read many studies of mindfulness with college students.”

“The US government’s Agency for Healthcare Research and Quality (AHRQ) contracted with Johns Hopkins University to undertake a comprehensive evaluation of randomized trials of mindfulness-based treatments. They wanted to limit themselves to studies with an active control treatment. The investigators did a systematic search and identified 18,753 citations, but when they examined them, only 3% (47) had an active control group. They concluded that evidence of the effectiveness of mindfulness interventions is largely limited to trials in which it is compared to no treatment, a wait list, or a usually ill-defined treatment as usual (TAU).

In our comparative effectiveness analyses (Figure 1B), we found low evidence of no effect or insufficient evidence that any of the meditation programs were more effective than exercise, progressive muscle relaxation, cognitive-behavioral group therapy, or other specific comparators in changing any outcomes of interest.

“So, they could not find evidence that giving people mindfulness training has any advantage over giving them something else, but it’s better than giving them nothing.”

“Not necessarily. Differences between mindfulness and a no-treatment or waitlist control could simply be a matter of people signing up for a trial of mindfulness with the hope of getting assigned to it and then being left in a control group where they did nothing but fill out questionnaires. Those getting the mindfulness can simply be grateful for having gotten what they wished for, having benefited from the positive expectations that were conveyed to them about what they were receiving, or liking the group interaction.

“It’s considered important to control for such nonspecific factors in clinical trials, because their effects can be sizable, probably about the size of the effects observed in the study that you are reading.”

“If I understand what you’re saying, I don’t understand why these authors bothered to do a study that they didn’t register ahead of time, so they could prove that their results were obtained with the particular measure they were hoping to influence. I don’t understand why they didn’t include an active control group so they could convince skeptics like you.”

“People have lots of reasons for doing studies besides testing a hypothesis in a way that will allow them to be proven wrong. And if they don’t get positive results, they have trouble getting published. And some of their colleagues make them feel foolish because they couldn’t get positive results when everybody else seems to.”

I will soon be offering e-books providing skeptical looks at mindfulness and positive psychology, as well as scientific writing courses on the web, as I have been doing face-to-face for almost a decade.

Sign up at my new website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.