Interventions to reduce stress in university students: Anatomy of a bad meta-analysis

Regehr C, Glancy D, Pitts A. Interventions to reduce stress in university students: A review and meta-analysis. Journal of Affective Disorders. 2013 May 15;148(1):1-11.

I saw a link to this meta-analysis on Twitter. I decided to take a look.

The experience added to my sense that many people who tweet, retweet, or “like” tweets about studies have either not read the studies, or lack a basic understanding of research, or both.

What I can redeem from the experience is this commentary, another contribution to screening and quickly dismissing bad meta-analyses.

I have written at length about screening out bad meta-analyses in Hazards of pointing out bad meta-analyses of psychological interventions.

In that blog post, I provide a rationale for complaining about bad meta-analyses from Hilda Bastian. If you click on her name below, you can also access an excellent blog post of hers on the subject.

Psychology has a meta-analysis problem. And that’s contributing to its reproducibility problem. Meta-analyses are wallpapering over many research weaknesses, instead of being used to systematically pinpoint them. – Hilda Bastian

Unfortunately, this meta-analysis is behind a paywall. If you have access through a university library, that poses no problem beyond the inconvenience of logging into your library’s website. If you are motivated, you could request a PDF by emailing one of the authors at Cheryl.regehr@utoronto.ca.

I don’t think you need to go to the trouble of writing the authors to benefit from my brief analysis, particularly because you can see the start of the problems in the abstract, which is accessible here.

And here’s the abstract:

Abstract

Background

Recent research has revealed concerning rates of anxiety and depression among university students. Nevertheless, only a small percentage of these students receive treatment from university health services. Universities are thus challenged with instituting preventative programs that address student stress and reduce resultant anxiety and depression.

Method

A systematic review of the literature and meta-analysis was conducted to examine the effectiveness of interventions aimed at reducing stress in university students. Studies were eligible for inclusion if the assignment of study participants to experimental or control groups was by random allocation or parallel cohort design.

Results

Retrieved studies represented a variety of intervention approaches with students in a broad range of programs and disciplines. Twenty-four studies, involving 1431 students were included in the meta-analysis. Cognitive, behavioral and mindfulness interventions were associated with decreased symptoms of anxiety. Secondary outcomes included lower levels of depression and cortisol.

Limitations

Included studies were limited to those published in peer reviewed journals. These studies over-represent interventions with female students in Western countries. Studies on some types of interventions such as psycho-educational and arts based interventions did not have sufficient data for inclusion in the meta-analysis.

Conclusion

This review provides evidence that cognitive, behavioral, and mindfulness interventions are effective in reducing stress in university students. Universities are encouraged to make such programs widely available to students. In addition however, future work should focus on developing stress reduction programs that attract male students and address their needs.

I immediately saw that this was a bad abstract because it was so uninformative. With so many abstracts of meta-analyses freely available on the web, we need to be given enough information to recognize when we are confronting the abstract of a bad meta-analysis, so that we can move on. I feel strongly that authors have a responsibility to make their abstracts informative. If they don’t in their initial manuscripts, editors and reviewers should insist on improved abstracts as a condition for publication.

This abstract is faulty because it does not give effect sizes to back up its claims about the effectiveness of interventions to reduce stress in university students. It also does not comment in any way on the methodological quality of the 24 studies that were included. Yet, to the unwary reader, it offers the policy recommendation of making stress reduction programs available to students, and perhaps of tailoring such programs so that they attract male students.

The authors of abstracts making such recommendations have a responsibility to give some minimal details of the quality of the evidence behind the recommendation. These authors do not.

When I accessed the article through my university library, I immediately encountered this in the opening of the introduction:

 On September 5, 2012, a Canadian national news magazine ran a cover story entitled “Mental Health Crisis on Campus: Canadian students feel hopeless, depressed, even suicidal” (1). The story highlighted a 2011 survey at University of Alberta in which over 50% of 1600 students reported feeling hopeless and overwhelming anxiety over the past 12 months. The story continued by recounting incidents of suicide across Canadian campuses. The following month, the CBC reported a survey conducted at another Canadian university indicating that 88.8% of the students identified feeling generally overwhelmed, 50.2% stated that they were overwhelmed with anxiety, 66.1% indicated they were very sad, and 34.2% reported feeling depressed (2).

These are startling claims, and they require evidence. Unfortunately, the only evidence provided is citations to secondary news sources.

Authors making such strong claims in a peer-reviewed article have a responsibility to provide appropriate documentation. In this particular case, I don’t believe that such extreme statements even belong in a supposedly scholarly peer-reviewed article.

A section headed Data Analysis seemed to provide encouragement that the authors knew what they were doing.

 A meta-analysis was conducted to pool change in the primary outcome (self-reported anxiety) and secondary outcomes (self-reported depression and salivary cortisol level) from baseline to the post-intervention period using Comprehensive Meta-analysis software, version 2.0. All data were continuous and analyzed by measuring the standard mean difference between the treatment and comparison groups based on the reported means and standard deviations for each group. Standard mean differences (SMD) allowed for comparisons to be made across studies when scales measured the same outcomes using different standardized instruments, such as administering the STAI or the PSS to measure anxiety. Standard mean differences were determined by calculating the Hedges’ g. The Hedges’ g is preferable to Cohen’s d in this instance, as it includes an adjustment for small sample bias. To pool SMDs, inverse variance methods were used to weigh each effect size by the inverse of its variance to obtain an overall estimate of effect size. Standard differences in means (SDMs) point estimates and 95% confidence intervals (CIs) were computed using a random effects model. Heterogeneity between studies was calculated using I². This statistic provides an estimate of the percentage of variability in results across studies that are likely due to treatment effect rather than chance.

Unfortunately, anyone can download a free trial of the Comprehensive Meta-Analysis software (now in version 3.0) and get the manual with it. The software is easy to use, perhaps too easy. One can use it to write a paper without really knowing much about conducting and interpreting a meta-analysis. You could put garbage into it, and the software would not register a protest.

The free manual provides text that could be paraphrased without knowing too much about meta-analysis.
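
None of that machinery is mysterious, and you don’t need commercial software to follow it. Here is a minimal sketch in Python of the quantities the Data Analysis section names: Hedges’ g with its small-sample correction, inverse-variance weights, a random-effects pool, and I². I have used the standard DerSimonian-Laird estimator for the between-study variance; the paper doesn’t say which estimator Comprehensive Meta-Analysis applies, so treat this as illustrative, not a reconstruction of the authors’ analysis.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp              # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)       # correction for small-sample bias
    g = j * d
    # Approximate sampling variance of g
    var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
    return g, var_g

def random_effects_pool(effects, variances):
    """Inverse-variance pooling under a DerSimonian-Laird random-effects model."""
    k = len(effects)
    w = [1 / v for v in variances]
    fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)
    # Cochran's Q and the I^2 heterogeneity statistic
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, effects))
    i2 = 100 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    # DerSimonian-Laird estimate of between-study variance tau^2
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * gi for wi, gi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical study summaries (means, SDs, ns) -- not data from the paper
studies = [hedges_g(22.0, 28.0, 9.0, 10.0, 12, 7),
           hedges_g(30.0, 33.5, 8.0, 8.5, 40, 38),
           hedges_g(18.0, 21.0, 7.0, 7.5, 25, 24)]
pooled, ci, i2 = random_effects_pool([g for g, _ in studies],
                                     [v for _, v in studies])
print(f"pooled g = {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I^2 = {i2:.0f}%")
```

A few dozen lines reproduce the whole pipeline, which is exactly the point: the software does the arithmetic, but it cannot judge whether the studies fed into it deserve to be pooled.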

When I’m evaluating a meta-analysis, I quickly go to the table of studies that were included. In the case of this meta-analysis, I immediately saw a problem in the description of the first study:

[Table 1, excerpt: studies included in the stress-reduction meta-analysis]

The sample size of 12 students assigned to the intervention and 7 to the control group was much too small to be taken seriously. Any effect size would be unreliable and could change drastically with the addition or subtraction of a single study participant. Because the study had been published, it undoubtedly claimed a positive effect, and with a sample this small, that almost certainly means something dodgy had been done. Moreover, if you have only seven participants in the control group and you get significant results, the effect size will be quite large, because it takes a large effect to reach statistical significance with only seven participants in the control group.

Reviewing the rest of the table, I could see that the bulk of the 24 included studies were similarly small, with only a few meeting my usual requirement for being taken seriously: at least 35 participants in the smaller of the intervention and control groups. Having 35 participants gives a researcher only a 50% probability of detecting a moderate-sized effect of the intervention if it is present. If a literature is generating significant moderate-sized effects more than 50% of the time from such small studies, it is seriously flawed.
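
That 50% figure is easy to check with a standard power calculation. Here is a sketch using statsmodels, assuming a two-sided, two-sample t-test at α = .05 and taking “moderate” to mean d = 0.5; change those assumptions and the exact numbers move, but the conclusion doesn’t.

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()

# 35 participants per arm, moderate effect (d = 0.5):
# power comes out around 0.54 -- roughly the 50% figure cited above.
print(power.power(effect_size=0.5, nobs1=35, ratio=1.0, alpha=0.05))

# The first study in Table 1 (12 intervention vs. 7 control):
# power is well under 20%, so a "significant" moderate effect from a study
# this size is more likely a fluke or flexible analysis than a real finding.
print(power.power(effect_size=0.5, nobs1=12, ratio=7 / 12, alpha=0.05))
```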

Nowhere do the authors tell us whether the numbers they give in this table represent participants who were randomized or participants for whom data were available at the completion of the study. Most of the studies do not have a 50:50 ratio of intervention to control participants. Was that deliberate, by design, or was it arrived at through loss of participants?

The gold standard for an RCT is an intention-to-treat analysis, in which all patients who were randomized have data available for follow-up, or some acceptable procedure has been used to estimate their missing data.

It is absolutely important that meta-analyses indicate whether or not the results of the RCTs entered into them came from intention-to-treat analyses.

It is considered a risk of bias for an RCT not to be able to provide an intention-to-treat analysis.
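
To make the distinction concrete, here is a toy simulation (invented numbers, nothing from the paper) in which the intervention does nothing at all, but participants who worsen tend to drop out of the intervention arm. A completers-only analysis then manufactures an apparent benefit; an intention-to-treat analysis that keeps every randomized participant, here using the crude device of carrying baseline scores forward, stays much closer to the true effect of zero.

```python
import random

random.seed(1)

def simulate_arm(n, dropout_if_worse):
    """One trial arm: stress scores at baseline and follow-up (no true effect)."""
    baseline = [random.gauss(50, 10) for _ in range(n)]
    followup = [b + random.gauss(0, 10) for b in baseline]  # pure noise
    # Participants who got worse may drop out before the final assessment
    completed = [not (f > b and random.random() < dropout_if_worse)
                 for b, f in zip(baseline, followup)]
    return baseline, followup, completed

def mean(xs):
    return sum(xs) / len(xs)

b_t, f_t, c_t = simulate_arm(500, dropout_if_worse=0.6)  # selective dropout
b_c, f_c, c_c = simulate_arm(500, dropout_if_worse=0.0)  # control, no dropout

# Completers-only analysis: worsening cases vanished from the intervention
# arm, so the intervention appears to lower stress even though it does nothing.
completers_diff = mean([f for f, c in zip(f_t, c_t) if c]) - mean(f_c)

# Intention-to-treat: every randomized participant is analyzed; dropouts get
# their baseline score carried forward (a crude imputation, for illustration).
itt_followup = [f if c else b for b, f, c in zip(b_t, f_t, c_t)]
itt_diff = mean(itt_followup) - mean(f_c)

print(f"completers-only difference:    {completers_diff:+.2f}")  # spuriously favorable
print(f"intention-to-treat difference: {itt_diff:+.2f}")         # closer to the true 0
```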

It is absolutely required that meta-analyses provide ratings of risk of bias using any of a standard set of procedures. I was reassured when I saw that the authors of the meta-analysis stated: “The assessment of the methodological quality of each study was based on criteria established in the Cochrane Collaboration Handbook.” Yet, search as I might, nowhere could I find further mention of these assessments or of how they had been used in the meta-analysis, if at all. I was left puzzled. Did the authors not do such risk-of-bias assessments, despite having said they had conducted them? Did they for some reason leave them out of the paper? Why didn’t an editor or reviewer catch this discrepancy?

Okay, case closed. I don’t recommend giving serious consideration to a meta-analysis that depends so heavily on small studies. I don’t recommend giving serious consideration to a meta-analysis that does not take risk of bias into account, particularly when there is some concern that the available studies may not be of the best quality. Readers are welcome to complain to me that I have been too harsh in evaluating the study. However, the authors are offering policy recommendations, claiming the authority of a meta-analysis, and they have not made a convincing case that the literature was appropriate or that their analyses were appropriate.

I’m sorry, but it is my position that people publishing papers and making any sort of claims have a responsibility to know what they are doing. If they don’t know, they should be doing something else.

And I don’t know why the editor or reviewers did not catch these serious problems. Journal of Affective Disorders is a peer-reviewed Elsevier journal. Elsevier is a hugely profitable publishing company, one that justifies the high cost of its subscriptions by the assurance it gives of quality peer review. But this is not the first time Elsevier has let us down.