Is it raining on the suicide prevention parade to point out that promising interventions are not effective?

A death by suicide is unfortunate but not necessarily readily avoidable.

Suicide prevention programs receive intense discussion on social media, some of it fueled by emotion and some by the undisclosed conflicts of interest of those who benefit financially from such programs, regardless of whether the programs are effective.

We have to resist the urge of “Oh my God, we have to do something, anything… We can’t just do nothing!” If we just do “something,” we often find we have squandered scarce resources, disrupted the receipt of services in times of need, produced unintended consequences, and created the illusion that nothing more needs to be done because we’ve already done something.

Nonetheless, pointing out that some attractive and promising approaches do not work invites a lot of criticism.

I recently posted at the PLOS blog Mind the Brain an analysis of a paper in The Lancet reporting a high-profile European multicountry suicide prevention effort targeting teens.

I was immediately suspicious of the claims in the abstract and of a curious avoidance of the actual numbers of suicidal acts claimed to have been prevented.

I soon realized that I was dealing with one of the most ambitious trials ever of strategies to reduce suicide among teens, but that a key result was being downplayed: none of the three interventions had an effect on self-reported suicidal acts or suicidal ideation. Furthermore, the huge sample of 11,000 students was far too small to demonstrate any effect on actual deaths by suicide.

A take-away message is that the promising program – the Youth Aware of Mental Health Programme (YAM) – being promoted by one of the authors both in the write-up of this study and in a systematic review is ineffective and does not deserve status as a “best practice.”

Another important message, hidden from view, was that screening and referral programs – in this case, the Screening by Professionals programme (ProfScreen) – do not work when implemented on a large scale.

A third message was that a manualized gatekeeper program, Question, Persuade, and Refer (QPR), was not effective either.

An earlier blog post argued:

Use of scales to assess risk for a suicide attempt wastes valuable clinical resources.

I discussed a carefully done study showing that no commonly used instrument for assessing suicidality validly predicted subsequent acts. The results were clear-cut and unambiguous. Yet after I posted my blog, I got queries: “Okay, then which instrument should we use to screen people in the community for risk of suicide?” The only answer I could come up with was: don’t engage in that activity with any expectation that you will reduce the risk of suicide.

Am I raining on the suicide prevention parade, sowing doubt where I should be offering hope, or at least propping up the messages of those trying to spread it? Must we be positive in our reviews?

I recall my former Chair at Groningen admonishing me to stop what I was doing because it risked the department becoming known as a Department of Complaints. I would have thought that a good thing to be, if the reputation were built on exposing interventions that everyone else thought worked but didn’t.

I don’t think that having to be positive is a proper role for someone evaluating evidence. Ultimately, if we adopt that stance, we will be encouraging the waste of resources and who knows what unintended consequences.

I think a quote from Richard Feynman is particularly appropriate:

“We are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress.”

I blog at a number of sites: the PLOS blog Mind the Brain, Quick Thoughts, and, occasionally, Science-Based Medicine. To receive alerts about all my blog posts, just sign up at CoyneoftheRealm.com. You’ll also get advance notice of forthcoming e-books and science writing courses.