Use of scales to assess risk for a suicide attempt wastes valuable clinical resources

A Quick Digest

[Photo: predictive accuracy]

I consider this an important study with some clear takeaway messages. Fortunately, the publication is currently available open access, despite appearing in a usually paywalled journal. A link to the article.

Below, I identify some takeaway messages and the support for them, and I provide some links for those seeking a better understanding of key concepts.

Takeaway messages

Our findings suggest that risk scales on their own have little role in the management of suicidal behaviour.

For example, one of the best performing scales, the Manchester Self-Harm Rule, captured 97 out of every 100 repeat episodes, but incorrectly classified 80 out of every 100 episodes that did not lead to repetition as high risk. Of 100 episodes rated as high risk, only 30 resulted in repetition. The scales performed no better (and in some cases significantly worse) than simply asking clinicians or patients what they thought of the future risk.
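Those figures can be checked with a quick back-of-envelope calculation. The sketch below (in Python, using the rounded proportions quoted above rather than the paper's exact counts) recovers the roughly 3-in-10 positive predictive value via Bayes' rule:

```python
# Back-of-envelope check of the Manchester Self-Harm Rule figures quoted
# above. These are rounded proportions from the text, not the paper's
# exact counts.
sensitivity = 0.97          # 97 of 100 repeat episodes flagged as high risk
false_positive_rate = 0.80  # 80 of 100 non-repeat episodes flagged as high risk
base_rate = 0.30            # 6-month repetition rate in this sample

# Positive predictive value via Bayes' rule:
# P(repeat | flagged) = P(flagged | repeat) * P(repeat) / P(flagged)
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
ppv = sensitivity * base_rate / p_flagged
print(f"PPV: {ppv:.2f}")  # prints PPV: 0.34
```

The result, about 0.34, matches the roughly 30-in-100 repetition rate among high-risk classifications reported in the study.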


At a time of increased service pressures it might even be argued that the use of risk scales to determine patient management actually wastes valuable resources.

I would add:

  • The statement about clinicians is not so much an endorsement of clinical expertise as of the usefulness of clinicians simply and directly asking patients about their risk for attempting suicide.
  • Use of validated cutpoints on standardized measures did not fare well.
  • The study was intended to capture assessment of real patients in a real-world clinical setting: referrals to psychiatric services after self-harm.
  • These patients had an overall rate of subsequent attempts of 30%. The poor performance of standardized scales in this setting is actually better than these instruments would fare in the community, where the rate of subsequent self-harm is lower.
  • The research concerned prediction of a subsequent attempt at self-harm, not death by suicide. Even a sample of 483 patients referred after an attempt is far too small to yield enough subsequent deaths to perform meaningful statistical analyses.
  • So, the poor performance of these assessment tools in predicting attempts in a relatively high-risk population would be even worse in predicting death by suicide, especially in lower-risk community settings.
  • Screening for self-harm with standardized tools is not an evidence-based practice.
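The base-rate point in the bullets above can be made concrete. Holding the Manchester Self-Harm Rule's approximate sensitivity (97%) and specificity (20%) fixed, a short sketch shows how the positive predictive value collapses as the base rate falls. The community base rates below are hypothetical illustrations, not figures from the study:

```python
# Sketch of how positive predictive value falls with the base rate,
# holding a scale's sensitivity and specificity fixed. The 5% and 1%
# base rates are hypothetical, chosen only to illustrate the point.
def ppv(sensitivity, specificity, base_rate):
    """Positive predictive value from Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

for base_rate in (0.30, 0.05, 0.01):
    print(f"base rate {base_rate:.0%}: PPV = {ppv(0.97, 0.20, base_rate):.1%}")
# prints:
# base rate 30%: PPV = 34.2%
# base rate 5%: PPV = 6.0%
# base rate 1%: PPV = 1.2%
```

At a 1% base rate, roughly 99 of every 100 people the scale flags would not go on to a repeat episode, which is why screening low-risk community populations with these tools fares even worse than the clinical setting studied here.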

Excerpts from the Abstract


A multisite prospective cohort study was conducted of adults aged 18 years and over referred to liaison psychiatry services following self-harm. Scale a priori cut-offs were evaluated using diagnostic accuracy statistics. The area under the curve (AUC) was used to determine optimal cut-offs and compare global accuracy.


In total, 483 episodes of self-harm were included in the study. The episode-based 6-month repetition rate was 30% (n = 145). Sensitivity ranged from 1% (95% CI 0–5) for the SAD PERSONS scale, to 97% (95% CI 93–99) for the Manchester Self-Harm Rule. Positive predictive values ranged from 13% (95% CI 2–47) for the Modified SAD PERSONS Scale to 47% (95% CI 41–53) for the clinician assessment of risk.

See below for links to discussions of sensitivity and specificity. These statistics range from 0% to 100%.

The AUC ranged from 0.55 (95% CI 0.50–0.61) for the SAD PERSONS scale to 0.74 (95% CI 0.69–0.79) for the clinician global scale. The remaining scales performed significantly worse than clinician and patient estimates of risk (P<0.001).


Risk scales following self-harm have limited clinical utility and may waste valuable resources. Most scales performed no better than clinician or patient ratings of risk. Some performed considerably worse. Positive predictive values were modest. In line with national guidelines, risk scales should not be used to determine patient management or predict self-harm.

Additional readings for key concepts

Positive and negative predictive value

Sensitivity and specificity

Quick Digests is an experiment in providing succinct digests of current and classic sources. Feedback is welcomed as to whether these quick digests are useful and how they could be made more so.

I blog at a number of different sites including Quick Thoughts, PLOS blog Mind the Brain, and occasionally Science-Based Medicine. To keep up on my writing and speaking engagements and to get advance notice of e-books and web-based courses, please sign up at