Someone hiding behind the name Anon hacked my recent blog post in an attempt to force me to take the post down.
Since the incident, I have learned that the hacker had also been posting irrelevant comments on at least four other people's blogs and had been blocked by them.
This blog post is a response to the hacking. I introduce a policy for my handling of any further rapid stream of comments from one person and provide replies to some of Anon's comments found amidst the 21 or so s/he posted.
The earlier blog post
On February 2, I uploaded a blog post, Offering US$500 to authors of Lancet article… The post challenged the authors of a controversial Lancet article to accept a wager. Their article concerned a clinical trial evaluating behavior therapy for unmedicated patients with schizophrenia spectrum disorder. My wager was that I could show that an effect size of 6.5 (I originally reported it inaccurately as 6.9, but both numbers are far outside expected values) in their abstract did not represent an accurate summary of the outcome of the clinical trial.
I felt the claims made in the article in general were outrageous and irresponsible. Worse, the claims were amplified in direct quotes attributed to the authors in the media.
I sought to get the authors engaged in a discussion with a growing chorus of critics, which they had so far refused. If they did not respond to critics or to my wager, I would seek a retraction from Lancet. The article was just that bad.
I proposed that I would pick a charity to which I would donate the US$500 if I were wrong, and they could pick a bona fide charity to which they would donate if they could not prove me wrong. That way, charities, science, and a public confused by the media reports would all benefit from the wager.
Since then, I have solicited nominations for my charity. I chose an Irish charity offering psychiatric services to the deaf. I look forward to the authors of the Lancet article choosing a bona fide charity to which they would contribute if wrong, or maybe others putting up a similar amount to get the wager going. Think of it: if you believe that I am wrong, you can separate a fool from his money.
After the blog post, I continued to engage social media, joining others in calling for a retraction of the article and apologies to the media, and if not that, release of basic data from the Lancet article so others could independently evaluate it.
As of this writing, a group of us have received an offer from Lancet to write a letter to the editor and BBC has changed the headline of their story from “Talking therapies effective as Drugs” to “Talking therapies moderately effective.” That is progress, but there is still work to be done.
An ironic note
The claim in the abstract of the Lancet article approximated one of the effect sizes claimed in a controversial meta-analysis appearing in JAMA. That meta-analysis was seriously flawed and intended to preserve insurance payments for long-term psychodynamic/psychoanalytic therapy in Germany. My recent blog post at PLOS Mind the Brain discussed the JAMA meta-analysis at length and zeroed in on the effect size as just one indicator of voodoo statistics.
In what now seems ironic in relation to my current concerns with the Lancet CBT study, I had earlier joined with Aaron T. Beck and other colleagues in writing an extended critique of this JAMA article. Psychoanalysts condemned our critique as representing a plot to discredit psychoanalytic research from sources with a conflict of interest. They assumed we were promoters of CBT who were protecting our brand against the suggestion that it was inferior to long-term psychodynamic/psychoanalytic therapy.
Anyone who knows me can recognize the absurdity of that. Indeed, Dr. Beck and I became friends after he made overtures to me following my delivery of one of the most detailed and stringent critiques of the approach ever.
Now I find myself confronting what appeared to be another improbable effect size claimed by persons promoting CBT. Given the agitation on social media in which I am participating, maybe I will face a complaint that I am involved in some sort of plot to discredit CBT.
The stream of comments was soon longer than my blog post. The banner on my blog webpage was confusing to visitors because it seemed to imply that the string of comments placed by the hacker was part of a new blog post by me. The banner first read
13 Thoughts on “Offering US$500 wager to authors of Lancet article: help me pick a suitable charity”
But then the 13 was changed to 18 and then 21. The rising numbers corresponded to the increasing flood of comments posted so quickly that I was unable to insert responses attached to the individual posts. If I tried, my comments just ended up being displayed out of sequence with the hacker's. Furthermore, for some reason I was unable to respond to some of the comments.
The hacker’s post claiming that I was arguing with the study’s statistician (I am now sure that I was not) did not have a reply option.
This influx of comments was not an effort to engage me in any kind of dialogue. It left no room for that. The hacker’s comments moved to demanding that I pay US$500, apologize to the authors of the Lancet article, and take down my blog post.
To boot, I was going out to dinner, and my failure to keep responding might have suggested that I was avoiding a debate.
Not having confronted this kind of thing before, I had to think about what the best policy would be toward someone using comments on my blog to disrupt it. I have decided that I will now limit commenters to two comments in a row without a response from me. If they go to three, they will be blocked.
I welcome advice from other bloggers as to how they handle such issues.
Attacked by a Gish Gallop technique?
Someone indicated on my Facebook wall that maybe I was being subject to a Gish Gallop attack. I had to look that up.
The Gish Gallop, named after creationist Duane Gish, is the debating technique of drowning the opponent in such a torrent of half-truths, lies, and straw-man arguments that the opponent cannot possibly answer every falsehood in real time. The term was coined by Eugenie Scott of the National Center for Science Education. Sam Harris describes the technique as “starting 10 fires in 10 minutes.”
No, the hacker was actually making some intelligent points, but just at a rate that was deliberately disrupting my ability to respond.
Below I am going to respond to a sampling of the hacker’s comments. The hacker actually gave me the benefit of some things to think about in crafting a future post at PLOS Mind the Brain.
Effect size of 6.9
“The term ‘effect size’ is frequently used in the social sciences, particularly in the context of meta-analysis. Effect sizes typically, though not always, refer to versions of the standardized mean difference. It is recommended that the term ‘standardized mean difference’ be used in Cochrane reviews in preference to ‘effect size’ to avoid confusion with the more general medical use of the latter term as a synonym for ‘intervention effect’ or ‘effect estimate’. The particular definition of standardized mean difference used in Cochrane reviews is the effect size known in social science as Hedges’ (adjusted) g.”
This first message was quite reasonable and invited a response from me. Maybe my accepting it allowed the hacker to post a stream of comments, raising the numbers on the misleading banner.
My reply is that I would certainly agree with this definition. But when readers encounter “effect size,” they generally assume what is intended is “standardized mean difference” unless there is some indication to the contrary. A reader encountering a claimed effect size of 6.5 in an abstract is not given much context, except that in an abstract one typically expects a conventional standardized mean difference.
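Since the dispute turns on what a conventional standardized mean difference actually is, here is a minimal sketch of how one is ordinarily computed. The group means, standard deviations, and sample sizes below are made up purely for illustration and are not figures from the Lancet trial.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference: between-group difference in pooled-SD units."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d with the small-sample bias correction (Hedges' adjusted g)."""
    d = cohens_d(mean1, mean2, sd1, sd2, n1, n2)
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Illustrative (made-up) numbers: a 5-point between-group difference on a
# symptom scale with a pooled SD of 10 yields d of about 0.5 -- "medium"
# by Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large).
d = cohens_d(45.0, 50.0, 10.0, 10.0, 35, 35)
print(round(abs(d), 2))  # 0.5
```

Note the scale of the result: a standardized effect size of 6.5 would require a between-group difference six and a half times the pooled standard deviation, which is why the figure in the abstract raised eyebrows.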
After my original blog post, I noted that at another blog site Prof Shitij Kapur had similarly indicated being startled by the figure in the Lancet abstract:
However, I would like to draw attention to wording in the abstract and the editorial that need further clarification. The abstract and editorial state that there is an effect size of 6.52. There are few treatments in medicine that have effect sizes of 1, so 6.5 would be massive indeed! In actual fact the between group effect size is 0.46 (and on the full reading of the paper the authors rightly report this). However, a casual reader may take away a mistaken impression of the effect size as being 6.5 from just reading the abstract. The value 6.52 comes from the statistical modelling they used and is technically the ‘treatment effect’ in a statistical model, and NOT the effect size in the conventional use of that word.
This statement was validating, but I disagree with the 0.46. It was based on a follow-up that, by the end, had lost more patients than were retained. More about that later.
It is important to note that:
- Most people who encounter an abstract in an electronic bibliographic source like PubMed do not actually click to read or download the full article.
- Often people offering authoritative judgments about an article in the media express views that suggest they have not actually read the article.
- Many exaggerations in press coverage can be traced to hype in abstracts.
- So, articles become known as much by what is said in their abstracts as by what is said in the body.
- Readers of the abstract assume what is labeled as the effect size is a number being provided to allow comparisons to other trials. Much of the psychotherapy intervention literature is concerned–one might even say pathologically obsessed–with effect sizes for comparisons between treatments and checking against the false reassurance provided by Jacob Cohen’s arbitrary designations of small, medium, and large.
The next comment was:
For most purposes simple (unstandardized) effect size is more robust and versatile than standardized effect size. Guidelines for deciding what effect size metric to use and how to report it are outlined. Foremost among these are: i) a preference for simple effect size over standardized effect size, and ii) the use of confidence intervals to indicate a plausible range of values the effect might take. Deciding on the appropriate effect size statistic to report always requires careful thought and should be influenced by the goals of the researcher, the context of the research and the potential needs of readers
Well, if you click on the link the hacker provided, you are taken to a manuscript that proposes a reliance on simple, unstandardized effect sizes rather than standardized ones. That is an interesting proposition worthy of consideration, but it is not common practice. Furthermore, if authors currently want to provide a simple effect size, they would presumably label it a mean difference between groups in order to avoid confusion.
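To make the labeling point concrete, here is a toy contrast between the two quantities. The numbers are hypothetical and not taken from the trial: a simple effect size is reported in the units of the outcome scale, a standardized one in SD units, and only clear labeling tells the reader which is which.

```python
# Hypothetical group means on a symptom scale and a pooled SD --
# illustrative values only, not figures from the Lancet article.
mean_cbt, mean_tau = 45.0, 50.0
pooled_sd = 10.0

# Simple (unstandardized) effect size: the raw difference, in scale points.
simple_effect = mean_tau - mean_cbt

# Standardized effect size: the same difference, in pooled-SD units.
standardized_effect = simple_effect / pooled_sd

print(f"mean difference: {simple_effect} scale points")        # 5.0 points
print(f"standardized mean difference: {standardized_effect}")  # 0.5 SD units
```

The same comparison can thus be reported two ways; an unlabeled "effect size" invites readers to assume the standardized one, which is exactly the confusion at issue.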
I pointed out to the hacker:
The Lancet article authors never specified that this was a simple effect size. Furthermore, it is not at all clear how the authors calculated it, particularly because at the end of follow-up less than half of the patients remained in what was already a small sample to start with. Assumptions that loss to follow-up was random no longer apply, and so this is a misleading and even bogus effect size. I will say more about that in future blogs.
The hacker responded to my argument:
But the quotes suggest otherwise, and you know that’s true. You have shot out in ignorance and acted in a highly non-collegiate way. I suppose $500 is a lot of money. perhaps you need to appoint someone else to judge whether you need to cough up? I suggest Simon Wessely. why don’t you tweet him?
I know Simon Wessely and know that he has had death threats over his interpretation of a trial of CBT for chronic fatigue. I do not think he wants to step back into the fray to make judgments about a trial of CBT for unmedicated schizophrenics.
The hacker next posted:
One other thing. Your point seems to be that there exists a sizeable number of people who both understand effect sizes and think it possible to have a standardized effect size of 6.9? I think the two are probably mutually exclusive. If everyone in the CBT group scored zero on the PANSS after therapy then, assuming no change in the TAU group, the effect size would not be any bigger than 5. Keep on banging this drum if you like though James – you’ll be the only one to suffer.
Well, actually, I got into quite an acrimonious debate with the psychoanalytic/psychodynamic community when my colleagues and I suggested that an effect size of 6.9 was prima facie evidence of nonsense. And respected authorities such as the editor of JAMA and the best-selling author-psychiatrist Peter Kramer had obviously uncritically accepted this effect size.
But I thought it was time to exit the discussion.
Postscript: After I repaired my blog site, I announced on Twitter that I would not accept uninterrupted streams of three or more comments. I immediately got a string of three from someone who identified himself as Inspector Brian Cameron. Maybe another coincidence, but a Google check reveals that it is the same name as the Scotland Yard inspector in Gaslight, the classic movie about trying to drive someone insane.
I find parallels between the kinds of reactions that criticism of CBT elicits and the kinds of reactions that criticism of Irving Kirsch elicited a while ago. I think I am fully cognizant of the limits on the safety and efficacy of antidepressants. In some circles I am known for the fuss I make about the low quality of routine care with antidepressants in the community and the casual overprescribing without adequate follow-up. Yet my criticisms of Irving Kirsch’s declaration that antidepressants were no more effective than placebo drew a lot of spamming and insertion of links to strange places in the comment sections of my blog posts.
I should expect strange behavior when I poke at strongly held views from the fringe and from ideologues. One of my first blog posts at PLOS Mind the Brain concerned the antidepressant wars. Maybe when the dust settles on the CBT controversy, I can write something on the war not only to enthrone CBT over other therapies, but also to assert its superiority to medication.
Update (Friday, February 14, 2014): I realize that there has been some controversy on Twitter concerning what some consider to be my unusual offer to engage the authors of the Lancet article with a wager. Some even consider it uncollegial or unprofessional.
I did not invent the idea. Rather, I was inspired by one of the authors of the Lancet article having previously used the approach to engage authors of a meta-analysis in a discussion of the effect sizes they claimed. As I understand it, the author of the Lancet article lost 50 pounds. So, not only is there precedent, but it is disingenuous of the authors of the Lancet article to challenge others with a wager and refuse to accept one themselves. Maybe they are discouraged by their loss of the last wager.