My peer review of a PACE investigators’ article that the authors refused to heed

UPDATED 7/18/2017: OK, Michael Sharpe, I should not make fun of a serious matter. Being an investigator on the PACE trial has brought you a lot of ridicule and cruel jokes.

I get it that 400 peer-reviewed publications don’t qualify me as a reviewer of your paper; I am just not seasoned enough. But could you maybe show me what you look for in a reviewer worthy of evaluating your manuscript?


“I am sure you could find many reviewers who are more qualified and who would do this. Maybe, you should advertise in social media.”

Chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME for short) is one of these.  The particular issue is the role of psychiatric or psychological approaches in the treatment of such patients.  Protest against this form of enquiry has been present for decades.  However, the increasing use of social media and blogs have co-ordinated and expanded the protest to an international one.  

-Professor Michael Sharpe

Yes, Professor Sharpe, Viva los blogs! Viva los internationales! Your group has controlled peer review for too long, and you are losing your grip.

Authors do themselves a disservice by refusing to make changes suggested by reviewers, even when they have the power to do so.

I offered a tough-minded review of Peter White, Trudie Chalder, and Michael Sharpe’s response to the editorial by Dr Geraghty. The version that recently appeared in the Journal of Health Psychology shows almost no responsiveness to my critique. The authors offered only a partial correction of their misrepresentation of CONSORT as a guideline for conducting a clinical trial, rather than a checklist for reporting.

I don’t think that there was any ambiguity as to my identity, or that I would be less than enthusiastic about what seemed to be the authors’ hastily written response to Keith Geraghty.

Presumably Editor David Marks was of the same opinion as me about this general principle: reviewers who have publicly disagreed with authors can often nonetheless offer valuable feedback, especially when editors interpret their reviews with the reviewer’s pre-existing opinion, and the reviewer’s expertise, in mind.

I present my review below. I invite comparison to the published paper. I also invite the authors to share their bullying response, in which they threatened to get COPE involved: a threat as empty and silly as those we have had to become accustomed to from Donald Trump. Make my day, Peter, Trudie, and Michael; proceed with a complaint to the Committee on Publication Ethics (COPE).

The authors deserve an opportunity to respond to criticism of their work. However, this should not become an occasion for repeating themselves to the point of self-plagiarism, invoking dubious sources, or ignoring past criticisms.

I think there is a broader issue here: the damage their trial, and the way they have conducted themselves, have done to the trustworthiness of psychology, and of health psychology interventions in particular. They flagrantly disregard basic standards for conducting and interpreting a randomized clinical trial, switch outcomes in a dramatic demonstration of p-hacking, attempt to block publication of criticism, and refuse to share their data, even when they have published under a requirement that the data be made available. Particularly in this journal, it is important that these matters not be ignored. Publishing the authors’ comment in its present form would invite associating the journal with an endorsement of untrustworthy psychological science.

For instance, the authors make a spirited defense of their right to switch outcomes. But they don’t address the good reasons why so many psychologists, and for that matter anyone concerned with the integrity of clinical trials, are fighting against such p-hacking. They are essentially arguing that they should be granted an exception, without acknowledging the important reasons why such exceptions are typically not granted. Should they be given a free pass to ignore efforts to reform both psychology and clinical trials?

The authors’ opening point about adherence to CONSORT displays an embarrassing ignorance of CONSORT. Surely they jest. The checklist is concerned not with the adequacy with which a trial is conducted, but with the adequacy of its reporting. A randomized trial can be abysmal in its conduct and yet score close to perfect on the CONSORT checklist, if its flaws are transparently reported. The authors have made their point in numerous places and have been corrected. That they persist in making it reflects on the seriousness with which they approached responding to the paper that criticizes them.

The authors point to patient involvement on the Trial Steering Committee and the Data Monitoring and Ethics Committee. Anyone knowledgeable about international standards for these kinds of committees would find it astonishing that the authors/investigators themselves served on them, particularly the Data Monitoring and Ethics Committee. Having been involved in numerous such committees, I think their presence would create the perception of compromising the requisite independent judgment of the committee. Patient advocates could legitimately question whether the patients on these committees were representative or free from coercion by the investigators.

The short time during which one of the key papers was under review at The Lancet raises serious questions about the adequacy of its peer review. Moreover, the refusal by the editor of Psychological Medicine to consider a letter to the editor based on re-analyses using their originally selected primary outcomes raises issues about the integrity of peer review, both pre- and post-publication.

It’s extraordinary that 16 papers would come out of a single psychotherapy trial. The authors seem to point to that as an accomplishment, but for others concerned with broader issues, it raises questions about duplicate publication and the integrity of the peer review under which those 16 papers appeared.

I of course assume the Journal of Health Psychology is intended to reach an international audience and to be responsive to international standards. I don’t think the authors should be allowed to ignore the US committees that, operating under orderly and transparent rules, rejected their diagnostic criteria and their assertions about the effectiveness of the treatments.

Their citing of Action for ME (2011) is inappropriate for a number of reasons. It is an unrepresentative survey that was not subject to peer review. The claim that patients believe these treatments lead to improvement is contradicted by the extraordinary petitions signed by thousands.

Even setting aside whether citing Action for ME (2011) is appropriate in this context, the authors lump response categories in a self-serving way and inaccurately portray the survey. Given that Graded Exercise Therapy (GET) was also assessed, the figures for it should also be given: 31% had received GET, and 48% thought that it should be made available.

These are the kinds of issues that would be missed by a casual reader, even a very intelligent and methodologically sophisticated one who is simply not familiar with this literature, or with the authors as partisans for a particular perspective on it that is not shared by all.

Advocates for improving the trustworthiness of the psychological literature should be particularly offended by the distorted view offered in the authors’ point (3). They are dodging the very important issue not only of investigator allegiance, but of investigator conflict of interest. Raising this issue about other interventions in other contexts has led to dozens of corrections and errata. (I think it’s important that if this reply is published in any form, it be accompanied by an editor-approved declaration of conflicts of interest.)

I’m quite familiar with the authors’ 2015 Lancet Psychiatry paper, but it is unrecognizable in the way it is described in this comment. Numerous reasons why the authors cannot interpret the follow-up data in the way they do here and elsewhere have been presented in letters to the editor, which the authors ignore. This is not the place to elaborate, but basically the authors abandoned their protocol, and a substantial portion of the patients available at follow-up were no longer receiving the treatment to which they had been assigned.

The authors substantially misrepresent the Freedom of Information Act requests made of them and their response to those requests. They exaggerate the number of requests by counting any request involving multiple variables multiple times. Furthermore, they misrepresent their responsiveness: they have released their data only when they were involved as authors and had control over subsequent publications. They also misrepresent the multiple times they have invoked the excuse that the people requesting the data were vexatious.

I find it odd that they retreat to a blog post by Simon Wessely as a defense against their many methodological problems. The blog post was not peer reviewed and received a huge amount of substantive criticism. At best, Simon Wessely invokes his authority in place of evidence. Surely the authors can do better than an authority-based argument. In my opinion, they should not embarrass themselves by bringing the blog post up here. If nothing else, they should respect the journal as a more formal forum.

Their excuse that they cannot release their data because the consent forms do not allow it was argued in proceedings that cost them over 250,000 pounds. The final decision of the lower tribunal soundly rejected this excuse after reviewing it in explicit detail, including direct testimony from the author group. Here, as elsewhere in their reply, they are pleading for an exceptionalism whose basis I cannot understand.

It is debatable whether improvement rates of 20% and 21%, versus 10% for the SMC-alone group, justify a claim that the therapies “moderately improve” outcomes. But the authors do not expose readers to this issue; they just gloss over it.

I could continue with further serious substantive methodological, statistical, and interpretive issues. However, I think I have sufficiently established that the authors have not made effective use of their opportunity to reply to the editorial. Any effort to continue to exercise that option would have to come with a thoroughly revised manuscript requiring another round of peer review. What we see in the present version, however, is a thorough rejection of international standards, as well as of the principled reasons behind efforts to improve the conduct and reporting of psychosocial interventions in clinical and health psychology.

There is another possibility available to the journal, however. Simply publish the authors’ response as is, but allow reviewers to respond in print, pointing out that the authors have repeated themselves, citing instances of this, and that the authors resisted encouragement to revise the manuscript from its present form. Rather than leaving all of this behind the curtain, early-career investigators could get an interesting look into the process of challenging bad science and the efforts to resist those challenges.