William Hollingworth and his colleagues must have been pleased when they were notified that their manuscript had been accepted for publication in the prestigious (impact factor = 18!) Journal of Clinical Oncology. Their study examined whether screening for distress increased cancer patients’ uptake of services and improved their mood. The study also examined a neglected topic: how much did screening cost, and was it cost-effective?
These authors presented their negative findings in a straightforward and transparent fashion: screening didn’t have a significant effect on patient mood. Patients were not particularly interested in specialized psychosocial services. And at $28 per patient screened, screening neither lowered healthcare costs nor proved cost-effective.
This finding has significant implications for clinical and public policy. But the manuscript risked rejection because it violated the strictly enforced confirmation bias of screening’s proponents and their obligatory conclusion that screening is cheap and effective.
Hollingworth and colleagues were surely disappointed to discover that their article was accompanied by a negative editorial commentary. They had not been alerted or given an opportunity to offer a rebuttal. Their manuscript had made it through peer review, only to get whomped by a major proponent of screening, Linda Carlson.
After some faint praise, Carlson tried to neutralize the negative finding:
despite several strengths, major study design limitations may explain this result, temper interpretations, and inform further clinical implementation of screening for distress programs.
And if anyone tries to access Hollingworth’s article through Google Scholar or the Journal of Clinical Oncology website, they run smack into a paywall. Yet they can get through to Carlson’s commentary without obstruction and download a PDF for free. So it is easier to access the trashing of the article than the article itself. Doubly unfair!
Pigs must wear lipstick to win acceptance
Advocates from professional organizations insist that conclusions support screening, and that negative findings be dressed up to appear to support their views, as conditions for getting published. Reflecting these pressures, I have described the sandbagging of a paper I had been invited to submit, with reviewers insisting I not be so critical of the promotion of screening.
Try this experiment: ignore what is said in abstracts of screening studies and instead check the results sections carefully. You will see that there are actually lots of negative studies out there, but they have been spun into positive studies. This can easily be accomplished by authors ignoring results obtained for primary outcomes at pre-specified follow-up periods. They can hedge their bets by assessing outcome with a full battery of measures at multiple timepoints and then choosing the findings that make screening look best. Or they can just ignore their actual results when writing abstracts and discussion sections.
Especially in their abstracts, articles report only the strongest results at the particular time point that makes the study look best. They emphasize unplanned subgroup analyses. Thus, they report that breast cancer patients did particularly well at 6 months, and ignore that this was not true at 3- or 12-month follow-up. Clever authors interested in getting published ignore other groups of cancer patients who did not benefit, even when their actual hypothesis had been that all patients would show an improvement and breast cancer patients had not been singled out ahead of time. With lots of opportunities to lump, split, and selectively report the data, such results can be obtained by chance, not fraud, but they won’t replicate.
When my colleagues and I undertook a systematic review of the screening literature, we were unable to identify a single study demonstrating that screening improved cancer patient outcomes compared to patients having access to the same discussions and services without having to be screened. But there are four other reviews out there, all done by proponents of screening, that gloss over this lack of evidence. The strong confirmatory bias extends to reviews.
Doing wrong by following recommended guidelines.
Hollingworth and colleagues implemented procedures that followed published guidelines for screening. They trained screening staff with audiovisual aids and role-playing. They developed guides to referral sources. They tracked numbers of discussions with distressed patients and referrals. Mirroring clinical realities, many other screening studies involve similar levels of training and resources. Unless cancer centers have special grants or gifts from donors, they probably cannot afford much more than this. And besides, advocates of screening have always emphasized that it is a no- or low-cost procedure to implement.
The invited commentary.
Carlson’s title seems to signal a revision in what implementation of screening requires, one that may demand more than many settings can afford:
Screening Alone Is Not Enough: The Importance of Appropriate Triage, Referral, and Evidence-Based Treatment of Distress and Common Problems
Perhaps these more expensive requirements will prompt a closer look at whether screening actually improves patient outcomes, and at acceptable costs. A refocusing on the evidence of whether screening actually benefits patients is overdue.
The 16 references of Carlson’s invited commentary include eight citations of her own work and that of her close colleague Barry Bultz. She is fending off a negative finding and collecting self-citations too.
Like many such commentaries, Carlson’s creates false authority by selective and inaccurate citation. If you take the trouble to actually look at the work that is cited, you will find that much of it does not present original data, and that the citations are not accurate or relevant, although this is not obvious from the commentary.
For instance, at the outset, Carlson claims “psychosocial interventions tend to pay for themselves, many times over in subsequent cost offsets” and cites two of her own papers with strikingly similar abstracts. Neither of these papers presents original data; instead, both rely on claims Nick Cummings made when he was promoting his efforts to earn millions organizing behavioral health carveout companies. Those claims are now considered as dubious as Cummings’ claims that he and his staff “cured” thousands of gays and lesbians of their sexual orientation.
Carlson seems particularly upset that the efforts of Hollingworth and colleagues resulted in so few referrals to psychologists. She claims that this
represents a substantial departure from evidence-based treatment for distress, a significant failure of screening and triage.
Actually, these rates are quite consistent with other studies, including Carlson’s own. Most cancer patients found to be distressed by screening are not interested in intensive specialty psychosocial or mental health services. Rather, they are more interested in getting informal support and information, and, among specialized services, nutritional guidance and physical therapy. Much of the advocacy for screening has simply assumed that the services cancer patients want are primarily psychosocial or mental health services, and particularly formal psychotherapy. This can lead to misallocating scarce resources.
Our various studies in the Netherlands find that the proportion of cancer patients seeking specialty mental health services after diagnosis is about the same as the proportion who were getting those services beforehand. We find it takes about 28 hours of screening to produce one referral to specialty psychotherapy. Not very efficient.
The big picture.
Invited commentaries represent one form of privileged-access publishing by which articles come to appear in prestigious, high-impact journals with no or only minimal peer review. When they are listed on PubMed or other electronic bibliographic resources, there are typically no indications that commentaries evaded peer review, nor is there usually any indication in the article itself. One has to learn to be skeptical and to look for evidence, like gratuitous, inaccurate citations.
Invited commentaries come about when reviewers indicate a wish to comment on an article that seems likely to be accepted. Most typically, there is a certain cronyism in lavishing praise on articles by colleagues doing similar work. Carlson’s commentary is less common in that it is intended to neutralize the impact of a manuscript that was apparently going to be accepted.
We need to better understand such distortions in the process by which “peer review” controls which papers get published and what they are required to say to get published. Articles published in high impact journals are not necessarily the best papers. They do not necessarily represent an adequate sampling of available data.
The Hollingworth study is only one example of a transparently negative study that made it through the editorial process at a high-impact journal. But it is also an example of a study successfully defying confirmation bias and getting whomped. It remains to be seen whether this study suffers subsequent selective neglect in citations, like some other negative studies in psycho-oncology.
We don’t know how many such studies don’t get through, or how many, in order to get through, had to get a makeover with selective reporting, perhaps at the insistence of reviewers. It is thus impossible to quantify the distorting impact of confirmatory bias on the published literature. But sightings of black swans like this one clearly indicate that not all swans are white. We need to be skeptical about whether published studies represent all of the available evidence.
I recommend skeptical readers look for other commentaries, particularly in the Journal of Clinical Oncology. I have documented that this high-impact journal does not have the best or most accurately reported psychosocial studies of cancer patients. It is no coincidence that many of the flawed studies about which I’ve complained were accompanied by laudatory commentaries. Check and you will find that commentators have often published similarly flawed studies with a positive spin.
What’s a reader to do?
Readers can write letters to the editor, but the Journal of Clinical Oncology has a policy of allowing authors to veto publication of letters critical of their work. Letters to the editor are usually an impotent form of protest anyway. They are seldom read by anyone except the authors being criticized. And when authors do agree to be criticized, they get the last word, often simply ignoring what is said in a critical letter to the editor.
But fortunately, there is now the option of continued post-publication peer review through PubMed Commons. Once you register, you can go to PubMed and leave comments about both the Hollingworth study and the unfairness of the commentary by Carlson. And others can express approval of what you write or add their own comments. Look for mine already there, challenging the unfair editorial commentary and expressing concern for the unfair treatment of the paper by Hollingworth and colleagues. You can come and dispute or agree with what I say.
Journals no longer control the post-publication review process. Linda Carlson can get involved in the discussion at PubMed of Hollingworth’s article in JCO, but she cannot have the last word.
This work is licensed under a Creative Commons license (CC-BY-NC-SA).