We started out with what we thought was a solid rationale for how we should recruit cancer patients for a clinical trial. We recognized that the patients who were recruited needed to be sufficiently distressed to register an effect for the problem-solving therapy (PST) we were evaluating.
So we invested in efforts to systematically screen patients for distress and try to recruit those who were distressed.
But we ended up with an article warning others not to try what we did. And the trial was never completed. You can find an abstract here or drop me an email at jcoynester AT gmail.com with problem-solving therapy in the subject line and I will send you a copy.
Studies of psychotherapy for cancer patients generally get weak effects and we thought this was mainly due to researchers not having recruited patients who were distressed enough to benefit from the intervention.
If a researcher just puts out a call to cancer patients for a study offering psychotherapy, the researcher often ends up recruiting a sample that is not particularly distressed.
Contrary to expectations, most cancer patients are not suffering clinically significant psychological distress. They are drawn to psychotherapy studies not because they are excessively distressed, but often because they want some support or to learn a set of tools to manage what they think is ahead.
That makes sense, but it poses problems if the researchers’ goal is to demonstrate that the intervention reduces distress. Researchers may end up with the intervention looking ineffective when the story would have been quite different had it been tested in an appropriate sample.
With Steve Lepore, I had found in an earlier review of reviews that psychotherapy for cancer patients showed weak and often no effects because of the way in which patients had been recruited. In a later systematic review with another group of colleagues, I concluded that psychological interventions had at least modest effects when tested in appropriate populations.
Meanwhile, Dutch cancer centers were getting ready to implement routine screening for distress, having adopted a set of guidelines recommending it. Because screening was not yet in place, we thought we could implement it in select settings with all incoming patients and offer participation in the randomized trial of problem-solving therapy, among other possible services.
We thought PST would be appealing because it was quite practical, there was evidence that it had some effectiveness for distressed or depressed cancer patients, and the specialized treatment was not otherwise available. The study offered it for free and without a long waiting list, both features we thought were attractive.
What we hoped to accomplish
We aimed for at least 50 patients per group.
That sample size was based on a hefty anticipated effect size, though one actually lower than what the original authors claimed for their study.
Given possible drop-out and non-response, a sample of 60 patients in each group (120 in total) was planned.
What we did…
We implemented screening at three medical settings for patients who had completed cancer treatment, and again two months later, using the Hopkins Symptom Checklist-25 plus one question about need for services. We trained staff in the screening procedure and in how to promote our study.
Patients who scored in the distressed range and who indicated wanting services (“Would you like to talk to a care provider about your situation?”) were interviewed.
During the screening and assessment of need for services, patients were not yet informed about our study. Eligible patients were offered the possibility to participate during the interview.
Consenting patients were randomized to PST or a waiting list.
If patients indicated they wanted to talk about a problem, they got a chance to have a discussion with a nurse even if they were not distressed. The Dutch guidelines for screening for distress differ from those in many countries, where patients may not get to talk to anyone if they don’t score high enough in distress.
What we found
In the first round, 366 of 970 patients (37%) scored above the cutoff for clinically significant distress.
In the second round, 208 of 689 screened patients (30%) scored above it.
Adding together the two screenings, 423 patients reported distress, of whom 215 indicated a need for services.
Only 36 (4% of 970) patients consented to trial participation.
We calculated that 27 patients needed to be screened to recruit a single patient, with 17 hours of time required for each patient recruited.
Why didn’t distressed patients want services or participate in our trial?
Of the 215 distressed patients who had indicated a need for services, 41% (n=87) said at interview that they had no need for psychosocial services after all, mainly because they felt better or thought that their problems would disappear naturally.
Another 17% (n= 36) reported that they already received psychosocial services.
35% (n=74) reported an unmet need for psychosocial services. Of these patients, 27 declined participation because they preferred services that were of a different type, nearer home, or less time-consuming. Another 7 patients were ineligible.
Finally, 36 patients were eligible and willing to be randomized, representing 17% of 215 distressed patients with a need for services.
This represents 8% of all 423 distressed patients, and 4% of 970 screened patients.
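The funnel above can be checked with a few lines of arithmetic (a sketch; the counts are simply those reported in this post):

```python
# Recruitment funnel, using the counts reported above.
distressed = 423     # patients reporting distress across both screening rounds
need = 215           # distressed patients indicating a need for services
screened = 970       # patients screened in the first round
randomized = 36      # patients who consented and were randomized

for label, denom in [("distressed patients with a need", need),
                     ("all distressed patients", distressed),
                     ("all screened patients", screened)]:
    # 16.7%, 8.5%, and 3.7% -- i.e. roughly the 17%, 8%, and 4% quoted above
    print(f"{randomized}/{denom} = {100 * randomized / denom:.1f}% of {label}")
```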
What we did next
We stopped recruiting patients for the trial. At the rate we were going, we would have needed to screen 3240 patients.
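That projection follows directly from the screening yield (a sketch, assuming the rate of 27 screens per recruit quoted earlier is applied to the planned sample of 120):

```python
import math

screened, recruited = 970, 36
target = 120  # planned sample: 60 per arm

# 970 / 36 is about 26.9, i.e. 27 patients screened per patient recruited
screens_per_recruit = math.ceil(screened / recruited)
print(screens_per_recruit * target)  # 3240 patients to screen for a full sample
```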
We had truly hoped to be able to conduct the trial, analyze the results, and report that problem-solving therapy was effective in reducing distress when administered to an appropriately selected sample of patients who were indeed distressed.
Instead, we wrote up and published our paper, which should serve as a warning to others who might naïvely assume that most cancer patients are distressed and that those who are distressed want psychological services to improve their well-being.
You can certainly find a lot of declarations in the published literature that support this expectation. But the existing literature can prove misleading.
I’ve had lots of critical things to say about the journal Psycho-Oncology (1,2) and will continue to do so. But in this instance, I appreciate that they were willing to publish a study that not only did not get positive results but did not even reach its anticipated sample size. It’s just as important to publish such studies as those that attain their expected accrual of patients and obtain positive results.
The effect size that we anticipated for problem-solving therapy was based on a study published in a peer-reviewed journal. It seemed exceptionally high.
You generally can’t expect the results of a replication study to be as strong as those of an original study that includes the developer of a therapy. This is called the investigator allegiance effect. I called it the “voltage drop” that you should expect when you go from studies conducted by proponents of a therapy to studies conducted by investigators who are hopeful, but open to weaker results.
But there was something very unfair going on with this particular original study. In the past couple of years, too late to warn our PhD student, at least four meta-analyses (1,2,3,4) have tried to include it. In each instance, it was such an outlier that it was appropriately marked for exclusion. The effect claimed for problem-solving therapy was just too good to be integrated with findings from other studies.
I’m left annoyed that our PhD student was left with unrealistic expectations from the published literature about what problem-solving therapy would accomplish. But she also had unrealistic expectations about how to recruit an appropriately distressed sample for a clinical trial of a psychological intervention. Hopefully our paper will serve as a warning to other investigators, including students trying to get their dissertations completed in a reasonable length of time.
But upon reflection on this whole experience, I think there is something more sinister going on in studies of psychological interventions for cancer patients. First, there is a persistent assumption that all cancer patients are distressed and therefore can benefit from psychological intervention. This leads to continued studies in which patients are recruited with too low an average level of distress to register a benefit from the intervention in fairly conducted and reported statistical analyses.
Second, there is an unrealistic assumption that most patients who are distressed will want a psychological intervention. Many are aware that their distress does not represent a mental health problem and will not consider an intervention that assumes it does. Consequently, unrealistic expectations are set for researchers’ ability to recruit sufficient samples. The net result is a literature cluttered with psychological studies that fail to accrue sufficient numbers of patients and are therefore underpowered and otherwise compromised in their designs.
Third, there is a gross confirmatory bias concerning the efficacy of interventions, amply demonstrated by the problem-solving therapy study whose results cannot be replicated.
So, we have researchers undertaking studies with samples that are unlikely to register an effect of interventions. We have studies with sample sizes too small to demonstrate an effect even if it were there. And then we have publication practices that require that studies report positive findings, even if the reports depend on biased analysis and reporting of results. And we have editors and reviewers all too eager to overlook the tricks that are required to produce positive findings. As Steve Lepore and I argued in our review of reviews, the literature concerning psychological interventions for distress among cancer patients is even worse than it first looks.