Sweetheart relationship between Triple P Parenting and the journal Prevention Science?

Update June 1, 2014, 8:50 pm US EDT: It has just been called to my attention that until December of last year, Robert McMahon was Editor of Prevention Science. He is a member of the Triple P Parenting International Scientific Advisory Committee. This makes the possible relationship between Triple P and the journal, including undisclosed conflicts of interest being let slip by, all the more troubling.

Not one, but two problematic papers…

Imagine the CEO of a pharmaceutical company writing a spirited defense of the need for drug companies to be involved in evaluations of their products. Drug company involvement is trumpeted as a means of ensuring the quality of the evaluation and the dependability of the results. Independent evaluation is seen as carrying the risk of skepticism or motivation to discredit a product, tainting the results.

Imagine that this occurred in an article that only in passing mentioned the extensively documented effects of conflicts of interest on published reports of the efficacy of drugs.

Finally, this article was published in the journal with an explicit requirement for disclosure of conflicts of interest, but with no disclosure.

But, no, I am not talking about an article by the CEO of a pharmaceutical company. The article was written by promoters of Triple P Parenting (3P), who consistently publish evaluations of their own treatment without conflict of interest statements.

Sanders, M. R., & Kirby, J. N. (2014). Surviving or Thriving: Quality Assurance Mechanisms to Promote Innovation in the Development of Evidence-Based Parenting Interventions. Prevention Science, 1-11.

Going to the website of their university, one can find an intellectual property statement that provides for lavish rewards for those who develop products such as psychological treatments.

 Any Net Proceeds of Commercialisation received by the University pursuant to this Policy will be disbursed as follows:

(a) 1/3 to the Contributors determined in accordance with the relevant Procedures;

(b) 1/3 to the University, which will be retained by a University Commercialisation Company where the Commercialisation of the IP occurs through that University Commercialisation Company; and

(c) 1/3 to the relevant faculty or institute of the University.

3P promoters appear to be in a position to make lots of money from their treatment.

Prevention Science has a person on its editorial board who is actively involved in promoting 3P: Ronald J. Prinz. It is also the journal of the Society for Prevention Research (SPR). At the society’s website, you can see:

The Society has an interest that its members, in their research, teaching, practice and service, shall be alert and attentive to situations that might cause a conflict of interest (or the appearance of a conflict), and to take appropriate action either to prevent such a conflict or to disclose it to all appropriate parties.

Such conflicts of interest may arise out of commitments involving honoraria, consultant relationships, participation in a speakers bureau, stock holdings or options, royalties, ownership of a company or patent, research contracts or grants, and, in some instances, being an official representative of another organization.

Key Guiding Principles

……SPR members should disclose relevant sources of financial support, and pertinent personal or professional interests, to their employers or clients, to the sponsors of their professional work, and in public speeches and written publications.

Conflicts of interest, even substantial financial ones, do not necessarily discredit an evaluation. Undisclosed conflicts of interest do so because they prevent readers from independently evaluating whether they should accept the claims being made.

You might be motivated to write a letter to the editor of Prevention Science complaining about this lapse in its disclosure policy. But don’t waste your time. Prevention Science summarily dismisses letters to the editor as a matter of policy. The journal does not subscribe to the so-called Pottery Barn Rule: journals should provide a means by which readers can call attention to the inevitable lapses in peer review and unreliability in published claims.

What might the promoters of 3P have been up to in writing this article?

The evidence-supported status of 3P has been questioned, starting with a meta-analysis and my commentary that demonstrated that the bulk of randomized trials evaluating 3P consisted of underpowered, methodologically flawed studies conducted by persons with conflicts of interest.

After having failed to suppress publication of this meta-analysis, promoters of 3P mounted a response, including circulating on the web a meta-analysis of their own that they apparently had to withdraw, before eventually publishing it in Clinical Psychology Review.

To focus on the virtues of developer involvement shifts discussions away from broader issues of the well-established taint of financial conflicts of interest. Just as financial conflicts of interest are not limited to the developers of drugs, conflicts of interest can be manifested by those who promote the treatments and benefit from the treatment being declared evidence-supported.

With conflict of interest statements so infrequent in the 3P literature, it is already quite difficult, if not impossible, to establish financial conflicts of interest among those publishing evaluations of the treatment, other than those obviously associated with its development.

This article muddies the waters by claiming to demonstrate that conflict of interest, redefined as developer involvement, does not matter.

A recent systematic review and meta-analysis of Triple P (Kirby and Sanders 2014) included 101 studies, 68 of which were randomized controlled trials of specific programs within the Triple P system (e.g., the Level 4 Group Triple P program), of which 66 were peer-reviewed publications. A range of moderators were examined including developer involvement. Developer involvement was classified into two categories: (a) any developer involvement or (b) no involvement. Seventy papers were categorized as having some level of developer involvement, whereas 31 papers had no developer involvement. Using structural equation modeling, the meta-analysis revealed that both developer led studies and studies with no developer involvement produced significant small to medium effect sizes for a range of child and parent outcomes (Kirby and Sanders 2014). This is the first time a meta-analysis has examined the level of developer involvement as a moderator variable for potential intervention effects. Importantly, both developer led and independent evaluations showed similar positive effects on a range of child and parent outcome measures.

If one goes to the reference list of the Prevention Science article, the Kirby and Sanders (2014) article that is cited does not contain these data. So, readers cannot evaluate these extraordinary claims and reviewers apparently did not check either.

Why did the reviewers at Prevention Science not check out a claim that is the crux of the credibility of the article they accepted? We can only speculate. The article circulated on the web by 3P promoters and then withdrawn included 101 studies. Maybe the reference was to that. And the subsequent Clinical Psychology Review meta-analysis also includes 101 studies. But it has different authors. And, as I have been documenting at PLOS Mind the Brain, it is a horribly conducted and reported meta-analysis, aside from its lack of a statement of conflicts of interest.

One slip-up in the review and publication of a questionable manuscript with undisclosed conflicts of interest does not necessarily indicate a sweetheart deal between the promoters of a treatment and a journal.

But then there is a more egregious article in the same journal:

Prinz, R. J., Sanders, M. R., Shapiro, C. J., Whitaker, D. J., & Lutzker, J. R. (2009). Population-based prevention of child maltreatment: The US Triple P system population trial. Prevention Science, 10(1), 1-12.

The article describes outcomes of the Triple P System Population Trial (TPSPT) in South Carolina, important as one of only five tests of 3P on a population basis and the only one conducted in the United States. It carries considerable weight, has gotten considerable publicity, and has proven decisive in the designation of 3P as evidence-supported.

The study was not pre-registered at a publicly accessible site. However, the details of the study were published in Clinical Psychology Review ahead of the Prevention Science article, though years after data collection had begun. The Prevention Science article cites this earlier paper:

Prinz, R. J., & Sanders, M. R. (2007). Adopting a population-level approach to parenting and family support interventions. Clinical Psychology Review, 27(6), 739-749.


Professor Manuel Eisner

A working paper by Manuel Eisner, Professor of Comparative & Developmental Criminology and Deputy Director of the Institute of Criminology, Cambridge University, points to some serious discrepancies between what was reported in the Prevention Science article and what appeared in the Clinical Psychology Review article. Discrepancies include contradictory descriptions of outcome data collection and analysis and of the designation of the primary outcome. There is a strong suggestion of selective reporting.

Eisner, M. (2014). The South Carolina Triple P System Population Trial to Prevent Child Maltreatment: Seven Reasons to be Sceptical about the Study Results. Violence Research Centre Working Paper. University of Cambridge Institute of Criminology.

A table from the working paper detailing discrepancies is reproduced below.

Table 1. TPSPT study design as reported in Prinz and Sanders (2007) and findings reported in Prinz et al. (2009).

| Criterion | Reported in Prinz and Sanders (2007: 746-748) | Reported in Prinz et al. (2009) |
|---|---|---|
| Randomization | Pair-wise matching | Stratified random assignment |
| Age range | 0-7 | 0-8 |
| Population survey | Annual assessments | A pre- and a post-intervention survey |
| Baseline reference period for official data | 5 years before the intervention | Last year before the intervention |
| Outcomes – survey based | Parent knowledge of Triple P | Reported |
| | Parent involvement in Triple P | [Not reported] |
| | Relationship between demographics and participation | [Not reported] |
| | Parenting practices | [Not reported] |
| | Parent confidence and stress | [Not reported] |
| | Child maladjustment | [Not reported] |
| Outcomes – archival records | Reported CM | (supplementary analysis) |
| | Substantiated physical maltreatment | [not reported] |
| | Substantiated neglect | [not reported] |
| | Substantiated sexual abuse | [not reported] |
| | Substantiated overall CM | Reported |
| | Out-of-home placements | Reported |
| | [not planned] | Child CM injuries |

Eisner raises questions about the basic credibility of the results reported in Prevention Science:

 TPSPT claims large benefits from treatment exposure by about 10% of a universal target population. Yet TPSPT is a short intervention that mainly entails tip sheets, seminars to large audiences, and brief consulting sessions. So the claim is that this modestly intensive intervention for any one family is sufficient to reduce child maltreatment by between 28% (substantiated child abuse) and 44% (out of home placements) at the population level when about 17% of families receive this modestly intensive intervention. While logically not impossible, the claim lacks face validity.
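The face-validity problem can be made concrete with back-of-envelope arithmetic. The sketch below is my own, not Eisner's; it assumes no spillover effects to untreated families and uses the 17% coverage and 28-44% reduction figures quoted above:

```python
# If only a fraction of families receive the intervention and there is no
# spillover, the population-level relative reduction equals
#     coverage * per-family relative reduction.
# Solving for the per-family reduction implied by the claimed results:

coverage = 0.17  # fraction of families exposed, per the passage above

for outcome, pop_reduction in [("substantiated child abuse", 0.28),
                               ("out-of-home placements", 0.44)]:
    implied_per_family = pop_reduction / coverage
    print(f"{outcome}: implied per-family reduction = {implied_per_family:.0%}")

# Both implied per-family reductions exceed 100%, which is arithmetically
# impossible: risk cannot fall by more than 100% among treated families.
```

Even the smaller claimed reduction would require each treated family's risk of maltreatment to fall by well over 100%; the claims can only hold if the intervention somehow also changes outcomes for the large majority of families it never reaches.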

SPR has an explicit, relevant policy:

 SPR members should not misrepresent their procedures, data or findings. They should report their findings fully and should not omit relevant data. They should report study results regardless of whether these results support or contradict expected outcomes. Within reasonable limits, they should attempt to prevent or correct any substantial misuses of their work by others.

Two questionable articles slipping through the peer review process without needed disclosures of conflicts of interest, and a promoter of 3P on the editorial board, do not establish a sweetheart relationship. But it is sufficient evidence for Prevention Science to adopt the Pottery Barn Rule and allow letters to the editor identifying lapses in peer review. And it illustrates the problems in accepting promoters’ evaluations of their own treatments.