For a while I have been advocating that journals publishing mainly personality and social psychology research become more informed about clinical trials.
Editors and reviewers could better keep bad science out of an already untrustworthy literature if there were clearer and more widely disseminated standards for conducting and reporting research.
In a post at PLOS Mind the Brain, I described how my colleagues and I had to obtain and reanalyze the data to show that Barbara Fredrickson’s study was a null trial with no benefits to participants who were told to meditate. With clearer reporting in the published paper, readers could have readily seen that the study was flawed and did not produce the wondrous results the authors claimed. We would not have had to waste time reanalyzing the data.
More recently, I provided a review of the Amy Cuddy power posing paper published in the same journal. The outlandish claim in the abstract had no basis in what was done in the study or what was found:
That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.
This sentence capped an abstract that is a textbook example of inadequate reporting, yet it launched a seven-figure package of merchandise such as corporate talks and workshops.
Once I delved into the paper itself, the reporting of methods and results continued to be poor. Even so, I was able to identify a grossly underpowered, low-quality study with improperly analyzed and misinterpreted data. The paper should have been rejected outright. Instead, replicationados who suspected a bad study had to go to the trouble of organizing an attempt to replicate an effect size that had no justification for being in the literature. No surprise that they got null findings, except for a weak result on a subjective self-report measure that is highly susceptible to experimenter expectancy and demand characteristics. Another waste of time.
My blog post stimulated lively discussion in social media. Among the interesting responses I got was:
I get your point, but the purpose of the clinical trial evaluation criteria are to protect patients from engaging in interventions for health conditions that may be ineffective or even harmful. As far as I understand it she wasn’t claiming the “power pose” should be used clinically to treat any mental health conditions? Yes, the study was flawed but I don’t know that it is reasonable to hold all laboratory manipulations like this to clinical trial standards, particularly for social psychology studies which are very difficult to fund. Most of these types of studies are done with very limited funding and resources. Once her idea took off and became popular, she had a responsibility to followup on the effect and establish it as reliable, which she did not. But I think we don’t want to discourage early career researchers and students from doing pilot work with the limited resources they have to look at effects like this without fear of being attacked repeatedly over a period of years. There are so many flawed studies out there, why keep focusing on this one individual and this one study?
You raise excellent questions. Clinical trials are simply experiments. Because the results matter for people, standards have gotten clearer and tighter for describing what is done and for the kinds of inferences that can be made.
Applied to Amy Cuddy’s power pose study, the standards readily revealed that strong inferences were made from a very weak experimental design. Apparently reviewers missed this.
The need for pilot and feasibility studies is appreciated in the clinical trials literature. But the expectation is that researchers do not infer effect sizes from studies too small to reliably generate them.
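The point about small studies can be illustrated with a short simulation. This is only a sketch: the true effect size of 0.3 and the sample size of 20 per group are assumed for illustration, not taken from any actual study. It shows why effect sizes estimated from small samples that pass a significance filter are systematically inflated relative to the true effect.

```python
import random
import statistics

random.seed(1)

def cohens_d(a, b):
    """Observed standardized mean difference, using the pooled sample SD."""
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    pooled = (((len(a) - 1) * sa**2 + (len(b) - 1) * sb**2)
              / (len(a) + len(b) - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled

TRUE_D = 0.3   # assumed modest true effect (hypothetical)
N = 20         # per-group n, typical of small lab studies (hypothetical)
T_CRIT = 2.02  # approximate two-sided critical t for df = 38, alpha = .05

all_ds, sig_ds = [], []
for _ in range(5000):
    treat = [random.gauss(TRUE_D, 1) for _ in range(N)]
    ctrl = [random.gauss(0, 1) for _ in range(N)]
    d = cohens_d(treat, ctrl)
    all_ds.append(d)
    # For equal groups, the two-sample t statistic is t = d * sqrt(N / 2)
    if abs(d * (N / 2) ** 0.5) > T_CRIT:
        sig_ds.append(d)

print(f"mean observed d, all studies:         {statistics.mean(all_ds):.2f}")
print(f"mean observed d, 'significant' only:  {statistics.mean(sig_ds):.2f}")
```

Across all simulated studies the average observed effect hovers near the true value, but among the minority that reach p < .05 it is more than double that. A small study that "finds" an effect therefore cannot be trusted as an estimate of how big the effect really is, which is exactly why pilot studies are not supposed to be used to infer effect sizes.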
Instead of doing another study, Amy Cuddy hired a speaking agent and commanded fees of up to $100,000 per appearance. She claimed her research results were very strong and that she had shown that two minutes of a behavioral manipulation can influence mind-body relations.
In this sense, she is a very bad model for early career investigators, rushing to market and profit from her work rather than doing the hard work of making it more rigorous.
Amy Cuddy’s adviser is quite powerful and together they viciously attacked critics who had pointed to the strong interpretations being made from weak findings.
My focus is a bit different than that of some of Cuddy’s critics. She is selling lay audiences, mostly women, bogus ideas about how easy it is to change how they behave and what happens to them. I think it is bad for psychology that people are given the illusion that it is so easy to make change, and bad for psychologists to be associated with profiting from pseudoscientific claims.
Educate yourself, be a better role model, and teach your students and other co-authors well.
Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMC Medicine. 2010 Mar 24;8(1):18.
Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, Schulz KF, CONSORT Group. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLOS Medicine. 2008 Jan 22;5(1):e20.
Thabane L, Hopewell S, Lancaster GA, Bond CM, Coleman CL, Campbell MJ, Eldridge SM. Methods and processes for development of a CONSORT extension for reporting pilot randomized controlled trials. Pilot and Feasibility Studies. 2016 May 20;2(1):25.