In Mind the Brain, we have been showing that critical appraisal tools, such as risk of bias assessment for studies evaluating interventions, together with vigilance for signs of confirmatory bias, p-hacking, and significance chasing, are crucial in interpreting often untrustworthy scientific claims. Yet these alone are not enough.
We have also seen the need to pay attention to the institutional context: how journals decide what is publishable, and how universities require that professors prove their worth by publishing lots of papers and dictate where they should publish.
We need to look at the incentives for individual researchers. Do they get rewarded for telling it like it is, publishing the fairest interpretation of all their studies, or for claiming breakthrough, newsworthy findings even when the data do not support them? We need to consider what is suppressed or radically distorted by these powerful filtering processes. The alternative is to place our faith in the fairness and thoroughness of the peer review process: it must be good science because it got through peer review.
We can’t understand what passes for science in positive psychology unless we grasp the larger context of the positive psychology community, the multimillion-dollar industry associated with positive psychology, and the incentives that the community and its industry offer to those claiming to provide the science of positive psychology.
Shaping what passes for science are the needs of thousands of positive psychology coaches competitively marketing their services. These coaches are themselves a market for positive psychology “science,” and they promote their “science-based” products and services to individual clients and corporations. At both levels, it becomes important to claim to be more sciencey than competitors who do not share the positive psychology brand.
A recent interview with a designated “positive psychology expert,” Lisa Sansom, provides some fascinating insights into the sciencey branding of positive psychology coaching. Positive Psychology Coaching: 12 Urgent Questions Answered is available at PositivePsychologyProgram: Your One-Stop Positive Psychology Resource. At the outset, the interview promises to answer the questions:
What is positive psychology coaching? How does it differ from regular coaching? When can I call myself a positive psychologist?
And more. In this blog post, I’m going to probe this interview to understand the distinctiveness of the positive psychology brand of coaching and its implications for what passes for science and evidence in positive psychology.
We will encounter some tensions. Calling oneself a coach does not require any background in psychology or research methodology. Yet coaches claim to interpret and apply scientific findings, and promise that this makes their brand better than the rest.
If coaches don’t have a background in psychology and the critical skills to interpret new findings, how are they going to do this? They depend on the eminence of those whom they consider scientists, not the actual evidence their research provides. Researchers may become gurus to an audience that cannot appreciate either whether authoritative statements by the researchers are faithful to their actual findings or whether any evidence is actually relevant to the pronouncements being made.
What a temptation! An audience that cannot tell the difference between reasonable and unreasonable interpretations of the evidence, but will pay more for interpretations that help them sell more of their product and services.
Being an authoritative source has rich rewards in terms of opportunities for lucrative trainings, corporate talks, and direct-to-consumer marketing of their “science.” But success in this market rewards claims even stronger than the spin and confirmatory bias required for publication would produce.
Positive psychology research comes out of social and personality psychology, fields that already have rampant problems with hype, hokum, and unreproducible findings. Do the temptations of the positive psychology market increase pressures on psychologists doing relevant research to produce simplistic but seemingly unambiguous answers? Think of having to match the reporting of findings to the wonder, drama, and magic of advertisements for positive psychology products.
Positive psychology articles rarely if ever have declarations of conflict of interest. Yet we know that investigators’ financial stakes in obtaining particular outcomes lead to exaggerated and simply false claims. Do investigators seeking to market claims in positive psychology further contaminate the already troubled areas of personality and social psychology with undisclosed conflicts of interest? In other areas of social science, there is growing appreciation of the need for routine declarations of conflicts of interest. Some areas have seen dozens of errata and correction notices attached to articles that previously lacked such declarations.
The interview quotes a chapter by Carol Kauffman, Ilona Boniwell, and Jordan Silberman for a definition of positive psychology coaching:
“Positive Psychology Coaching (PPC) is a scientifically-rooted approach to helping clients increase well-being, enhance and apply strengths, improve performance, and achieve valued goals. At the core of PPC is a belief in the power of science to elucidate the best.”
The interview keeps emphasizing that being rooted in science is what distinguishes positive psychology coaching from its competitors.
So how does it differ from regular coaching?
On the surface, it might not look or feel much different to a client. However, what is different is that the PP coach continues his or her life-long learning in the field of positive psychology by staying engaged with the research, the literature, the researchers and other PP professionals.
The PP coach also adjusts his or her coaching techniques, methodologies, etc, accordingly when new findings are discovered. “Regular” coaches may not be as tied to the empirical evidence and research findings, and so their techniques and methodologies may change only as a function of their own experiences, or attending conferences where they learn from other coaches’ anecdotal experiences, or they may not change substantially at all.
Perhaps the one thing that is different, as I alluded to above, is that the PP coach also believes in staying close to the science and adjusting his or her approach (etc) accordingly. Coaches that are getting their PP from mass media books only are not getting the full richness and subtleties that are inherent in positive psychology research.
Yet no background in psychology is required to do this:
Overall, to be an effective PP coach or practitioner, one does not need a strong background in traditional psychology and one does not need to be a certified, qualified psychologist.
Even without a coach having a background in psychology,
the benefits to working with a PP coach who is well-trained and qualified are potentially that you will be drawing on a valid body of research (as opposed to just intuition and that individual’s personal coaching experience) and that your coach will know the why and wherefore of the practices, rather than just guessing that things might work for you.
Surfing around the PositivePsychologyProgram website, I encountered the free resource 27 of Positive Psychology’s Most Fascinating Facts, which advertised:
To the point and easy to read (37 pages)
Written by academics, 100% science-based
More free PDF’s, Downloads, Videos…
Of course, I clicked, and opened to
Fascinating Fact #4: Positive psychology interventions have the power to reduce depressive symptoms.
Sin and Lyubomirsky’s meta-analysis is the single source. It is described as revealing
positive psychology really does increase wellbeing and sooth depression. Furthermore, the status of depression, the age of the participants and the intervention all had an impact on the effectiveness of the interventions. Because of this, clinicians are strongly encouraged to begin incorporating positive psychology techniques into their work.
You can find the specifics of my evaluation of Sin and Lyubomirsky here. I used the same standards I would apply to any other meta-analysis and found it to be substandard work:
Sin and Lyubomirsky provide a biased and seriously flawed assessment of positive psychology interventions. Uncritical citation of this paper suggests that subsequent authors are naïve, careless, or bent on presenting a positive evaluation of positive psychology interventions in defiance of the available evidence.
But on to
Fascinating Fact #6: The principles and practice of positive psychology are relevant to brain injury rehabilitation.
Positive Psychology actually has the ability to foster posttraumatic growth, meaning it can make injury sufferers over-all happier (even more so than they were before). Positive psychology allows individuals to re-assess what is important in life, live more in the moment, identify what they are grateful for and to develop personal and intrapersonal goals for recovery. All this makes individuals with brain injuries more appreciative of all aspects of life and allows them to return to their social and physical lives faster.
These are patently ridiculous claims. They leave me thinking that we should all put in our advance directives that if we ever suffer traumatic brain injury, we must be protected from positive psychologists and coaches trying to help us to grow from the experience. And just what the hell do these coaches think they are doing in caring for persons with traumatic brain injury?
In the context of a great debate about positive psychology in cancer care, Howard Tennen and I concluded:
We are at a loss to explain why positive psychology investigators continue to endorse the flawed conceptualization and measurement of personal growth following adversity. Despite [Chris] Peterson’s warning that the credibility of positive psychology’s claim to science demands close attention to the evidence, post-traumatic growth—a construct that has now generated hundreds of articles—continues to be studied with flawed methods and a disregard for the evidence generated by psychological science.
More recently, Patricia Frazier, Howard Tennen, and I published a commentary on Jayawickreme and Blackie’s updated Posttraumatic Growth as Positive Personality Change: Evidence, Controversies and Future Directions. We concluded that a lot of research had accumulated but that it did not change our skeptical assessment. We suggested that less, but better, research was needed.
Anyone who assumes that psychological science will produce a set of 27 fascinating proven facts ready for application in interventions seriously misunderstands both science and psychological interventions.
Just look at any other area of psychological interventions. Research does not produce fascinating facts, but tentative findings, graded in terms of strength of evidence. That evidence is likely to be limited in quality and quantity and will probably have to be modified with new findings.
Taking a larger overview, we can expect that psychological interventions that are credible and structured will have modest differences among themselves and modest advantages over interventions that are simply supportive and delivered with positive expectations. And psychological interventions are most reliably effective when they are delivered to persons who are sufficiently distressed to register benefit.
The large literature concerning psychological interventions will be very disappointing to anyone seeking ways to produce dramatic change with simple interventions. Anyone or anything that guarantees this should be treated with great skepticism.
Look at the personality and social psychology research from which the positive psychology community draws. Findings are not robust or durable. Newsworthy dramatic breakthroughs typically prove to be false positives or simply nonsense. The shelf life of spectacular claims is increasingly shortened by critics waiting to expose the tricks by which such magic was produced.
The positive psychology community may be collectively engaging in wishful thinking, but it attracts and richly rewards those who promise to fulfill its great hunger and pressing marketing need for sciencey findings. And few in the community will recognize the difference in what they get.
If the positive psychology community is serious about making a credible claim for the distinctiveness of their approach, I suggest that everybody drop the vague references to “science” and substitute “evidence-based.”
The “evidence-based” brand is subject to lots of abuse, but the label at least invites the application of some well-specified principles for deciding the extent to which claims are indeed evidence-based, and grading of the evidence by noncontroversial, established criteria. And to keep a grounding in being evidence-based, interventions need to adhere to the procedures that were validated. This is not a matter of jumping from a correlational study with college students to claims of dramatic effects achieved in everyday life, as so much of the positive psychology literature does. It is a matter of being faithful, of having fidelity to the manualized procedures of the original study.
Or is all of this analysis for naught because the claims of positive psychology being more sciencey than the rest are just vapid advertising slogans, not to be taken seriously? Some researchers notably pitch their work to this waiting audience that lacks the critical skills to evaluate it. Should we treat their scholarship as less serious, or should we scrutinize it all the more for bias because of their undeclared conflicts of interest?
DISCLAIMER: I am grateful for PLOS blogs providing me the space for free expression. However, the views I present here are not necessarily those of PLOS nor of any of my institutional affiliations.