A bad abstract is good enough to be published in Journal of Experimental Psychology: General

[Image: another Gilbert ad]

Effective abstracts tell a good story. The opening sentences create an interest or tension that is resolved in carefully crafted final sentences. Often the wrap-up of an abstract becomes how a paper is remembered, echoed in media coverage and subsequent citations.

Some abstracts succeed in telling a good story by distorting or suppressing details of what was done and what was found. Some manuscripts find their way into prestigious journals because the editor and reviewers are lulled by what they read in the abstract and don’t notice that it is discrepant with what is actually reported in the paper. I recently showed this with a study of cognitive behavior therapy for patients with psychosis that was published in the British Journal of Psychiatry. The findings were consistently null, but the abstract stated otherwise.

Abstracts take on added importance because they are often the only contact that readers have with a paper. Many papers succeed in garnering citations based on spin in the abstract rather than the actual findings. Apparently people who cite papers don’t necessarily read them.

In my writing course, I teach that authors have a responsibility to write informative abstracts that give readers some basic details, such as the number of subjects in the experiments and the primary results that were found.

Although it is generally ignored, CONSORT has a checklist to evaluate abstracts in terms of whether the details of a clinical trial are presented in a way that allows evaluation. Investigators conducting systematic reviews and meta-analyses often gather large numbers of papers as candidates to include and screen most of the papers out based on the abstracts. A misleading abstract can have a bad effect on how a literature is appraised, with key studies being dropped that should have been retained for the meta-analysis.

Even when studies are not reporting a randomized controlled trial, CONSORT can be adapted as a useful guide and template for the minimum details that should be reported.

A recent paper with Harvard psychologist Dan Gilbert (yes, the guy who tries to sell you financial planning on American television) as an author provides an example of an abstract that succeeded in getting published in the prestigious Journal of Experimental Psychology: General but that left out some basics. If those specifics had not been left out of the abstract, I would not have given it much attention, and I wonder if the paper would have been published.

Here is the abstract:

We suggest that when confronted with evidence of their socially inappropriate thoughts and feelings, people are sometimes less likely—and not more likely—to acknowledge them because evidence can elicit psychological responses that inhibit candid self-reflection. In 3 studies, participants were induced to exhibit racial bias (Study 1) or to experience inappropriate sexual arousal (Studies 2 and 3). Some participants were then told that the researcher had collected physiological evidence of these mental transgressions. Results showed that participants who were told about the evidence were less willing to acknowledge their mental transgressions, but only if they were told before they had an opportunity to engage in self-reflection. These results suggest that under some circumstances, confronting people with public evidence of their private shortcomings can be counterproductive.

So we are told that there are three studies and that the results converge in supporting a storyline: people are less willing to admit their mental transgressions when confronted with evidence of them, except when they have first had an opportunity to self-reflect.

Study 1

We are not given many details of the studies, but it matters to me that Study 1 involved comparing 39 participants, who were led to believe the experimenter had physiological data with which to evaluate their self-reports, with 35 participants who had not been told that.

The study actually recruited 93 participants, but a decision was made to exclude 11 African-American participants and to retain:

74 participants in the data set (38 who identified as White, 20 who identified as Asian American, 8 who identified as Hispanic, 6 who identified as Other, and 1 who preferred not to indicate a race).

Another 4 participants were excluded because they didn’t believe the experimenters actually had the physiological data.

Gilbert and his colleagues could choose from 15 outcome measures and emphasized two that were significant at p < .05:

“To what degree did you feel more threatened by the African-American mug shots than by the Caucasian mug shots?” p = .041

And

Positive affect (but not negative affect), p = .040

Results of Study 1 were interpreted as:

Participants in Study 1 who believed that an experimenter had evidence of their racial bias were less willing—and not more willing—to acknowledge that bias.

Study 2

In Study 2, heterosexual male participants watched a videotape, which they were told had been surreptitiously obtained, of a woman trying on bathing suits. There were 4 groups that varied in terms of whether and when the experimenter informed them that physiological data had been collected on them: No evidence (n = 18), Immediate evidence (n = 21), Delayed evidence (n = 18), and No reflection (n = 20).

The total of 77 participants included in the analyses was reduced from a larger number who had participated in the study:

Before examining the data, we omitted the data from two participants who expressed suspicion that they were being monitored as they watched the video, one who appeared to be intoxicated, and seven who identified themselves as gay.

Of the nine measures obtained from these participants, the experimenters emphasized two:

How sexually aroused did you feel while watching the video? F(1, 73) = 5.25, p = .002

How attractive did you find the woman in this video? F(1, 73) = 3.28, p = .026

Study 3

The third study was a replication of what the experimenters termed Study 2’s primary findings, with a larger sample. Sixty-four participants were recruited, but after participation,

We omitted the data from two participants who expressed suspicion that they were being monitored as they watched the video, one who did not believe the cover story about the video, and four who identified themselves as gay.

Results for one of the 10 measures were emphasized:

How sexually aroused did you feel while watching the video? t(57) = 1.96, p = .055

The discussion in the article

Everyone has thoughts and feelings of which they are not proud. No one wants to admit to feeling frightened when a Black man walks toward them, or aroused when a student walks away. So what would ever compel people to acknowledge such unseemly private reactions? One might expect public evidence to do the trick. People may be tempted to deny having had a racist thought or an inappropriate sexual impulse, but when there is evidence of these mental transgressions, they may be forced to acknowledge them.

And yet, in our studies, precisely the opposite happened. Our participants were less willing to admit to being racially biased or to experiencing inappropriate sexual arousal when they were told that another person had objective evidence of their thoughts and feelings.

Our discussion

I think that if the abstract had accurately reported the number of participants and the statistical findings, I would not have found the article interesting enough to read. If I were an editor, I think an informative abstract would’ve been enough for a desk rejection. But here we are discussing the abstract as published in the Journal of Experimental Psychology: General.

Hold on, Coyne! Are you presenting this abstract as an example that we should follow?

No, I’m presenting it as an example of a bad abstract that apparently succeeded. It’s up to you what you do.

I personally think that these authors took the low road and were rewarded. You can take the high road, but maybe you won’t get such insipid studies with weak results published in a prestigious psychology journal.

But there is just so much wrong with this study. We have to take the authors’ word that they didn’t peek at their data before dropping subjects. We can see that they had considerable flexibility in design and analysis. If the authors had corrected for the number of statistical tests they performed, I doubt there would be anything left. We can see that they are making much too much of statistical significance in samples too small to generate substantively meaningful effects.
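To make that last point concrete, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not anything the authors reported: it applies a simple Bonferroni correction, taking the number of available outcome measures and the emphasized p-values for each study at face value from the figures quoted above, and treating each study’s full set of measures as the family of tests is my assumption.

```python
# Rough Bonferroni check on the emphasized p-values. Illustrative only:
# the authors reported no correction, and using each study's full set of
# outcome measures as the family of tests is an assumption on my part.
ALPHA = 0.05

studies = {
    "Study 1": {"n_tests": 15, "p_values": [0.041, 0.040]},
    "Study 2": {"n_tests": 9,  "p_values": [0.002, 0.026]},
    "Study 3": {"n_tests": 10, "p_values": [0.055]},
}

for name, info in studies.items():
    threshold = ALPHA / info["n_tests"]  # Bonferroni-adjusted threshold
    for p in info["p_values"]:
        verdict = "survives" if p < threshold else "does not survive"
        print(f"{name}: p = {p:.3f} vs. threshold {threshold:.4f} -> {verdict}")
```

Bonferroni is a blunt, conservative stand-in for whatever correction would actually have been appropriate; the point is simply to show how the emphasized p-values fare once any adjustment for multiple testing is applied.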

 Hey, do you really want to argue with a paper that was peer-reviewed? The authors succeeded in getting published.

Would you buy financial planning recommended by this Gilbert on television?

That’s another question.

And you say that an abstract should create interest or tension in the opening that is resolved in the wrap-up. I think the question was uninteresting, and I thought the wrap-up was simply a Barnum statement, especially with the qualifier “under some circumstances.”

Hey, it was good enough for Journal of Experimental Psychology: General. It is a very exclusive journal that rejects a lot of papers.

I would like to see the papers that they reject, but I would settle for a look at the reviewers’ comments on this one.

I’ll soon be offering a web-based science writing course, Getting your manuscript out for review: a strategic perspective. One session will be about writing effective and ethical abstracts. You can sign up to get advance notice at @CoyneoftheRealm.com.

So all this is a pitch for your writing course?

It’s a sample of what I am covering and you can decide. I have to monetize what I’m doing, and I don’t want to sell financial planning.

Update: Dan Lakens remarked on Facebook:

I think it is irresponsible of the senior authors to have let the first author publish this. Maybe the first author does not want to continue in science, but if they do, papers like this hurt your chances of getting a job. You might think more people count publications than read papers, and you are right. But even more researchers hear such criticism at bars or on social media than count publications when hiring someone.