This blog post is truly a quick thought that needs refinement. I invite comments and criticisms, and particularly suggestions for more links and details to what I believe is an important message. Please correct me where you believe I am wrong or exaggerating.
The Association for Psychological Science (APS) has striven for the distinction of being the more scientifically oriented of the two main organizations for American psychologists. APS was born out of an intense political struggle within its current rival, the American Psychological Association (APA). Although historically the APA had eminent scientists as its presidents, a group of ambitious, entrepreneurial psychologists was taking over. Their goal, which this “dirty dozen” achieved, was to establish and accredit lucrative, freestanding professional training programs conferring PsyD degrees.
Once scorned, and confronted by seemingly insurmountable odds, the following fourteen professional psychologists, affectionately called The Dirty Dozen, changed for all time the face of the American Psychological Association (p. 270)
Unlike existing PhD degrees, these degrees did not require dissertations, but they could serve as the basis for licensure of clinical psychologists. The professional training schools de-emphasized research relative to traditional PhD programs, and their graduates would quickly make up the bulk of new APA members. This made the previously dominant research-oriented psychologists uncomfortable, and the shift in power was among the reasons many left. Some formed an alternative organization, first known as the American Psychological Society.
APS was politically and financially disadvantaged by its smaller size, but also by the existing prestige of APA journals. Offered at considerable discounts as a privilege of membership, those journals were an incentive for sticking with APA.
The stage was set for an intense competition between APS and APA for the claim to having the most prestigious journals.
Prestige is quantified by the Journal Impact Factor (JIF). Despite the numerous deficiencies of JIFs as measures of scholarly accomplishment, they are widely accepted as measures of scientific accomplishment by the hiring and promotion committees of universities and by funding agencies.
Candidates competing for jobs, promotion, and grants can be quantitatively compared on the basis of the JIFs of the journals in which they publish, even if these numbers are distorted measures of actual accomplishment and easily gamed.
The exact algorithms by which JIFs are calculated are obscure. The JIFs reported by the authoritative InCites Journal Citation Reports cannot be precisely replicated. However, a JIF is, roughly, a quantification of the average number of times a paper is likely to be cited within two years of appearing in print.
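For concreteness, the headline formula behind a JIF can be sketched as follows: citations received in one year to the journal's items from the previous two years, divided by the number of "citable" items from those two years. The obscure part is the counting itself (which items are deemed "citable," which citations get matched), and the numbers below are invented for illustration.

```python
# Nominal JIF formula, with invented numbers. The opaque part in
# practice is the counting: which items count as "citable" and which
# citations are matched to the journal are database judgment calls.
def impact_factor(cites_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Citations in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return cites_to_prior_two_years / citable_items_prior_two_years

# e.g., 1,200 citations in 2023 to articles from 2021-2022, and
# 400 citable articles published across 2021-2022:
print(impact_factor(1200, 400))  # 3.0
```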
The validity of JIFs as predictors of how many times a particular paper will be cited is at its lowest level since JIFs first came into use.
JIFs are easily gamed.
The notion of papers being published at a particular point in time is tied to an earlier period in which articles were primarily printed on paper, bundled into journals that were mailed out and ultimately bound in volumes on library shelves.
The advent of Internet-based open access journals, immediately available on the web, posed a challenge to the traditional journals of for-profit publishers and professional organizations.
Open access journals thus had an advantage that the traditional journals had to overcome. The response to this challenge was the creation of “early view” articles, available on websites before the articles were actually assembled into paper journals.
JIFs are somehow calculated from the number of times a paper is cited within two years after it “appears” in a journal. So the availability of “early view” articles can start the clock early, accumulating citations before the paper journals are physically available.
Researchers who would potentially cite a particular article X can quickly turn out another article Y citing X, and that citation will contribute to the JIF of the journal in which article X appeared.
The key is for these researchers to become aware as quickly as possible of the availability of “early view” articles that are candidates for citing.
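As a toy illustration of why starting the clock early matters (all dates, lags, and rates here are invented assumptions, not real data), the sketch below counts how many citations land inside a two-year window when citations begin accruing from an assumed "early view" date versus the later print date:

```python
from datetime import date, timedelta

def citations_in_window(visible: date, official: date,
                        monthly_cites: int = 2,
                        lag_months: int = 12,
                        horizon_months: int = 24) -> int:
    """Toy model: citing papers start appearing `lag_months` after an
    article becomes visible, at a steady monthly rate; only citations
    within `horizon_months` of the official publication date count."""
    first_cite = visible + timedelta(days=30 * lag_months)
    window_end = official + timedelta(days=30 * horizon_months)
    if first_cite >= window_end:
        return 0
    months_counted = (window_end - first_cite).days // 30
    return months_counted * monthly_cites

official = date(2020, 1, 1)
early_view = date(2019, 7, 1)  # assumed online six months before print

# Early visibility captures more of the citation stream in the window:
print(citations_in_window(early_view, official))  # early-view clock
print(citations_in_window(official, official))    # print-only clock
```

Under these made-up parameters, the early-view start yields a larger in-window citation count, which is the whole point of notifying potential citers as soon as possible.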
Press releases and other notifications on social media become particularly important for getting cited the articles that will raise JIFs.
In the competition between APS and APA, press releases become particularly important for raising JIFs.
And not just any press releases, but press releases reporting innovative, breakthrough studies that will get echoed and churnalled by lazy journalists across social and conventional media.
The innovative, breakthrough study, or at least one that can be tortured into appearing to have these characteristics, gets favored over more transparently reported, trustworthy, and durable studies.
Notoriously, an outgoing editor of APS’s Psychological Science proudly declared that he had rejected over 6,000 submissions in his five years as editor without the manuscripts going out to reviewers:
Potential authors also wondered why papers are declined without full review (i.e., “triaged,” reflecting a policy to devote most editorial resources to papers that were considered stronger candidates for publication). That was an easier question to answer: Given the volume of papers that fell in this category (more than 6,000 submissions over 5 years), a few prototypes emerged. We even gave them names (which reflected the age of the editorial staff):
At the top of his three reasons was:
The Pink Floyd Rejection: Most triaged papers were of this type; they reported work that was well done and useful, but not sufficiently groundbreaking. So the findings represented just another brick in the wall of science.
Psychological Science has no monopoly on “Pink Floyd rejections.” APA’s Journal of Personality and Social Psychology relies on the same strategy in its competition with APS; its editors just do not brag about it.
Brick-in-the-wall studies are denigrated. These include most research that is well conducted and transparently reported. Null results and replications of previously published studies, particularly replications that do not yield positive results, are also unwelcome.
But such “bricks in the wall” are likely more trustworthy than the over 50% of Journal of Personality and Social Psychology and Psychological Science articles evaluated in the Open Science Collaboration’s (OSC) Reproducibility Project: Psychology that did not reproduce with the same strength of effects.
APS and APA are in a vicious competition to accumulate citations by promoting hype. This contributes to the reproducibility crisis in psychology, for which replication initiatives are showy but ineffectual solutions.
I have no interest in restoring the prestige of APA after lots of blows to it, including its protection of psychologists who facilitated and participated in “enhanced interrogation.”
But I am pained by how its competitor for the loyalty of psychologists is damaging itself through deliberately questionable publication practices. It is soiling its brand with questionable and even junk science. How about we get a hashtag campaign going for APS: #stopthehype.