A physics journal has informed an embattled rocket scientist that it will retract three of his papers, citing concerns raised by the retraction of another of his papers last year.
All three articles appear in Physics of Fluids, published by AIP Publishing, and describe a phenomenon called “Sanal flow choking.” As we reported last year, some scientists have denounced the concept as “absolute nonsense.” The researcher who coined the phrase, V.R. Sanal Kumar, a professor of aerospace engineering at Amity University in New Delhi, is the lead author on all three papers.
Driving those headlines was a December 2014 study in Science, by Michael J. LaCour, then a Ph.D. student at the University of California, Los Angeles, and Donald Green, a professor at Columbia University.
Researchers praised the “buzzy new study,” as Slate called it at the time, for its robust effects and impressive results. The key finding: A brief conversation with a gay door-to-door canvasser could change the mind of someone opposed to same-sex marriage.
By the time the study was published, David Broockman, then a graduate student at the University of California, Berkeley, had already seen LaCour’s results and was keen to pursue his own version of the study. He and fellow graduate student Joshua Kalla had collaborated before and wanted to look more closely at the impact canvassing could have on elections. But as the pair deconstructed LaCour’s study to figure out how to replicate it, they hit several curious stumbling blocks. And when they got hold of LaCour’s dataset, or replication package, they quickly realized the results weren’t adding up.
PLOS One has retracted a 2011 paper first flagged for image issues 11 years ago. The retraction marks the fourth for the paper’s lead author, Gabriella Marfè of the University of Campania “Luigi Vanvitelli,” in Caserta, Italy.
Elisabeth Bik flagged the article on PubPeer in 2014 for apparent image manipulation and duplication in six figures. In a 2019 email to PLOS staff, the pseudonymous sleuth Clare Francis drew attention to Bik’s findings. The journal retracted the paper on May 6 of this year.
The Center for Scientific Integrity, the parent nonprofit of Retraction Watch, has launched a new initiative to investigate problems in the medical literature that directly affect human health and to rapidly disseminate its findings.
Thanks to a $900,000 grant from Open Philanthropy, the Medical Evidence Project will use the tools of forensic metascience, visual and computational methods for assessing a paper’s trustworthiness, to rapidly identify problems in scientific articles, and will draw on the experience and platform of Retraction Watch to disseminate those findings.
“We originally set up The Center for Scientific Integrity as a home for Retraction Watch, but we always hoped we would be able to do more in the research accountability space,” said Ivan Oransky, executive director of the Center and cofounder of Retraction Watch. “The Medical Evidence Project allows us to support critical analysis and disseminate the findings.”
For years, sleuths – whose names our readers are likely familiar with – have been diligently flagging issues with the scientific literature. More than a dozen of these specialists have teamed up to create a set of guides to teach others their trade.
The Collection of Open Science Integrity Guides (COSIG) aims to make “post-publication peer review” more accessible, according to the preprint made available online today. The 25 guides so far range from the general, such as “PubPeer commenting best practices,” to the field-specific, such as spotting issues with X-ray diffraction patterns.
Although 15 sleuths are named as contributors to the project, those we talked to emphasized that it should be credited largely to Reese Richardson, the author of the preprint.
President Trump recently issued an executive order calling for improvement in the reproducibility of scientific research and asking federal agencies to propose how they will make that happen. I imagine that the National Institutes of Health’s response will include replication studies, in which NIH would fund attempts to repeat published experiments from the ground up, to see if they generate consistent results.
Both Robert F. Kennedy Jr., the Secretary of Health and Human Services, and NIH director Jay Bhattacharya have already proposed such studies with the objective of determining which NIH-funded research findings are reliable. The goals are presumably to boost public trust in science, improve health-policy decision making, and prevent wasting additional funds on research that relies on unreliable findings.
As a former biomedical researcher, editor, and publisher, and a current consultant on image data integrity, I would argue that conducting systematic replication studies of pre-clinical research is neither an effective nor an efficient way to identify reliable research. Such studies would be an impractical use of NIH funds, especially in the face of extensive proposed budget cuts.
The authors of an article linking scores on a “wokeness” scale to mental health issues are blaming political bias for the retraction of their paper in March following post-publication peer review.
“Following publication of this article, concerns were raised by third parties about the conclusions drawn by the authors based on the data provided,” according to the March 26 notice. After investigating, the publisher and the journal “concluded that the article contains major errors involving methods, theory, and normatively biased language,” which “bring into doubt the conclusions drawn by the authors,” the notice stated. The authors disagreed with the decision.
In a blog post, one author, Emil Kirkegaard, called the journal’s action “my first politically motivated retraction.” Kirkegaard’s studies and writings, on topics including race and IQ, are provocative.
The authors of a paper on how motivation influences intelligence test scores have retracted it following the retraction of a nearly 50-year-old study included in their analysis.
Part meta-analysis and part longitudinal study, “Role of test motivation in intelligence testing” appeared in Proceedings of the National Academy of Sciences in 2011. The meta-analysis portion included a 1978 paper by Stephen Breuning, a child psychologist who was the subject of a 1987 report from the National Institute of Mental Health that found he “knowingly, willfully, and repeatedly” engaged in research misconduct and fabricated results in 10 NIMH-funded articles.
As we reported earlier this year, six of Breuning’s papers have been retracted, including one last December. That article, published in 1978 in the Journal of School Psychology, found that record albums, sporting event tickets, portable radios, and other incentives boosted scores on IQ tests.