Researchers Approach Publishers About Errors in 31 Papers

Retraction Watch readers may recall the name Jennifer Byrne, whose work as a scientific sleuth we first wrote about four years ago and have followed ever since. In a new paper in Scientometrics, Byrne, of New South Wales Health Pathology and the University of Sydney, working with researchers including Cyril Labbé, known for his work detecting computer-generated papers, and Amanda Capes-Davis, who works on cell line identification, describes what happened when they approached publishers about errors in 31 papers. We asked Byrne several questions about the work.

Retraction Watch (RW): You focused on 31 papers with a “specific reagent error.” Can you explain what the errors were?

Dr Jennifer Byrne

Jennifer Byrne (JB): We study nucleotide sequence reagents, which are short pieces of DNA or RNA that researchers use to study genes. In this paper, we focused on reagents for gene knockdown experiments, which aim to reduce gene function. These experiments rely on negative or so-called non-targeting controls that aren’t supposed to target any genes. In 2017, we first described gene knockdown papers where the intended non-targeting control corresponded to an active or targeting reagent. These were very surprising and serious errors.

Furthermore, we found the same incorrect controls across different papers, and we notified the relevant journals about these errors and other concerns. This eventually allowed us to compare how different journals responded to 31 gene knockdown papers that each described one of two incorrect control reagents. 
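
To make this kind of error concrete, here is a minimal sketch of the underlying check: comparing a claimed non-targeting control against known gene-targeting sequences. The sequences and gene names below are hypothetical placeholders, not reagents from the 31 papers; in practice, tools such as Seek & Blastn, developed by Labbé and Byrne, verify reagents by searching them against full sequence databases.

```python
# Minimal sketch: screen a claimed "non-targeting" control against a small
# local reference of known gene-targeting reagents. All sequences and gene
# names are hypothetical placeholders.

KNOWN_TARGETING = {
    "GCGGAGGGTTTGAAAGAATAT": "GENE_A",  # placeholder targeting sequence
    "CCTAAGGTTAAGTCGCCCTCG": "GENE_B",  # placeholder targeting sequence
}

def reverse_complement(seq: str) -> str:
    """Reverse complement, since a match on either strand is a problem."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def check_control(claimed_control: str) -> str:
    """Flag a claimed non-targeting control that matches a targeting reagent."""
    seq = claimed_control.upper()
    for candidate in (seq, reverse_complement(seq)):
        if candidate in KNOWN_TARGETING:
            return f"ERROR: control actually targets {KNOWN_TARGETING[candidate]}"
    return "No local match (not proof that the control is truly non-targeting)"

print(check_control("gcggagggtttgaaagaatat"))  # -> ERROR: ... GENE_A
```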

RW: What are the ramifications for cancer research if these papers are not corrected?

JB: These papers repeatedly claim that particular genes are important in different human cancer types. We worry that these papers could encourage further research, including more translational research involving patients. I also worry about the negative effects on researchers who try to follow up flawed research, particularly researchers who might lack self-confidence (which is pretty much everyone). Imagine repeatedly trying and failing to reproduce results that are reported across six or more gene knockdown papers. Most researchers would assume that they are the problem, not the papers. This kind of experience can be very soul-destroying and risks driving people away from science.

RW: How did the journals respond to the questions being raised?

JB: The journals responded quite differently — there was a mix of retractions, expressions of concern, author corrections, and one journal that decided to take no action at all. In terms of their response times, some journals were great. Others required a fair bit of prodding. We were of course disappointed by the journal that decided to take no action over 6 papers. However, at least this journal told us about their decision. We’re hoping that this paper might encourage other journals to reconsider whether they’re taking the right approach.

RW: You suggest that delays in journal responses may have partially been due to the complex nature of the issues you raised. How much do you think you can generalize from your select group of journals and issues in sequencing to other journals facing issues such as plagiarism, which can be noted by non-science-trained staff?

JB: We agree that reagent errors such as incorrect non-targeting controls may not be understood by non-science-trained journal staff. At the same time, journals that have elected to publish papers about gene function need to understand very basic, ubiquitous molecular techniques. It’s not good enough for such journals to say “well, we’re not sure about this,” or “this is a matter of scientific opinion.” The use of valid experimental controls is not a matter of scientific opinion. Prof David Allison’s team have described broadly similar experiences with journals in the fields of nutrition and obesity. If a journal doesn’t understand the fundamental aspects of the research that it is publishing, that’s a serious problem.

RW: You point out that the same journal had different responses to your notifications, sometimes publishing corrections and sometimes retracting. Why do you think that was the case? 

JB: This could indicate that different staff with varying levels of molecular expertise within the journal handled these cases. At the same time, journals are clearly influenced by the arguments or explanations that are put forward by authors. In some cases, post-publication notices indicated that journals received different explanations for the same incorrect control reagent. We found this surprising, particularly as these incorrect reagents appeared to be sourced from one company. 

RW: One company acknowledged an error in their product specifications, but this is apparently not mentioned in many of the corrections and retractions. Since it seems that the company and reagents used in those papers were the same, why do you think the other notifications did not address this?

JB: In some cases, it appeared that different authors had access to different information about the incorrect reagent, with some authors also showing a poor understanding of non-targeting controls. This was unexpected, not least because a company would ordinarily provide the same reagent information to all their clients. In this case, Shanghai HollyBio, which has supported over 150 gene function studies according to Google Scholar, seems to have shown a surprising lack of interest in correcting papers that describe their incorrect control reagents.

RW: Are you aware of the STAR and RRID initiatives, designed to help researchers identify their reagents correctly? Would that help in this effort?

JB: Yes, we fully support these excellent initiatives. At the same time, fraudulent research can comply with quality initiatives to provide a veneer of respectability. Sadly, reporting STAR methods and/or RRIDs does not prove that experiments were actually performed.
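
For illustration, RRIDs are simple prefixed identifiers whose syntax can be checked mechanically, which is part of why compliance alone proves so little. A sketch of such a syntactic check, covering only a few common registry prefixes (the example identifiers are illustrative):

```python
import re

# Sketch of a purely syntactic RRID check for a few common registries
# (AB = antibodies, CVCL = cell lines, SCR = software tools). Illustrative
# only: a well-formed RRID says nothing about whether the reagent exists
# or the experiment was performed.
RRID_PATTERN = re.compile(r"^RRID:(AB|CVCL|SCR)_[A-Za-z0-9]+$")

def looks_like_rrid(identifier: str) -> bool:
    return bool(RRID_PATTERN.match(identifier))

for rid in ["RRID:AB_123456", "RRID:CVCL_0030", "RRID:not-a-reagent"]:
    print(rid, "->", looks_like_rrid(rid))
```

A real validator would also resolve each identifier against its registry; even then, as Byrne notes, a valid RRID can decorate an experiment that was never run.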

RW: You and others have pointed out the likely involvement of paper mills in hundreds of papers, which has led to dozens of recent retractions. In the current paper, you discuss how these paper mills — which you describe as “undeclared assistance” — may indeed have played a role in causing the problems. Can you explain your evidence and reasoning?

JB: Firstly, a 2016 retraction stated that “experiments were sourced out to a biotechnology company,” which was not declared in the paper. This retraction linked a paper with an incorrect non-targeting control to a possible paper mill. This also made sense: incorrect non-targeting controls are such glaring mistakes that they seem unlikely to be made by gene knockdown experts. Secondly, although other papers were retracted from 2016 to 2019, six of the seven corrections published from 2018 to 2019 substituted one incorrect sequence with the same corrected sequence.

This suggested that some paper mills have developed a form of “after-sales service,” providing information to “correct” errors. This was also supported by the apparent lack of proactive error correction by reagent suppliers, possibly to avoid drawing unnecessary attention to further papers. Prof Roland Seifert’s recent editorial stated that some communications between Naunyn-Schmiedeberg’s Archives of Pharmacology and authors may have involved paper mills. Our results suggest that paper mills may have been influencing published responses to errors in their papers since at least 2018.
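
That repeated-substitution pattern, many unrelated papers receiving the identical sequence replacement, is itself detectable. A minimal sketch of the idea, using hypothetical DOIs and placeholder sequences:

```python
from collections import defaultdict

# Sketch: group published corrections by their (incorrect, corrected) sequence
# pair and flag substitutions shared across unrelated papers. All DOIs and
# sequences are hypothetical placeholders.
corrections = [
    ("10.xxxx/paper-a", "AAGGCTTCA", "TTCAGGACT"),
    ("10.xxxx/paper-b", "AAGGCTTCA", "TTCAGGACT"),
    ("10.xxxx/paper-c", "AAGGCTTCA", "TTCAGGACT"),
    ("10.xxxx/paper-d", "AAGGCTTCA", "GGCATTAGC"),
]

groups = defaultdict(list)
for doi, old_seq, new_seq in corrections:
    groups[(old_seq, new_seq)].append(doi)

for (old_seq, new_seq), dois in groups.items():
    if len(dois) > 1:  # the same substitution across papers suggests coordination
        print(f"{len(dois)} papers replaced {old_seq} with {new_seq}: {dois}")
```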

RW: You suggest that standardizing the reporting of these issues to journals could make the process more efficient and lead to better correction of the literature, and “encourage more journals to proactively investigate publications.” In our experience, many journals lack the will to properly investigate. How would a standardized template overcome this resistance?

JB: We believe that a standardised error reporting template will chip away at this resistance. Right now, journals can easily say “we don’t understand what you’re saying here” or “we really needed this piece of information that you didn’t give us.” Any error reporting template needs to be designed in consultation with journals, so that the research community can supply the information that journals want or need. Journal responses to notified errors are clearly possible. The process now needs to be made easier, so that effective and timely journal responses will no longer be the exception, but the rule.
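
As a purely hypothetical illustration of what a standardised report might capture (the field names below are assumptions, not the template proposed in the Scientometrics paper):

```python
# Hypothetical structured error report; all field names and values are
# illustrative assumptions.
error_report = {
    "article_doi": "10.xxxx/placeholder",
    "reagent_type": "shRNA non-targeting control",
    "claimed_identity": "non-targeting",
    "observed_identity": "matches a gene-targeting reagent",
    "evidence": "sequence alignment output (attached)",
    "other_affected_papers": ["10.xxxx/placeholder2"],
    "reporter_contact": "name@institution.example",
}

# A structured report lets journal staff check completeness mechanically,
# blunting the "we needed information you didn't give us" objection.
REQUIRED = {"article_doi", "reagent_type", "claimed_identity",
            "observed_identity", "evidence"}
missing = REQUIRED - error_report.keys()
print("report complete" if not missing else f"missing fields: {sorted(missing)}")
```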
