Unveiling the Largest Scientific Fabrication Scandal

Chapter 1: The Initial Warning

In April 2000, the journal Anesthesia & Analgesia featured a letter from Peter Kranke and two of his colleagues that dripped with sarcasm. This trio of academic anesthesiologists took aim at a study authored by Yoshitaka Fujii, a Japanese researcher, whose findings on a drug intended to mitigate postoperative nausea and vomiting were described as “incredibly nice.”

In scientific parlance, labeling results as “incredibly nice” is not a compliment; it suggests that the researcher may be negligent or even fabricating data. However, rather than investigating further, the journal opted to publish the letter alongside Fujii's response, which provocatively questioned, “How much evidence is necessary for adequate proof?” In essence, Fujii brushed off the concerns with a shrug: believe me or don't, it's your choice. Following this incident, Anesthesia & Analgesia continued to publish 11 additional papers by Fujii. One of the letter's co-authors, Christian Apfel, who was at the University of Würzburg in Germany at the time, alerted the U.S. Food and Drug Administration (FDA) about the issues raised but received no response.

Perhaps recognizing how narrow his escape had been, Fujii largely ceased publishing in the anesthesia literature after the mid-2000s. Instead, he redirected his focus toward ophthalmology and otolaryngology, fields less likely to scrutinize his work. By 2011, he had compiled over 200 studies—a prolific output for someone in his area of expertise. However, in December of that year, he published what would turn out to be his final paper, in the Journal of Anesthesia.

In the following two years, it became increasingly evident that Fujii had fabricated a significant portion of his research—most of it, in fact. Today, he holds the notorious record for the most retracted publications by a single author, with a staggering total of 183 retractions, accounting for approximately 7 percent of all retracted papers from 1980 to 2011. His narrative serves as both a remarkable fall from grace and an illustration of the new tools available for detecting academic fraud.

Section 1.1: The Role of Statistical Analysis

Steve Yentis had a solid foundation in research ethics when he took on the role of editor-in-chief for Anaesthesia in 2009. Having chaired an academic committee on the subject and pursued a master's degree in medical ethics, he was well-equipped for the position. However, he admitted he had “no real inkling that the ceiling was about to fall in” just after he began leading the journal. Similar to Anesthesia & Analgesia a decade prior, Anaesthesia published a 2010 editorial critiquing Fujii’s work, written by authors skeptical of its validity, calling for thorough literature reviews to eliminate fraudulent findings.

Yentis later recounted that the editorial, which he had commissioned, sparked a deluge of letters, including one from John Carlisle, a U.K. anesthetist, lamenting that Fujii's work distorted the evidence base and challenging the editors of anesthetic journals to take action.

The timing was fortuitous; the anesthesiology field was still grappling with the fallout from two significant misconduct cases. The first involved Scott Reuben, a Massachusetts pain specialist who fabricated data in clinical trials and subsequently faced federal imprisonment. The second was Joachim Boldt, a prolific German researcher found guilty of manipulating studies, leading to the retraction of nearly 90 papers.

Section 1.2: A Challenge Issued

When Yentis read Carlisle’s letter, he saw an opportunity and invited Carlisle to perform an analysis of Fujii's work. Carlisle admitted he lacked expertise in statistics at that time and was not a well-known anesthesiologist. Nevertheless, his conclusion was striking: it was exceedingly improbable that genuine experiments could yield Fujii's data.

Top-tier evidence in clinical medicine usually comes from randomized controlled trials, which serve as statistical filters to distinguish between chance and genuine effects of a treatment. Carlisle explained, “The measurements usually analyzed follow a treatment versus placebo design.” He proceeded to analyze differences in variables present before treatment, such as weight, and calculated the likelihood that the observed differences were due to random chance.
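To make that single-trial check concrete, here is a minimal sketch in Python. It is not Carlisle's actual code, and the group sizes, means, and standard deviations are invented for illustration: given the baseline summary statistics a paper typically reports, it estimates how likely a difference of that size would be under genuine random allocation.

```python
# Minimal sketch of a single-trial baseline check (illustrative values only,
# not data from any real study, and not Carlisle's actual implementation).
from scipy import stats

def baseline_p_value(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t-test on a baseline variable (e.g., weight in kg),
    computed from the summary statistics reported in a two-arm trial."""
    t_stat, p = stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2)
    return p

# Hypothetical reported baseline weights, treatment vs. placebo arms.
p = baseline_p_value(mean1=61.2, sd1=8.4, n1=30, mean2=61.5, sd2=8.1, n2=30)
print(f"Chance of a baseline difference at least this large: p = {p:.2f}")
```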

By examining 168 of Fujii's “gold standard” clinical trials from 1991 to 2011, Carlisle uncovered significant discrepancies. His analysis revealed that the odds of Fujii's findings stemming from legitimate experiments were around 10⁻³³—an astronomically small number. He noted “unnatural patterns” that indicated the data deviated significantly from what random sampling would yield, essentially concluding: If it appears too good to be true, mathematics will likely reveal it as such.
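The aggregation step can be sketched under the same assumptions: across many genuinely randomized trials, baseline p-values like the one above should be spread roughly uniformly between 0 and 1, so a strong skew (for example, too many values near 1, meaning the groups are suspiciously similar) is the kind of “unnatural pattern” Carlisle describes. The p-values below are invented for illustration and say nothing about Fujii's actual data.

```python
# Minimal sketch of the across-trials check (invented p-values, not Fujii's data).
import numpy as np
from scipy import stats

# One baseline p-value per trial, e.g., from the weight comparison above.
baseline_p_values = np.array([0.97, 0.93, 0.99, 0.88, 0.95, 0.91, 0.98, 0.94])

# Under genuine randomization these should look uniform on [0, 1];
# a Kolmogorov-Smirnov test quantifies how far the observed values deviate.
ks_stat, ks_p = stats.kstest(baseline_p_values, "uniform")
print(f"KS statistic = {ks_stat:.2f}; "
      f"probability of a pattern this extreme under chance = {ks_p:.1e}")
```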

Figure: Statistical analysis of Fujii's research findings.

Chapter 2: The Unraveling

Carlisle’s findings were reminiscent of the concerns raised by the anesthesiologists in 2000, but this time they garnered attention. Soon after his paper was published, a Japanese investigation concluded that only three of Fujii's 212 papers contained reliable data. Evidence of fraud was inconclusive for another 38, while 171 papers were classified as entirely fabricated. The Japanese report concluded: “It is as if someone sat at a desk and wrote a novel about a research idea.”

Carlisle’s statistical methods are applicable beyond anesthesia and can be used in various scientific fields. “The method I used can be applied to anything—plants, animals, or minerals,” he stated. Moreover, it would be relatively straightforward for other scholarly journals to adopt such techniques.

At least one journal editor concurs. “Though still evolving, John Carlisle’s approach is gaining traction as a tool for detecting research fraud,” noted Steven Shafer, a Stanford anesthesiologist and the current editor-in-chief of Anesthesia & Analgesia. Shafer, along with Yentis and others, is engaged in this effort, with plans for Carlisle to publish an updated methodology soon. A key aim, Shafer remarked, is to automate the detection process.

Unfortunately, the challenge of catching fraudsters remains. Carlisle pointed out that organizations such as the Cochrane Collaboration could utilize his methods to verify the reliability of pooled results. However, for such an approach to be effective, journal editors must be on board—a requirement that is often unmet.

Authors frequently claim to be victims of “witch hunts,” and it can take a chorus of critiques on platforms like PubPeer.com, followed by media coverage, to prompt action. In a 2009 case, Bruce Ames, renowned for his tests on carcinogenic agents, conducted an analysis similar to Carlisle's on three papers authored by Palaninathan Varalakshmi. Unlike in Carlisle's case, however, the authors vehemently defended their work, and the journal editors sided with them. To this day, none of the journals that published Varalakshmi's papers have addressed the concerns raised.

The challenges in pursuing scholarly fraud stem partly from the academic publishing process itself, which relies on individual integrity rather than systematic checks. Yentis noted that the peer review process has its advantages and disadvantages, but detecting fraud is not one of its strengths.

Publishing is fundamentally built on trust, and peer reviewers often lack the time to thoroughly examine original data, even when it is made available. For instance, Nature asks authors to justify their statistical tests and report if their data meets the necessary assumptions, but editors do not systematically review all underlying datasets.

During a troubling stem cell paper retraction last year, which tragically led to the suicide of a key researcher, Nature maintained that “we and the referees could not have detected the issues that ultimately undermined the papers.” The journal claimed that it relied on post-publication peer review and institutional investigations, underscoring the complexities involved in tackling fraud in academia.

Yentis, reflecting on the past, acknowledged that while he had commissioned the editorial highlighting red flags in Fujii's work, he allowed its significance to fade. It required multiple letters—including one from Carlisle—to spur him into action, resulting in the definitive analysis being published only in 2012. “If such an accusation were to appear in an editorial now,” he remarked, “I would not let it go unnoticed.”

