Peer-Reviewed Journals Ignoring FDA Inspection Violations

Only 4 percent of peer-reviewed journal articles reporting clinical drug trials in which the FDA had found significant evidence of fraudulent or unreliable data mentioned those findings. A study by Charles Seife of the Arthur L. Carter Journalism Institute at New York University revealed that information about studies with objectionable practices was not readily available, either from the journals or from the FDA website. Compounding the problem, the documentation the FDA does make public is heavily redacted, so identifying the particular trials with possible misconduct is impossible in many cases.

Inspections

The FDA inspects clinical trial sites during the drug approval process to ensure that investigators maintain good clinical practice standards and that participants' rights are protected. These inspections last several days and reveal information about the reliability and quality of the data produced. An inspection receives one of three classifications: no action indicated (NAI) for no violations, voluntary action indicated (VAI) for less serious violations, and official action indicated (OAI) for violations severe enough to warrant sanctions. In fiscal year 2013, 2 percent of 644 inspections resulted in an OAI classification. Although the FDA typically excludes data from an OAI site when considering a drug for approval, the agency does not systematically inform the scientific community of these findings.

Identification

The study used three methods to identify trials with an OAI rating. The FDA's inspection database (which contains some, but not all, inspection results) yielded information about 20 OAI inspections conducted before August 8, 2012. Google searches restricted to the FDA domain turned up 21 more, and the best documentation came from OAI ratings that led to regulatory action, which happens only when violations are particularly severe. Of the 421 OAI inspections the three methods yielded, only 101 trial sites could be properly identified because of heavy redaction. The study then focused on the 57 identifiable trial sites linked to 78 articles published in peer-reviewed journals. Only 3 of those publications mentioned the FDA inspection violations, and no subsequent corrections, retractions, comments, or notifications were published after the FDA identified the violations.

Implications

The limitations of this study stem from the FDA's restriction of information, which is also the root of the problem the research attempts to define. The database is updated infrequently, and heavy redaction of the accessible data makes identifying particular studies difficult or impossible. The FDA also learns of good clinical practice violations through other channels, which may not be publicly reported. Further, the severity and impact of the violations vary, and the reports are often vague. The study suggests that the agency note any OAI inspections in the national clinical trials database and that journals require authors to disclose adverse inspection findings.