Dilara Kiran, Colorado State University, Department of Microbiology, Immunology, Pathology*
Amanda L. Koch, Colorado State University, Department of Biochemistry and Molecular Biology*
Dylan M. Parker, Colorado State University, Department of Biochemistry and Molecular Biology*
Matthew Saxton, Colorado State University, Department of Biochemistry and Molecular Biology*
Clara A. Tibbetts, Colorado State University, Department of Chemistry*
Lindsay P. Winkenbach, Colorado State University, Department of Biochemistry and Molecular Biology*
*Authors contributed equally
Colorado State University - Science in Action
Keywords: scientific reliability, clinical studies, appropriations
I. Executive Summary
Scientific reliability is essential for continued scientific progress, the development of novel technologies, and medical advances. The issues surrounding scientific reliability have been discussed for years; however, dedicated funding to support scientific reliability efforts has not been consistent. Past estimates show that $28 billion in taxpayer dollars is spent each year on preclinical research that cannot be reproduced (Freedman, Cockburn, and Simcoe 2015). Lack of reliability in peer-reviewed scientific studies erodes public trust in science, hinders scientific advancement, and impedes the implementation of evidence-based policy. We propose a Congressional appropriations resolution mandating that 0.25% of National Institutes of Health (NIH) funds be used to conduct replication and reproducibility studies in the next fiscal year as a pilot study on reliability efforts (Figure 1). If the outcomes of this pilot study lead to significant enhancements in scientific progress, the proposed funding model can be expanded. Application across federal agencies has the potential to bolster the scientific enterprise through the robust funding of reliable research.
II. Statement of the Issue
The United States relies on science to drive global innovation in health and technology. When a federally funded scientific study becomes the basis of policy or medical practice, there are direct impacts on human well-being. Clear examples of this have taken place during the 2020 COVID-19 pandemic (Liao et al. 2020). Not only are federal funds going toward drug development and medical equipment engineering, they are also being used in the public health sector to advise people on best practices during this outbreak. It is imperative that federally funded studies, such as those relating to SARS-CoV-2, be grounded in reliable research. Under these circumstances, scientific reliability becomes essential for continued scientific progress and societal benefit (National Academies of Sciences 2019; Klein et al. 2018).
Here, we define scientific reliability as encompassing both reproducibility and replicability: reproducibility is reaching the same conclusions by reanalysis of the original data and/or code, while replicability is reaching the same conclusions using new data and methods similar to those of prior studies (National Academies of Sciences 2019; Goodman 2016).
While the amount of federal and private funding lost to unreliable research is difficult to estimate, a 2015 study showed that $28 billion a year is spent on preclinical research alone that cannot be recreated by other researchers (Freedman, Cockburn, and Simcoe 2015). Further, researchers at pharmaceutical companies have reported that their attempts to recreate the conclusions of peer-reviewed papers fail at rates upwards of 75% (Baker 2020). According to the most recent National Academies of Sciences, Engineering, and Medicine (NASEM) report on reproducibility and replication, studies in the natural, clinical, and social sciences are replicated anywhere from as few as 20% of the time to no more than 75% of the time (National Academies of Sciences 2019, pg 85, Finding 5-5). For example, a study performed by the biotechnology firm Amgen found that 47 out of 53 'landmark' cancer biology studies were irreproducible (Begley and Ellis 2012). In addition, many scientists who conduct replicability or reproducibility studies do so as follow-up work and do not report their results; therefore, the evidence base for reliability across all science and engineering fields is incomplete (National Academies of Sciences 2019, pg 85, Conclusion 5-3). Currently, there are limited incentives for researchers to conduct or present reliability studies. Replication work is neither novel nor flashy. Consequently, these types of studies do not guarantee success in the scientific community by commonly used metrics, such as a prolific publication record. Thus, a mandated, top-down approach is an integral step toward generating a structural shift in the current incentive framework. While the United States Government does not currently mandate the direct funding of studies focused on scientific reliability, pilot funding programs have been successfully implemented internationally and by charitable foundations (Arnold Foundation, “Grants Chart”; Baker 2020).
We propose a Congressional appropriations resolution mandating that 0.25% of the funds allocated to the NIH be used to conduct replication and/or reproducibility studies in the next fiscal year as a pilot study on scientific reliability efforts (National Institutes of Health (NIH) “Budget” 2014). The NIH is one of the world's premier medical research institutes. Because the NIH is the largest public funder of biomedical research in the country and the world, we chose it for this pilot study to exemplify and promote the movement for improved scientific integrity, rigor, and public accountability. This mandate will improve overall scientific rigor and reliability, which are essential for continued scientific development. Society stands poised to reap the benefits of increased scientific rigor, including better-informed public policy and the accelerated discovery of life-saving technology.
III. Policy Options
Option 1: The NIH provides additional funds for the replication and reproduction of every grant funded by the NIH.
Advantages
This requirement would increase overall scientific rigor. Mandating that funds be used to replicate and reproduce prior research would have the additional positive effects of reducing fraudulent reporting, reducing bias, and ensuring that appropriate methodology and data analysis platforms are used. Reliability studies lend insight into the reasons behind varied results and can help researchers discern whether discrepancies stem from misconduct or misinformation, or from the inherent complexity of nature (National Academies of Sciences 2019). Mandating these studies would save considerable time in identifying why a given piece of research cannot be replicated or reproduced. Furthermore, this would eliminate the potential costs of future studies grounded in unreliable results. This option would also inspire increased public confidence in science.
Disadvantages
This requirement would impose a prohibitively high cost in both time and physical resources. As evidence, in 2016, on the order of 115,000 scientific articles were published based on NIH-funded research (National Institutes of Health (NIH) “Our Knowledge” 2014). In addition, this enormous reallocation of focus toward duplicating all funded work could stymie overall scientific enthusiasm, creativity, and progress.
Option 2: Allocation of 0.25% of the NIH budget to fund direct reliability studies via an appropriations resolution.
This allocation could be met via:
Requiring NIH grant applicants to include planned “reliability of foundational work” sections in grants submitted primarily for original research. If prior reliability studies have been conducted on the grant topic, they should be cited within the grant application (National Institutes of Health (NIH) “Budget” 2014).
Funding specific grants explicitly for the direct replication or reproduction of “cornerstone” research.
Criteria for evaluating the impact of specific studies would be modeled on those instituted by the Netherlands Organization for Scientific Research (NWO) in 2016 (Baker 2020).
Advantages
This option could provide increased public confidence in science and potential long-term monetary savings, as less money would be spent reassessing scientific direction after failed replications of published work (Freedman, Cockburn, and Simcoe 2015). Using best practices and standards to ensure scientific reliability would allow for an estimated savings of $14 billion per year, assuming half the cost of unreliable research is recovered by these improvements (Freedman, Cockburn, and Simcoe 2015). This option would retain the advantages of option #1 while mitigating its cost disadvantages.
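This savings estimate is a simple back-of-the-envelope calculation based on the $28 billion annual figure cited above, under the stated assumption that roughly half of that cost could be recovered:

\[
\$28\ \text{billion per year} \times 0.5 = \$14\ \text{billion per year}
\]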
Disadvantages
This requirement could reduce the size and/or number of grants currently funded, since these reliability studies would not have been factored into previous NIH budgets. There is some uncertainty surrounding the proposed direct benefits of reallocating funds. Replication studies may not ameliorate the current situation, as successful replication does not guarantee the validity of the original scientific results, nor does a failed replication conclusively refute any original claims.
Option 3: No change to the current procedure for allocating funds to federal agencies.
Federal funding agencies traditionally rely on the body of research to validate itself. This approach depends on the robustness of the scientific method when it is carried out ethically. In addition, this validation is aided by peer-reviewed publication and the process of research synthesis, which together generate a greater network of scientific knowledge (National Academies of Sciences 2019; Wible 2003).
Advantages
This option would impose no additional costs on the current research funding system.
Disadvantages
No improvement on the current situation would occur, and judgment on research reliability would continue to fall upon the individual scientists who make up grant review boards and upon publication editors (Shiffrin, Börner, and Stigler 2018).
IV. Policy Recommendation
We recommend that policy option #2, requiring that the NIH appropriate 0.25% of its federally funded dollars to reliability studies, be passed by the US Congress (Figure 1). This statutory requirement in Congressional appropriations would provide an increased level of rigor and reliability to the United States research endeavor while reducing the economic burden shouldered by independent labs attempting to recreate published scientific results (“ASCB Task Force Report on Reproducibility in Science | ASCB Data” 2020). Federal agencies are ideally positioned to drive this shift in scientific rigor and reproducibility. Using the fiscal year 2019-2020 NIH budget as a guideline, 0.25% equates to roughly $100,000,000, which could fund 1,000 grants focused on scientific reliability work at $100,000 each (“NIH — Office of Budget — General Budget Information” 2020).
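As a rough illustration of this arithmetic (assuming, for simplicity, a total NIH budget of approximately $40 billion, in line with the fiscal year 2019-2020 figures):

\[
0.0025 \times \$40\ \text{billion} = \$100\ \text{million}, \qquad \frac{\$100{,}000{,}000}{\$100{,}000\ \text{per grant}} = 1{,}000\ \text{grants}
\]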
The decision to allocate 0.25% of the agency budget to this effort stems from cost estimates for replication in the field. For example, the Reproducibility Project: Cancer Biology, a collaboration between the Center for Open Science and Science Exchange, works to independently replicate findings in preclinical cancer biology. The average cost of its first seven replication studies was $33,700, and the average time to completion after beginning experimental work was 6.5 months (Perfito, Tsui, and Iorns 2017). These replication studies take considerably less time and money than the average NIH R01 grant of roughly 5 years and over $500,000 (“NIH Data Book - R01-Equivalent Grants” 2020). This work has allowed researchers to identify which conclusions of the original studies were replicable and which were not, shedding critical light on the field of cancer biology. Additionally, a researcher who uses commercially available cell lines can validate and annually authenticate their stock for roughly $1,000, which is approximately 0.2% of the average NIH-funded academic research award (Freedman, Cockburn, and Simcoe 2015). In 2015, the NIH funded approximately $3.7 billion worth of research that used cell lines (Freedman, Cockburn, and Simcoe 2015; “ASCB Task Force Report on Reproducibility in Science | ASCB Data” 2020). Using validation and replication to determine which stocks have been misidentified or contaminated would allow for a significant increase in the return on the NIH investment in these types of projects (Eckers, Swick, and Kimple 2018).
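For a sense of scale, treating the $500,000 average award cited above as the benchmark, these reliability costs amount to only a small fraction of a typical award:

\[
\frac{\$1{,}000\ \text{(annual cell line authentication)}}{\$500{,}000\ \text{(average award)}} = 0.2\%, \qquad \frac{\$33{,}700\ \text{(average replication study)}}{\$500{,}000} \approx 6.7\%
\]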
Every effort should be made to drive a culture of sound science, but the current system fundamentally disincentivizes federally funded laboratories from “wasting” grant dollars on reproducibility or replication studies. By requiring the NIH to spend a proportion of its grant dollars on reliability studies, not only will confidence in published studies increase, but the overall cost of research will also be reduced.
Further, allowing the NIH to determine how to disburse reliability dollars, either by providing grants solely dedicated to replicability and reproducibility or by requiring grantees to reproduce aspects of foundational research, will foster a culture of scientific research integrity that allows the US to better drive global innovation. Ultimately, the cost of replicating a study is much less than the value of reaffirming a research result, and considerable time and money can be saved by ensuring future research is built on reliable work. Ensuring reliability helps all researchers be confident in their scientific decisions and ideas. Therefore, Congress should pass an appropriations resolution requiring the NIH to directly provide research money for reproducibility studies as proposed in option #2.
Figure 1: Schematic showing (top, red box) the monetary and societal effects of unreliable science and (bottom, blue box) the positive impacts that would occur if Congress were to mandate, as an initial pilot study, that the NIH spend 0.25% of its total yearly budget on grants funding reliable science.
References
“ASCB Task Force Report on Reproducibility in Science | ASCB Data.” ASCB (blog). Accessed April 13, 2020. https://www.ascb.org/science-policy-public-outreach/advocacy-policy/ascb-task-force-reproducibility-in-science/.
Baker, Monya. “Dutch Agency Launches First Grants Programme Dedicated to Replication.” Nature News. Accessed April 13, 2020a. https://doi.org/10.1038/nature.2016.20287.
Baker, Monya. “Irreproducible Biology Research Costs Put at $28 Billion per Year.” Nature News. Accessed April 13, 2020b. https://doi.org/10.1038/nature.2015.17711.
Begley, C. Glenn, and Lee M. Ellis. 2012. “Raise Standards for Preclinical Cancer Research.” Nature 483 (7391): 531–33. https://doi.org/10.1038/483531a.
Eckers, Jaimee C., Adam D. Swick, and Randall J. Kimple. 2018. “Identity Crisis – Rigor and Reproducibility in Human Cell Lines.” Radiation Research 189 (6): 551–52. https://doi.org/10.1667/RR15086.1.
Freedman, Leonard P., Iain M. Cockburn, and Timothy S. Simcoe. 2015. “The Economics of Reproducibility in Preclinical Research.” PLOS Biology 13 (6): e1002165. https://doi.org/10.1371/journal.pbio.1002165.
Goodman, Steven N. 2016. “Aligning Statistical and Scientific Reasoning.” Science 352 (6290): 1180–81. https://doi.org/10.1126/science.aaf5406.
Klein, Richard A., Michelangelo Vianello, Fred Hasselman, Byron G. Adams, Reginald B. Adams, Sinan Alper, Mark Aveyard, et al. 2018. “Many Labs 2: Investigating Variation in Replicability Across Samples and Settings.” Advances in Methods and Practices in Psychological Science 1 (4): 443–90. https://doi.org/10.1177/2515245918810225.
National Academies of Sciences, Engineering, and Medicine. 2019. Reproducibility and Replicability in Science. Washington, DC: The National Academies Press. https://doi.org/10.17226/25303.
“Our Knowledge.” 2014. National Institutes of Health (NIH). November 21, 2014. https://www.nih.gov/about-nih/what-we-do/impact-nih-research/our-knowledge.
Perfito, Nicole, Rachel Tsui, and Elizabeth Iorns. 2017. “Practicalities of Conducting Replication Studies.” Drug Discovery 18: 18.
Shiffrin, Richard M., Katy Börner, and Stephen M. Stigler. 2018. “Scientific Progress despite Irreproducibility: A Seeming Paradox.” Proceedings of the National Academy of Sciences 115 (11): 2632–39. https://doi.org/10.1073/pnas.1711786114.
Wible, James R. 2003. The Economics of Science: Methodology and Epistemology as if Economics Really Mattered. Routledge.