At the United Nations General Assembly in September, President Trump highlighted his concerns about risky biological research. Significant questions remain, however, about how oversight of high-consequence research will be put into practice. Policymakers and researchers need a consistent and transparent way to weigh the risks and benefits of such research to facilitate review processes and oversight.
New RAND research provides the foundation for such a tool: structured scoring systems that apply well-defined criteria from past policies while adapting them to more modern biosecurity concerns, including emerging technologies. The tool provides a systematic way to evaluate proposed and ongoing research efforts, helping ensure oversight is evidence based.
In recent months, there has been renewed interest in heightening oversight of biological research. In May, President Trump released Executive Order 14292, which called for increased accountability and public transparency in dangerous gain-of-function (DGoF) research. In July, the Senate Homeland Security and Governmental Affairs Committee approved the Risky Research Review Act, which proposed creating a Life Sciences Research Security Board to review federally funded “high-risk” research.
Gain-of-function (GoF) research refers to experiments in which scientists introduce new abilities or enhance existing properties of an organism. These methods are used in legitimate ways across many fields, such as agriculture and cancer biology, and often do not involve pathogens. In policy discussions, however, GoF research has come to mean almost exclusively work that enhances pathogenic properties. The phrase gained notoriety during the COVID-19 pandemic, when it became central to the ongoing debate over whether the SARS-CoV-2 virus originated from GoF research in Wuhan, China.
The confusion over what dangerous gain-of-function (DGoF) research means in policy has created uncertainty about what types of research can or cannot be conducted. This complicates efforts to regulate or oversee such research, because policies must prevent misuse while also promoting legitimate, beneficial scientific progress. Without precise definitions, important research could be inadvertently restricted, or genuinely dangerous work could escape appropriate scrutiny.
Concerns about balancing the benefits of biological research with its potential misuse are not new. In 2004, a National Academy of Sciences report (known as the Fink Report) defined “dual-use research of concern” (DURC) as research that is useful for legitimate science but that can also be used for illegitimate, malicious purposes. The report identified seven categories of “experiments of concern” that should receive greater scrutiny before they are undertaken. Since then, the U.S. government has released several policies related to DURC oversight, the most recent of which is the 2024 United States Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential. The 2024 policy differentiates two categories of DURC based on whether the work involves generating a pathogen with enhanced pandemic potential (PEPP) by, for example, enhancing transmissibility or virulence. EO 14292 directs that the 2024 policy be revised or replaced.
EO 14292 defines DGoF research as “scientific research on an infectious agent or toxin with the potential to cause disease by enhancing its pathogenicity or increasing its transmissibility” and asserts that “[c]overed research activities are those that could result in significant societal consequences and that seek to achieve” any of seven qualifying research outcomes. These criteria closely resemble those in previous federal and institutional DURC policies and in the Fink Report. Still, assessing the potential for significant societal consequences of DURC on the basis of these subjective criteria alone will be challenging.
Even under previous DURC/PEPP policies, researchers and their institutions lacked a standardized method for assessing their research beyond the general guidelines provided. Concerns over how to interpret other subjective criteria, such as the meaning of “reasonably anticipated” in the earlier DURC and Potential Pandemic Pathogen Care and Oversight policies, led to a definition being added to the 2024 policy. More than a decade of implementation has made clear that applying subjective criteria to highly technical research proposals involving pathogens and toxins is difficult to do in a fully objective, transparent, and reproducible manner.
That is why we have begun developing a standardized assessment framework and tool for evaluating the risks and benefits of DGoF work. The assessment lays out the risks of potential DURC, including impacts on human health, pathogen transmissibility, medical countermeasure (MCM) availability, and societal and environmental consequences. It also addresses the benefits to society from such research, including developing new MCMs or technologies, improving disease surveillance capabilities, advancing public health or the health of crops and livestock, informing public health policy, and opening new avenues for scientific discovery. The tool can be used throughout the research life cycle to guide more objective and transparent assessments, identify mitigation strategies, and explore lower-risk alternative approaches.
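To make the idea of a structured scoring system concrete, here is a minimal sketch of how a weighted risk-benefit rubric of this general kind might be organized. The criterion names loosely mirror the broad categories listed above, but the 0–3 scale, the weights, the threshold, and the triage labels are all illustrative placeholders invented for this sketch; they are not the criteria or values used in the RAND tool.

```python
# Hypothetical risk-benefit scoring rubric (illustrative only).
# Criterion names echo the categories in the text; all weights,
# scales, and thresholds are invented for this sketch.

RISK_CRITERIA = {
    "human_health_impact": 2.0,      # severity of disease caused
    "transmissibility": 2.0,         # potential for spread
    "mcm_availability": 1.5,         # lack of existing countermeasures
    "societal_environmental": 1.0,   # broader consequences
}

BENEFIT_CRITERIA = {
    "new_mcms_or_technologies": 2.0,
    "disease_surveillance": 1.5,
    "public_or_agricultural_health": 1.5,
    "policy_and_scientific_insight": 1.0,
}


def weighted_score(scores: dict, weights: dict) -> float:
    """Combine reviewer scores (0 = none, 3 = high) into a weighted total,
    normalized to the 0-1 range so risk and benefit are comparable."""
    if set(scores) != set(weights):
        raise ValueError("scores must cover every criterion exactly once")
    total = sum(scores[c] * weights[c] for c in weights)
    max_total = sum(3 * w for w in weights.values())
    return total / max_total


def assess(risk_scores: dict, benefit_scores: dict):
    """Return normalized risk and benefit scores plus a coarse triage label."""
    risk = weighted_score(risk_scores, RISK_CRITERIA)
    benefit = weighted_score(benefit_scores, BENEFIT_CRITERIA)
    if risk >= 0.67:  # illustrative threshold for heightened review
        label = "escalate for heightened review"
    elif risk > benefit:
        label = "seek mitigation or lower-risk alternatives"
    else:
        label = "standard oversight"
    return risk, benefit, label
```

Normalizing both totals to the same 0–1 range is one way to make risk and benefit scores directly comparable, and the triage labels illustrate how such a score could feed review decisions at different points in the research life cycle; a real instrument would of course need validated criteria, calibrated weights, and multiple independent reviewers.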
Continued dialogue between policymakers and scientists, along with productive engagement with international partners, will be essential to ensure oversight keeps pace with technological advances while protecting public health. As the federal government considers the future of DGoF policy, continued research and the refinement of objective, reproducible assessment tools can strike the right balance: preventing misuse while still supporting the science that drives innovation in medicine and biotechnology.